http://arxiv.org/abs/2307.07553v1 (published 2023-07-14 18:00:05 UTC)
Parker Bounds on Monopoles with Arbitrary Charge from Galactic and Primordial Magnetic Fields
Takeshi Kobayashi, Daniele Perri
Primary category: hep-ph
Categories: hep-ph, astro-ph.CO, gr-qc, hep-th
§ INTRODUCTION

Magnetic monopoles have long been a topic of intense study since Dirac showed that their existence is consistent with quantum electrodynamics <cit.>. This discovery was followed by 't Hooft <cit.> and Polyakov <cit.>, who found classical soliton solutions that correspond to monopoles. Such solitonic monopoles can be produced during phase transitions in the early universe <cit.> and are an inevitable prediction of theories of grand unification. The magnetic charge of monopoles is constrained by the Dirac quantization condition, e g = 2 π n, n ∈ ℤ. For this reason, experimental searches over the years have mostly focused on monopoles with a charge g ∼ 2 π / e. However, recently a number of theoretical works have considered monopoles possessing a wide range of charges. Minicharged monopoles with g ≪ 2 π / e can be realized by having a physical Dirac string. Such configurations can arise, for instance, from a kinetic mixing between the Standard Model photon and a dark massive photon, in which case the monopole's charge under the visible magnetic fields is proportional to the mixing parameter <cit.>. Going to very large masses, magnetically charged black holes can be seen as giant monopoles with a small charge-to-mass ratio. The phenomenology of black holes with magnetic charge has recently been discussed in <cit.>. Such black holes are interesting as they cannot Hawking evaporate beyond extremality, leaving open the possibility that primordial black holes with very small masses survive until today. Both minicharged monopoles and magnetic black holes have also been considered as interesting dark matter candidates.

The relic abundance of magnetic monopoles is constrained by the requirement that they do not exceed the critical density of the universe <cit.>. However, even stronger constraints can be obtained from the magnetic fields present in the universe. The idea behind this is that magnetic fields lose energy by accelerating monopoles, hence requiring their survival imposes an upper bound on the monopole abundance. This was first proposed by Parker, who derived an upper bound on the monopole flux inside our Galaxy from the survival of the Galactic magnetic fields <cit.>. This so-called Parker bound was subsequently extended by considering a seed magnetic field of our Galaxy <cit.>. Intergalactic magnetic fields <cit.>, on the other hand, may not directly yield Parker-type bounds. This is because the accelerated monopoles do not effectively dissipate their kinetic energy in the intergalactic voids, and thus can end up returning the energy to the magnetic field. However, if the intergalactic fields have a primordial origin, as suggested by various studies (see e.g. <cit.> for a review), the monopoles could have shorted out the magnetic fields in the early universe by transferring the magnetic energy into the cosmic plasma. Parker-type bounds from primordial magnetic fields have thus been derived based on the fields' survival during the radiation-dominated epoch <cit.> and the reheating epoch <cit.>. We also note that strong magnetic fields can give rise to monopole pair production through the magnetic dual of the Schwinger effect <cit.>.
Lower bounds on the monopole mass have been obtained by analyzing this effect on the surface of magnetars <cit.>, in heavy-ion collisions at the LHC <cit.>, and in primordial magnetic fields <cit.>.

Direct searches for monopoles mainly rely on the detection of an induced electric current in superconducting rings <cit.>, or of the energy released into calorimeters from the interactions of a crossing monopole with the charged particles of the material <cit.>. However, it is extremely difficult to apply these methods to minicharged monopoles due to the sensitivity of the detectors and the selection algorithms used in the experiments. For magnetic black holes, their very large masses combined with the constraint from the critical density of the universe restrict their flux on Earth to be extremely tiny; hence they too are only minimally constrained by direct searches. We should note that a subclass of GUT monopoles can catalyze nucleon decay, and searches based on this process have been performed; however, whether monopole catalysis happens depends on the details of the model. Thus, the possibility of deriving indirect bounds from astrophysical observations is even more compelling for monopoles possessing charges that are very different from the Dirac charge.

In this work we present a comprehensive study of Parker-type bounds on the flux of magnetic monopoles with arbitrary charge, including minicharged monopoles and magnetically charged black holes. We derive the flux bounds based on the survival of galactic magnetic fields, seed magnetic fields, and primordial magnetic fields, clarifying the range of applicability of each bound along the way. We find that, depending on the type of monopole, the strongest bounds arise from different astrophysical systems. In particular, we show that while seed galactic magnetic fields impose tight bounds on monopoles with a Dirac charge, minicharged monopoles are strongly constrained by primordial magnetic fields, and magnetic black holes by comparison with the dark matter density. We also derive conditions for monopoles to be able to cluster with galaxies hosting magnetic fields, based on which we examine whether the various types of monopoles can provide viable dark matter candidates.

This paper is organized as follows. In Section <ref> we revisit bounds from galactic fields and extend them to monopoles with arbitrary charge. In Section <ref> we review the evolution of primordial magnetic fields in the presence of monopoles and derive bounds based on the survival of primordial fields. In Section <ref> we make a comparison of the different Parker bounds. In Section <ref> we investigate how the bounds apply to extremal magnetic black holes. We then conclude in Section <ref>. Appendix <ref> is dedicated to a study of monopole dynamics in galactic magnetic fields.

Throughout this work we use Heaviside-Lorentz units, with c = ħ = k_B = 1, and use M_pl to denote the reduced Planck mass (8 π G)^-1/2. We denote the monopole's mass by m, and the amplitude of the magnetic charge by g. The charge of a Dirac monopole is written as g_D = 2 π / e ≈ 21.

§ BOUNDS FROM GALACTIC MAGNETIC FIELDS

In this section we revisit the Parker bounds on the monopole flux from galactic magnetic fields <cit.> and seed fields <cit.>. We extend the previous computations to allow for the monopoles to carry arbitrary magnetic charge, and we also clarify the range of applicability of the bounds.
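The order-of-magnitude estimates in the following sections can be checked numerically once the unit conversions implicit in this setup are fixed. The short Python sketch below is purely illustrative and not code from this work; it collects standard Heaviside-Lorentz conversion factors and evaluates g_D = 2π/e, and the same constants are assumed in the later numerical sketches.

# Illustrative helper (not code from this work): standard unit conversions in
# Heaviside-Lorentz natural units (c = hbar = k_B = 1), everything in powers of GeV.
import math

KPC_IN_INV_GEV = 3.086e21 * 5.068e13   # 1 kpc = 3.086e21 cm, 1 cm = 5.068e13 GeV^-1
YR_IN_INV_GEV  = 3.156e7 * 1.519e24    # 1 yr  = 3.156e7 s,  1 s  = 1.519e24 GeV^-1
GAUSS_IN_GEV2  = 1.95e-20              # 1 G ~ 1.95e-20 GeV^2
M_PL_GEV       = 2.435e18              # reduced Planck mass (8 pi G)^(-1/2)

ALPHA   = 1.0 / 137.036
E_HL    = math.sqrt(4 * math.pi * ALPHA)  # electric charge e ~ 0.303 in these units
G_DIRAC = 2 * math.pi / E_HL              # Dirac charge g_D = 2 pi / e

if __name__ == "__main__":
    print(f"g_D = 2*pi/e ~ {G_DIRAC:.1f}")                       # ~ 21
    print(f"1e-15 G ~ {1e-15 * GAUSS_IN_GEV2 * 1e18:.1e} eV^2")  # ~ 2e-17 eV^2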
Let us consider a generic galaxy hosting magnetic fields that are amplified by dynamo action with a time scale τ_gen. After the dynamo saturates, the magnetic field is assumed to stay nearly constant, and we represent the time period between saturation and today by τ_sat. All cases with τ_sat being comparable to or smaller than τ_gen describe similar situations where the fields have been growing until very recent times. Hence, without loss of generality we impose τ_sat ≥ τ_gen.

Monopoles within a galaxy are accelerated by the magnetic fields. We model the fields such that they exist in a region of size R, which is further divided into cells of uniform field. The size of each cell, i.e. the magnetic field's coherence length, is denoted by l_c (< R). We further assume that the field strength B is the same in all cells, but the direction of the field is uncorrelated from one cell to the next. The average energy gain per monopole after it has passed through N uncorrelated cells is derived in Appendix <ref> as

Δ E_N ∼ (N/4) (g B l_c)^2 / [m (γ_i - 1)]  for N ≪ 8 [ m (γ_i - 1) / (g B l_c) ]^2 ,
        √(N/2) g B l_c                     for N ≫ 8 [ m (γ_i - 1) / (g B l_c) ]^2 ,

where γ_i is the initial Lorentz factor of the monopole upon entering the first cell.[It would be very interesting to study more realistic models where the directions of magnetic fields are not completely random; this should realize a more efficient acceleration of monopoles and thus yield stronger flux bounds. The analysis here also neglects the effect of the galaxy's gravitational potential, as well as the possibility that the monopoles spend ample time in galactic regions without magnetic fields.] In the first line the energy gain is smaller than the initial kinetic energy, i.e. Δ E_N < m (γ_i - 1), while in the second line the monopole has been sufficiently accelerated such that Δ E_N > m (γ_i - 1). If m (γ_i - 1) ≪ g B l_c, the energy gain is given by the second line from the first cell. For the first line of (<ref>) to describe well the average behavior of a set of monopoles, the product of the number of monopoles p and the number of cells N each monopole passes through needs to be large enough such that

p N ≫ 16 [ m (γ_i - 1) / (g B l_c) ]^2 .

The second line works for p ≫ 1.

§.§ Do monopoles cluster with a galaxy?

If monopoles are bound in a galaxy, they would be moving with the virial velocity v_vir (≪ 1). However, since the monopoles, on average, are constantly accelerated in galactic magnetic fields, they will eventually acquire a large enough velocity to escape from the galaxy. Considering that the escape velocity is not much larger than the virial velocity, let us estimate the time scale for the monopoles to escape from the galaxy as the time it takes for the monopoles' velocity to become larger than v_vir by a factor of order unity. If m v_vir^2 / 2 ≪ g B l_c, then the monopole is accelerated to the escape velocity within a single cell. Then it suffices to consider a uniform magnetic field, in which the velocity varies as Δv = g B Δt / m while the monopole is nonrelativistic. Hence we can estimate the escape time as τ_esc ∼ m v_vir / g B. On the other hand, if m v_vir^2 / 2 ≫ g B l_c, the monopoles pass through multiple cells before reaching the escape velocity. The two limiting expressions in (<ref>) represent the regimes where the monopole velocity has barely/significantly increased from its initial velocity. The escape velocity is acquired in between the two regimes, when the number of cells passed through is N_esc ∼ 2 [ m v_vir^2 / (g B l_c) ]^2.
Hence the escape time is τ_esc∼N_esc l_c/v_vir∼2 m^2 v_vir^3/g^2 B^2 l_c. The escape time for both cases m v_vir^2 / 2 ≪ g B l_c and m v_vir^2 / 2 ≫ g B l_c can collectively be written as τ_esc∼max.{ m v_vir/g B, 2 m^2 v_vir^3/g^2 B^2 l_c} ∼max.{ 10^7 yr(m/10^17 GeV) ( g/g_D)^-1 ( B/10^-6 G)^-1( v_vir/10^-3), 10^7 yr(m/10^17 GeV)^2 ( g/g_D)^-2 ( B/10^-6 G)^-2( l_c/1 kpc)^-1( v_vir/10^-3)^3 }. The escape time decreases as B is amplified, given that the other parameters do not change as much as B. Monopoles can thus stay clustered with a galaxy if the escape time is longer than the time elapsed since the magnetic field achieved its present-day strength B_0, i.e.,[The derivation of τ_esc uses the assumption of a constant B, which breaks down if τ_esc > τ_sat. In such cases the exact value of τ_esc can be modified from (<ref>), but we can still conclude that the monopoles can cluster with the galaxy. A similar discussion applies to the magnetic field dissipation time which we derive later.] . τ_esc|_B = B_0 > τ_sat. Let us assume hereafter that the time scale of dynamo is comparable to or larger than the time it takes for a particle with virial velocity to cross the magnetic field region of the galaxy, τ_gen≳R/v_vir∼ 10^7 yr( R/10 kpc) ( v_vir/10^-3)^-1. From this it follows that τ_sat > l_c / v_vir, indicating that monopoles that obtain the escape velocity within a single cell cannot stay clustered until today. Hence for monopoles to be clustered, m v_vir^2 / 2 ≫ g B l_c is a necessary condition. An even stronger condition is obtained by substituting (<ref>) into (<ref>), which yields a lower bound on the mass of clustered monopoles as m ≳ 10^18 GeV( g/g_D) ( B_0/10^-6 G) ( l_c/1 kpc)^1/2(τ_sat/10^10 yr)^1/2( v_vir/10^-3)^-3/2. Considering for instance the Milky Way, for which the typical parameters of the magnetic field and virial velocity are shown on the right-hand side as the reference values <cit.>, monopoles with a Dirac charge can be clustered today only if their mass is larger than 10^18 GeV.[A similar bound can be obtained by requiring the gravitational acceleration of a monopole with virial velocity on a circular orbit at the radius of the galaxy (v_vir^2 / r_g), to be larger than the magnetic acceleration (g B_0 / m). This yields m ≳ 10^18 GeV( g/g_D) ( B_0/10^-6 G) (r_g/10 kpc) ( v_vir/10^-3)^-2. ] Producing such ultraheavy monopoles in the postinflation universe presents a challenge for monopoles with charge g ≥ g_D to serve as dark matter. This is no longer the case for minicharged (g ≪ g_D) monopoles, which can cluster with smaller masses. §.§ Backreaction from monopoles We now derive bounds on the flux of monopoles inside galaxies by studying the backreaction from the monopoles on galactic magnetic fields. §.§.§ Unclustered monopoles We start by considering monopoles that are not trapped inside a galaxy but pass through it. The incident flux of such unclustered monopoles on a galaxy is equivalent to the flux inside the galaxy, from monopole number conservation.[The velocity and number density upon entering the galaxy can each be different from those inside the galaxy, however their product remains constant. Here we do not consider initially unclustered monopoles becoming clustered, or vice versa. We also neglect monopole-antimonopole annihilation.] Writing the flux per area per solid angle per time as F, and modeling the magnetic field region of the galaxy by a sphere with radius R, then the number of monopoles passing through the magnetic region per time is 4 π^2 R^2 F. 
(The extra power of π is from integrating over the solid angle on one side of the surface of the magnetic region.) Each monopole crosses roughly N = R / l_c cells as it traverses the magnetic region, and on average gains energy Δ E_{N = R/l_c}. In turn, the magnetic field loses energy at a rate

Ė_B ∼ - 4 π^2 R^2 F Δ E_{N = R/l_c}.

Comparing this with the total magnetic field energy, E_B = (4 π R^3 / 3) (B^2 / 2), the time scale for the magnetic field to be dissipated is computed as

τ_dis = E_B / Ė_B ∼ max.{ 2 m (γ_i - 1) / (3 π g^2 F l_c) , B / (3 √2 π g F) √(R/l_c) },

where we substituted (<ref>) into Δ E_{N = R/l_c}. Here γ_i is understood as the Lorentz factor of the monopoles with respect to the galaxy, upon galaxy entry. The backreaction from the monopoles has little effect on the magnetic field evolution if the field amplification by the dynamo proceeds at a faster rate, τ_dis > τ_gen. This condition should hold throughout the galactic history for negligible backreaction,[This guarantees negligible backreaction even after the dynamo saturates, if τ_gen also sets the time scale for the magnetic field's deviations from the saturation value to decay. However, since the field amplification lives on a finite supply of energy of the galaxy, one may instead require τ_dis > τ_sat, giving a stronger bound.] and it translates into an upper bound on the monopole flux,

F ≲ max.{ 10^-16 cm^-2 sec^-1 sr^-1 (m/10^17 GeV) ( g/g_D)^-2 ( l_c/1 kpc)^-1 (τ_gen/10^8 yr)^-1 ( (γ_i - 1) /10^-6),
          10^-16 cm^-2 sec^-1 sr^-1 ( g/g_D)^-1 ( B/10^-6 G) ( R/l_c)^1/2 (τ_gen/10^8 yr)^-1 }.

The first (second) line sets the bound when m is larger (smaller) than the threshold value,

m̂ ∼ 10^17 GeV ( g/g_D) ( B/10^-6 G) ( l_c/1 kpc) (R/l_c)^1/2 ( (γ_i - 1) /10^-6)^-1.

Monopoles with masses smaller than this exit the galaxy with a velocity much larger than their incident velocity v_i. By using the expression (<ref>) for Δ E_N in the above derivation, it was implicitly assumed that the monopoles each pass through at least one cell within the dissipation time τ_dis. Moreover, for small-mass monopoles which gain energy as Δ E_N ∝ √N (cf. second line of (<ref>)), we assumed that the time it takes for the monopoles to cross the entire magnetic region is shorter than τ_dis. These two assumptions are automatically satisfied when the condition (<ref>) holds along with

τ_gen ≳ R/v_i ∼ 10^7 yr ( R/10 kpc) ( v_i/10^-3)^-1.

In other words, the flux bound (<ref>) applies without modification under (<ref>). It should also be noted that for the first line of (<ref>) to describe well the mean behavior of monopoles, the monopole number needs to be large enough to satisfy (<ref>). The total number of unclustered monopoles passing through the magnetic region before the field is dissipated is p = 4 π^2 R^2 F τ_dis. Using also the first term in the far right-hand side of (<ref>) for τ_dis, and N = R / l_c for the number of cells each monopole crosses, then (<ref>) yields an upper bound on the monopole mass,

m ≲ 10^63 GeV (B/10^-6 G)^2 ( R/10 kpc)^3 ( (γ_i - 1) /10^-6)^-1.

The condition (<ref>) is not necessary when τ_dis is given by the second term in (<ref>); however, even in this case the monopole number p ∼ B R^5/2 / (g l_c^1/2) should be larger than unity for the derivation of the flux bound to be valid. This requires

B ≳ 10^-52 G ( g/g_D) ( l_c/1 kpc)^1/2 ( R/10 kpc)^-5/2.

This condition is equivalent to requiring that m̂ given in (<ref>) is smaller than the upper mass limit of (<ref>).
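As a rough cross-check of the clustering condition and of the flux bound above, the following Python sketch (illustrative only, not code from this work) plugs in the Milky Way reference parameters quoted in the text:

# Illustrative numerical check of the galactic Parker bound and the clustering
# mass limit, for the Milky Way reference parameters quoted in the text.
# Natural units; not code from this work.
import math

GAUSS, KPC, YR = 1.95e-20, 1.564e35, 4.79e31     # 1 G in GeV^2; kpc, yr in GeV^-1
FLUX_TO_CGS = (5.068e13)**2 * 1.519e24           # GeV^3 -> cm^-2 s^-1
g_D = 2 * math.pi / 0.303

g, B = g_D, 1e-6 * GAUSS                         # Dirac charge, present Galactic field
l_c, R = 1 * KPC, 10 * KPC
tau_gen, tau_sat = 1e8 * YR, 1e10 * YR
v_vir, gamma_i_minus_1 = 1e-3, 1e-6
m = 1e17                                         # GeV

# lower mass limit for staying clustered: tau_esc evaluated at B_0 exceeds tau_sat
m_cl = g * B * math.sqrt(l_c * tau_sat / 2.0) / v_vir**1.5
print(f"clustering mass limit: m > {m_cl:.1e} GeV")                         # ~ 1e18 GeV

# the two branches of the flux bound, from tau_dis > tau_gen
F1 = 2 * m * gamma_i_minus_1 / (3 * math.pi * g**2 * l_c * tau_gen)
F2 = B * math.sqrt(R / l_c) / (3 * math.sqrt(2) * math.pi * g * tau_gen)
print(f"flux bound: F < {max(F1, F2) * FLUX_TO_CGS:.1e} cm^-2 s^-1 sr^-1")  # ~ 1e-16

For a Dirac charge this reproduces the orders of magnitude quoted above, namely m ≳ 10^18 GeV for clustering and F ≲ 10^-16 cm^-2 sec^-1 sr^-1.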
The conditions (<ref>) and (<ref>) seem rather weak; however, they can become important when considering systems with extremely weak B, or when constraining extremely massive monopoles such as magnetic black holes.

The magnetic field energy taken away by the monopoles can, in principle, later be returned to the field. Then τ_dis would only correspond to the half-period of the energy oscillation between the magnetic field and monopoles, and the flux bound would be invalidated. However, it was pointed out in <cit.> that for monopoles with charge g ∼ g_D, the galactic magnetic fields cannot be maintained in this way, since the oscillations are subject to Landau damping, and also because the oscillations would give features of the field that do not match with observations. It would be important to analyze whether Landau damping is effective with minicharges, g ≪ g_D. We leave this for future work. We also note that unclustered monopoles may fly away from the galaxy before returning the energy to the field.

§.§.§ Clustered monopoles

Monopoles that are bound in a galaxy move with the virial velocity v_vir, and hence each monopole crosses approximately v_vir / l_c cells per unit time. The energy the monopoles steal from the magnetic field per time per volume is thus ρ̇_B ∼ - n Δ E_{N = v_vir / l_c}, where n is the number density of clustered monopoles. While a monopole is clustered, its energy follows Δ E_N ∝ N as shown in the first line of (<ref>), with γ_i - 1 ≃ v_vir^2 / 2. Taking the ratio with the magnetic energy density ρ_B = B^2 / 2, and noting that the flux is written as F = n v_vir / 4 π, the dissipation time scale is obtained as

τ_dis = ρ_B / ρ̇_B ∼ m v_vir^2 / (4 π g^2 F l_c).

This matches up to an order-unity factor with the first expression in (<ref>) for unclustered monopoles,[This is because in both (<ref>) and the first expression of (<ref>), monopoles gain energy as Δ E_N ∝ N, and the number of cells crossed per unit time by all the monopoles in the magnetic region is ∼ 4 π^2 F R^3 / l_c.] after the replacement v_i → v_vir. Hence the requirement of negligible backreaction on the magnetic field, τ_dis > τ_gen, yields a flux bound that is similar to the first line of (<ref>), but with v_vir^2/2 instead of γ_i - 1. The derivation assumes that the monopoles pass through at least one cell before their backreaction becomes relevant, i.e. τ_dis > l_c / v_vir. This is automatically satisfied under τ_dis > τ_gen and the condition (<ref>). From (<ref>) it also follows that the lower mass limit (<ref>) for clustered monopoles is larger than the threshold mass (<ref>) where the flux bound for unclustered monopoles switches its behavior, if v_i = v_vir. The flux bound also requires a monopole number large enough to satisfy (<ref>), which yields a mass limit similar to (<ref>).

Here we ignored the possibility of the monopoles escaping from the galaxy before dissipating the magnetic field, while in Section <ref> we ignored the monopoles' backreaction on the magnetic field. By combining the discussions, however, we can say that clustered monopoles need to satisfy both the flux bound and the mass bound (<ref>). Otherwise, either the galactic magnetic field is dissipated, the monopoles are ejected from the galaxy, or both.[We may guess what happens by comparing the energy required to eject all monopoles from the galaxy per volume, ρ_ej ∼ n m v_vir^2 / 2, and the magnetic energy density today, ρ_B = B_0^2 / 2. The former is larger if m F ≳ B_0^2 / 4 π v_vir.
This threshold matches with the value of m F where the mass lower limit (<ref>) and flux upper limit (<ref>) becomes equal, up to a factor of ∼τ_sat / τ_gen.] §.§ Summary of bounds from galactic magnetic fields We have seen that the bounds on the flux of clustered and unclustered monopoles inside galaxies are collectively described by (<ref>), given that the dynamo time scale, monopole mass, and magnetic field respectively satisfy (<ref>), (<ref>), and (<ref>). For unclustered monopoles γ_i in these expressions denotes the initial Lorentz factor with respect to the galaxy, while for clustered monopoles it is given by the virial velocity as γ_i - 1 = v_vir^2 / 2. Clustered monopoles further need to satisfy the lower bound on the mass (<ref>) in order to stay clustered until today. The flux bound (<ref>) at large m increases with m whereas it is independent of B, and vice versa at small m. Considering present-day magnetic fields, whose amplitude in spiral galaxies is typically of B_0 ∼ 10^-6 G, one reproduces the results of <cit.> (see also <cit.>). However, the bound applies throughout the history of a galaxy, and thus the bound at low masses can be improved by studying galaxies in the past when their magnetic fields were weaker. Strong bounds are obtained from the initial seed field for galactic dynamo <cit.>,[The results in <cit.> and <cit.> are slightly different at the high mass end where the bound is independent of B; this is because the two works use different values for the other parameters such as l_c, and also different rounding methods.] although there is a huge uncertainty in the seed field ranging typically between 10^-30 G≲ B ≲ 10^-10 G <cit.>. We also note that increasing l_c and/or g improves the flux bound, as well as the lower mass limit for clustered monopoles. In Figure <ref> we show the flux upper bound (<ref>) as a function of the monopole mass, with the magnetic charge varied as g = g_D (red), 10^-3 g_D (purple), 10^-6 g_D (blue). The solid lines denote bounds from the magnetic field in the present Milky Way, taken as B = 10^-6 G. The dashed lines show how the bound improves by considering a seed field of B = 10^-11 G. The dotted vertical lines represent the lower mass limit (<ref>) of clustered monopoles in the Milky Way. Here the other parameters are taken as l_c = 1 kpc, R = 10 kpc, τ_gen = 10^8 yr, τ_sat = 10^10 yr, and γ_i - 1 = 10^-6. In the plot we also show bounds from the requirement that the density of monopoles ρ_M does not exceed the dark matter density ρ_DM. Using ρ_M = m n for nonrelativistic monopoles with n being the number density, the requirement translates into an upper bound on the monopole flux F = n v_i / 4 π as, F ≤ρ_DM v_i /4 π m ≈ 3 × 10^-17 cm^-2sec^-1sr^-1(m/10^17 GeV)^-1( v_i/10^-3) ( ρ_DM/1.3 × 10^-6 GeV cm^-3). The flux of unclustered monopoles is bound by setting the dark matter density to the average value in the universe, ρ_DM≈ 1.3 × 10^-6 GeV cm^-3 <cit.>; this is shown in the plot as the gray solid line. On the other hand, the abundance of clustered monopoles should be compared to the local dark matter density in galaxies; the gray dotted line shows the bound using the value in our Milky Way, ρ_DM≈ 0.4 GeV cm^-3 <cit.>. One sees in the plot that for clustered monopoles, the bound from the local dark matter density (which scales as ∝ m^-1) is stronger than that from the survival of galactic fields (∝ m) for most of the mass range where the monopoles can be clustered. 
This can be shown explicitly by comparing the mass m_eq where the two upper bounds ((<ref>) and the first line of (<ref>)) become equal, to the lower limit on the mass m_cl for clustered monopoles (cf. (<ref>)); their ratio is m_eq/m_cl∼ 10 ( B_0/10^-6 G)^-1(τ_gen/10^8 yr)^1/2(τ_sat/10^10 yr)^-1/2( v_vir/10^-3) ( ρ_DM/0.4 GeV cm^-3)^1/2. This shows that m_eq and m_cl are not too different in the Milky Way whose magnetic field and dark matter parameters are typically given by the reference values in the right-hand side. This means that if monopoles can cluster with our Galaxy and their density does not exceed that of dark matter, then they almost automatically satisfy the Parker bound from Galactic fields.[It would be interesting to understand whether m_eq∼ m_cl holds for generic galaxies hosting magnetic fields, by studying the relation between the dark matter density and the dynamo action.] In the literature the Galactic Parker bound has often been analyzed for constraining monopoles as a dark matter candidate; however most such studies focus on parameter regions where the monopoles actually cannot cluster with our Galaxy and hence obviously cannot serve as dark matter. § BOUNDS FROM PRIMORDIAL MAGNETIC FIELDS In this section we extend the computations for the bounds from primordial magnetic fields derived in <cit.> to allow for the monopoles to carry arbitrary magnetic charge. Magnetic monopoles are accelerated by the primordial magnetic fields and the fields consequently lose their energy. If the interaction between the monopoles and the charged particles of the primordial plasma is sufficiently strong, the energy of the primordial magnetic fields is eventually transferred to the primordial plasma. From the requirement that the primordial magnetic fields survive until today, we get a bound on the abundance of monopoles. On the other hand, if the interaction between the monopoles and the primordial plasma is weak, then the energy oscillates between the monopoles and the magnetic fields. This modifies the time evolution of the magnetic fields <cit.>, however it does not lead to a dissipation of the fields. Thus a bound on monopoles is obtained if their interaction with the primordial plasma is sufficiently strong at least for some period in the early universe. We first describe the evolution of the primordial magnetic fields in the presence of monopoles with arbitrary magnetic charge from the end of magnetogenesis to the epoch of e^+ e^- annihilation, when the number of charged particles in the universe becomes drastically reduced. Then, we derive bounds on the monopole abundance from the survival of the primordial magnetic fields. We consider a Friedmann-Robertson-Walker (FRW) background spacetime ds^2 = dt^2 - a^2 d x^2. In our analysis we suppose that the process of magnetogenesis terminates at the end of inflation or during the reheating phase. Thus, we study the dynamics of the primordial magnetic fields during the reheating epoch when the energy density of the universe is dominated by an oscillating inflaton field, and the subsequent epoch of radiation domination. We define T_dom as the temperature at the end of reheating when the universe becomes dominated by radiation (the subscript “dom” denotes quantities computed at this time). We use the subscript “end” to denote quantities computed at the end of magnetogenesis.[Notice that in <cit.> we referred to the time at the end of magnetogenesis as t_i, instead of t_end. 
Moreover, we used n to denote the number density of monopole-antimonopole pairs, while in this paper we will use it for the total number density of monopoles and antimonopoles, i.e. n → n/2.]

In the absence of any source, the energy density of primordial magnetic fields ρ_B redshifts simply as radiation, ρ_B ∝ a^-4. The magnetic field amplitude thus redshifts as B ∝ a^-2, since ρ_B = B^2/2. The existence of intergalactic magnetic fields with strengths B_0 ≳ 10^-15 G (we use the subscript "0" to denote quantities in the present universe) coherent on Mpc scales or larger has been suggested by gamma ray observations <cit.>. Such scales have always been outside the Hubble horizon during the period from the end of inflation to e^+ e^- annihilation. Thus, since the distance crossed by the monopoles during the period of interest is smaller than the correlation length of the fields, we treat the fields as effectively homogeneous. In this paper we use 10^-15 G ≃ 2 · 10^-17 eV^2 as the reference value for the intergalactic magnetic field strength today.

Under the above assumptions, the evolution of the energy density of primordial magnetic fields in the presence of monopoles accelerated by the magnetic fields is described by the equation

ρ̇_B/ρ_B = - Π_red - Π_acc ,

where an overdot denotes a derivative with respect to physical time t. Here Π_red and Π_acc are the dissipation rates of the magnetic field energy due to redshifting and monopole acceleration:

Π_red = 4 H, Π_acc = 2 g n v / B ,

where H = ȧ/a is the Hubble rate, n is the physical number density of monopoles, and v is the velocity of the monopoles. We neglect the production of monopole pairs by the magnetic fields through the Schwinger effect <cit.>. Thus, we assume that the comoving number density is constant in time,[See <cit.> for detailed discussions on monopoles produced by the primordial magnetic field itself.] i.e. n ∝ a^-3. The ratio Π_acc / Π_red can then be written as

Π_acc/Π_red = g n v / (2 B H) .

We require the condition Π_acc / Π_red ≪ 1 to hold during the period from the end of magnetogenesis, t = t_end, to e^+ e^- annihilation. This condition corresponds to having negligible backreaction on the primordial magnetic fields from the monopole acceleration. In order to rewrite such a condition as a bound on the monopole abundance, in the next section we study the evolution of the monopole velocity in the early universe.

§.§ Monopole dynamics in primordial magnetic fields

We now describe the motion of monopoles accelerated by a homogeneous magnetic field in the presence of a primordial plasma. For the analysis we suppose the plasma to be at rest in the coordinate system (t, x^i). We also ignore the components of the monopole velocity perpendicular to the direction of the magnetic field, because any initial velocity decays away. Further ignoring random thermal velocities, the motion of monopoles with magnetic charge g and mass m can be described by the equation <cit.>

m d/dt (γ v) = g B - ( f_p + m H γ) v .

The term f_p is proportional to the cross section of the interaction between the monopoles and the charged particles in the plasma. When the particles in the plasma are relativistic and in thermal equilibrium, f_p can be expressed as <cit.>

f_p ∼ (e^2 g^2 𝒩_c / 16 π^2) T^2 .

Here T is the temperature of the plasma and 𝒩_c is the effective number of relativistic and electrically charged degrees of freedom in thermal equilibrium, including also the contributions of the spin and the charge of the scatterers. In Eq.
(<ref>), the expansion of the universe can be seen as an additional frictional term proportional to the Hubble rate. In <cit.> the solution of the equation of motion has been studied for magnetic monopoles with Dirac charge g_D = 2 π /e. Here we are interested in generalizing the results to generic magnetic charges. Depending on the parameters one of the two frictional terms becomes dominant and eventually the monopoles achieve a terminal velocity. If the Hubble friction is the dominant term, i.e. mH γ≫ f_p, the terminal velocity is set approximately by: ( γ v )_H∼g B/m H . The expression is directly proportional to the magnetic charge of the monopoles. Thus, smaller magnetic charge corresponds to smaller v_H. On the other hand, when the drag force by the interaction with the plasma is dominant, i.e. mH γ≪ f_p, and the monopoles move at nonrelativistic velocities, the terminal velocity corresponds to: v_p = gB/f_p∼16 π^2 B/e^2 g 𝒩_c T^2 . Since the interaction rate with the particles of the plasma is proportional to g^2 and the monopole acceleration by the magnetic field to g, the velocity v_p scales as v_p∝ g^-1. Due to the γ factor in front of the Hubble friction term, for relativistic monopoles (γ≫ 1) the drag force due to the expansion of the universe tends to become dominant. In this case the terminal velocity of the monopoles corresponds to the value of v_H shown in Eq. (<ref>). However, in the case when the monopoles move at relativistic velocities and mH γ≪ f_p, the monopole velocity rapidly decreases to nonrelativistic values and eventually starts to follow the terminal velocity v_p <cit.>. In Figure <ref> we plot the time evolution of γ v (Figures <ref> and <ref>) and of the ratio Π_acc / Π_red normalized by the monopole number density today (Figures <ref> and <ref>). The time evolution of γ v is obtained by numerically solving the equation of motion Eq. (<ref>) with an initial condition of v_end = 0. The time evolution of Π_acc / Π_red is obtained by substituting into Eq. (<ref>) the numerical solution of Eq. (<ref>). For the plots we assume H_end = 10^11 GeV, H_dom = 10^-6 GeV (i.e. T_dom∼ 10^6 GeV), and fix the number of relativistic (charged) degrees of freedom as g_* = 𝒩_c = 100 throughout the displayed epochs. The magnetic field strength is taken such that it approaches a present-day strength of B_0 = 10^-15 G. In Figures <ref> and <ref>, the results are shown for a magnetic charge g = 10^-3 g_D and for different values of the monopole mass. The value of the magnetic charge has been chosen in order to cover a wide range of possible behaviors of the monopole velocity which we will explain in the following sections. Each value of the mass is associated to a differently colored solid curve; from bottom to top, red: m = 10^19 GeV, orange: m = 10^16 GeV, green: m = 10^13 GeV, blue: m = 10^10 GeV, purple: m = 10^7 GeV. The purple curve disappears when it is behind the blue curve. In Figure <ref>, the dashed gray line shows γ v with v substituted by v_p given in Eq. (<ref>); this corresponds to the terminal velocity set by the plasma when v_p≪ 1. In Figures <ref> and <ref>, the results are shown for a mass m = 10^11 GeV and for different values of the magnetic charge. As in the previous case, the value of the mass has been chosen in order to show the various behaviors of the monopole velocity. 
Each value of the charge is associated to a differently colored solid curve; from top to bottom, red: g = g_D, orange: g = 10^-3 g_D, green: g = 10^-6 g_D, blue: g = 10^-9 g_D, purple: g = 10^-12 g_D. In Figure <ref> the orange curve disappears when it is behind the red curve. In Figure <ref>, the dashed curves show (γ v)_p for different charges, indicated by the colors. The blue and purple dashed curves are not shown in the plot because a terminal velocity set by the friction with the plasma cannot be defined in those cases, being v_p≫ 1. In the figures the monopole velocity follows v_p in Eq. (<ref>) shown as the dashed lines, otherwise it follows v_H in Eq. (<ref>) (except for at the left edges of the plots where H ∼ H_end). This indicates that one of the two terminal velocities always gives an attractor solution for the monopole velocity. One also sees from the figures that the velocity can make a transition from v_H to v_p as the universe expands, but not vice versa. The transition can be smooth as for the blue curve in Figure <ref>, but can also take the form of a sudden jump as for the purple curve in Figure <ref>. §.§ Radiation-dominated epoch We start by analyzing the backreaction of monopoles on primordial magnetic fields during the radiation-dominated epoch. Neglecting the time dependence of g_*(s) and 𝒩_c, then during radiation domination the Hubble rate redshifts as H ∝ a^-2, and the temperature of the plasma as T ∝ a^-1. These, together with B ∝ a^-2, render both v_p and v_H constant in time. The monopoles during radiation domination thus move with a constant velocity. Whether the monopole velocity during radiation domination follows v_p or v_H depends on the monopole properties and the magnetic field strength. This is illustrated in Figure <ref> in the m-g plane, where we took the field strength such that it becomes B_0 = 10^-15 G today. The numbers of relativistic (charged) degrees of freedom are fixed to g_* = 𝒩_c = 10.75. The purple curve shows where the plasma and Hubble frictions in the monopole's equation of motion Eq. (<ref>) are comparable, i.e. f_p = m H γ_H. In the red region the plasma friction is dominant (f_p≫ m H γ_H) and the monopole velocity is given by v_p. On the other hand, in the blue region the Hubble friction is dominant (f_p≪ m H γ_H) and the velocity is given by v_H. The balance condition f_p = m H γ_H is rewritten using Eqs. (<ref>) and (<ref>) as (e^2 g^2 𝒩_c T^2/16 π^2)^2 ∼ (g B)^2 + (m H)^2 . This can be solved for the magnetic charge, and the solution g = g_min is approximated by: g_min∼16 π^2/e^2 𝒩_c B/T^2 for m ≪16 π^2/e^2 𝒩_cB^2/H T^2, ( 16 π^2/e^2 𝒩_c m H/T^2)^1/2 for m ≫16 π^2/e^2 𝒩_cB^2/H T^2. For g > g_min the monopole velocity approaches v_p, and for g < g_min it approaches v_H. In other words, g_min sets the minimum charge for a monopole during radiation domination to lose its kinetic energy mainly through its interaction with the plasma. The expressions of Eq. (<ref>) describe the two asymptotic behaviors of the purple curve in the figure. In the first line the balance condition is realized for relativistic velocities ((γ v)_H≫ 1), while the second line is for nonrelativistic velocities ((γ v)_H≪ 1). In the figure, the dashed gray line shows where (γ v)_H = 1, with (γ v)_H < 1 on its right side. One actually sees that the dashed gray and purple lines intersect at the point where the purple line bends. We also note that the first line of Eq. 
(<ref>) corresponds to the charge that gives v_p = 1; this is depicted in the figure by the dashed black line. In the region below this line the expression Eq. (<ref>) yields v_p > 1, indicating that the plasma friction does not yield a terminal velocity for monopoles.

The expressions in Eq. (<ref>) are time independent during radiation domination, up to mild variations due to the change in the numbers of relativistic degrees of freedom. The cosmic temperature and the Hubble rate during the radiation-dominated epoch are related to the redshift as

T ∼ 1 MeV ( 10^-10 / (a/a_0) ), H ∼ 10^-15 eV ( 10^-10 / (a/a_0) )^2,

where we ignored their mild dependence on g_*(s). For the number of relativistic charged degrees of freedom, hereafter we use 𝒩_c ∼ 10 as a reference value. Combining these with the magnetic scaling B = B_0 (a_0/a)^2, one can rewrite Eq. (<ref>) as

g_min ∼ 10^-8 g_D ( B_0/10^-15 G)        for m ≪ 10^2 GeV ( B_0/10^-15 G)^2,
        10^-1 g_D ( m/10^17 GeV)^1/2     for m ≫ 10^2 GeV ( B_0/10^-15 G)^2.

The terminal velocities of Eqs. (<ref>) and (<ref>) can also be rewritten as

v_p ∼ 10^-8 ( g/g_D)^-1 ( B_0/10^-15 G),
( γ v )_H ∼ 10^-7 ( m/10^17 GeV)^-1 ( g/g_D) ( B_0/10^-15 G) .

The monopole velocity during radiation domination shown in the right parts of Figure <ref> can be understood from Figure <ref>, and by noting that the terminal velocities scale with the monopole mass and charge as v_p ∝ g^-1, (γ v)_H ∝ g m^-1. The variation of the velocities in Figure <ref> is given by moving horizontally in Figure <ref> along g = 10^-3 g_D; for small m the velocity is set to v_p, which is independent of m (cf. purple and blue lines in Figure <ref>), while for large m the velocity is v_H, which decreases with m (cf. green, orange, and red lines). On the other hand, Figure <ref> corresponds to moving vertically in Figure <ref> along m = 10^11 GeV; for small g the velocity v_H increases with g (cf. purple, blue, and green lines in Figure <ref>), while for large g the velocity v_p decreases with g (cf. orange and red lines).

With constant monopole velocities, the dissipation rate ratio grows as Π_acc / Π_red ∝ a. Thus, requiring negligible monopole backreaction while there are abundant charged particles in the universe amounts to demanding that this ratio is smaller than unity at e^+ e^- annihilation, i.e.,

( Π_acc / Π_red )_{T ∼ 1 MeV} < 1.

This also means that the bounds we derive in this subsection apply as long as the primordial magnetic fields have been generated before e^+ e^- annihilation. In the case of g > g_min, the ratio Π_acc / Π_red is evaluated by substituting v = v_p into Eq. (<ref>). Hence the condition in Eq. (<ref>) can be rewritten by using n = n_0 (a_0/a)^3 as an upper bound on the present-day monopole number density,

n_0 ≲ 10^-21 cm^-3.

We express this condition also in terms of the present-day monopole flux F = n_0 v_0 / 4 π:

F ≲ 10^-14 cm^-2 sr^-1 s^-1 ( v_0/10^-3) .

The bound is mainly determined by the temperature and redshift at e^+ e^- annihilation, and thus is independent of the amplitude of the magnetic fields and of the mass and the charge of the monopoles. However, the red region in Figure <ref> where the bound can be applied (g > g_min) becomes smaller for stronger magnetic fields. In the case of g < g_min, the monopoles do not efficiently transfer the magnetic energy to the plasma. Hence their presence does not lead to the dissipation of primordial magnetic fields, but can only induce oscillations of the fields and affect their redshift evolution.
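Before treating the g < g_min case in detail, it may help to put numbers to both regimes. The sketch below (illustrative only, not code from this work) evaluates the terminal velocities and imposes Π_acc/Π_red < 1 at e^+ e^- annihilation, using the approximate relations quoted above (T ≈ 1 MeV and H ≈ 10^-15 eV at a/a_0 = 10^-10, with 𝒩_c = 10); the branch choice between plasma and Hubble friction uses the crude nonrelativistic criterion f_p versus m H.

# Order-of-magnitude sketch (illustrative only, not code from this work):
# impose Pi_acc/Pi_red < 1 at e+ e- annihilation during radiation domination,
# for a monopole of mass m and charge g, using the approximate relations above.
import math

GAUSS = 1.95e-20                  # 1 G in GeV^2
CM3   = (5.068e13)**3             # GeV^3 -> cm^-3
g_D   = 2 * math.pi / 0.303

a_ratio = 1e-10                   # a/a_0 at e+ e- annihilation (text convention)
T, H    = 1e-3, 1e-24             # GeV; H ~ 10^-15 eV
Nc, e2  = 10, 4 * math.pi / 137.0
B       = 1e-15 * GAUSS / a_ratio**2   # B_0 = 1e-15 G redshifted back as a^-2

m = 1e17                          # GeV
for g in (g_D, 1e-3 * g_D, 1e-6 * g_D):
    f_p  = e2 * g**2 * Nc * T**2 / (16 * math.pi**2)
    v_p  = g * B / f_p                       # plasma-limited terminal velocity
    gv_H = g * B / (m * H)                   # Hubble-limited terminal gamma*v
    plasma = f_p > m * H                     # crude (nonrelativistic) criterion, g > g_min
    v = v_p if plasma else gv_H
    n0_max = 2 * B * H / (g * v) * a_ratio**3 * CM3   # present-day number density bound
    tag = "dissipation bound (g > g_min)" if plasma else "redshift condition (g < g_min)"
    print(f"g = {g/g_D:g} g_D: v_p ~ {v_p:.1e}, (gv)_H ~ {gv_H:.1e}, "
          f"n_0 < {n0_max:.1e} cm^-3  [{tag}]")

For m = 10^17 GeV and g = g_D this lands at n_0 ≲ 10^-21 cm^-3, as quoted above; for the minicharged cases the output instead corresponds to the redshift-evolution condition derived next.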
In order for the fields' redshifting to be unaltered by monopoles, the condition Π_acc / Π_red < 1 should hold all the way until today. However, in order to connect with the bounds we derived for g > g_min, here let us only require the redshifting to be unaltered at temperatures T > 1 MeV and impose Eq. (<ref>). We further limit our analysis to nonrelativistic monopoles, i.e. (γ v)_H≲ 1, which from Eq. (<ref>) is equivalent to considering masses of: m ≳ 10^10 GeV(g/g_D) (B_0/10^-15 G). (We are thus focusing on the region in Figure <ref> on the right of both the purple and gray dashed lines.) Then the dissipation rate ratio can be evaluated by substituting v = v_H∼ gB / m H into Eq. (<ref>), and the condition of Eq. (<ref>) translates into: n_0 ≲ 10^-22 cm^-3( m/10^17 GeV) ( g/g_D)^-2 . The condition in terms of the present-day monopole flux is: F ≲ 10^-16 cm^-2sr^-1s^-1( m/10^17 GeV) ( g/g_D)^-2( v_0/10^-3). §.§ Reheating epoch During reheating the universe is effectively matter-dominated, and the Hubble rate redshifts as H ∝ a^-3/2. The final results of this section depend only mildly on the numbers of relativistic degrees of freedom. Thus, for simplicity we ignore their time dependences and use g_*(s)∼𝒩_c ∼ 100 in the following analyses. We also assume the plasma particles to be in thermal equilibrium during reheating. Under these assumptions, the temperature of the primordial plasma redshifts as T ∝ a^-3/8 <cit.>. Consequently, the plasma-induced terminal velocity scales as v_p∝ a^-5/4, and the Hubble-induced terminal velocity scales as (γ v)_H∝ a^-1/2; the redshifting of the former being faster is related to the fact that the monopole velocity during reheating can make a transition from v_H to v_p but not vice versa <cit.>, as was shown in Figure <ref>. Combining this with the discussion in the previous subsection, we see that monopoles make the transition to the v_p-branch before reheating completes if the charge satisfies g > g_min, with g_min given in Eq. (<ref>). For the case of g > g_min, we define t_* as the time when the plasma friction takes over the Hubble friction, i.e. f_p * = m H_* γ_H * (the subscript “*” stands for quantities computed at time t_*). For t < t_* the monopoles move at the terminal velocity v_H, while for t > t_* the monopoles follow v_p. The balance of the frictional forces is rewritten as Eq. (<ref>), which can be transformed into an equation for H_* by considering that f_p∝ H^1/2 and B ∝ H^4/3 during reheating as: f_p dom^2 ∼ (g B_dom)^2 ( H_*/H_dom)^5/3 + (m H_dom)^2 ( H_*/H_dom). Considering that the right-hand side is dominated by one of the terms, this equation can be solved approximately as H_* ∼ H_dom( f_p/g B)_dom^6/5 for m ≪m̅, H_dom( f_p/m H)_dom^2 for m ≫m̅. Here m̅ is defined as m̅ = ( g^3 B^3 f_p^2/H^5)_dom^1/5, however we note that the combination B^3 f_p^2 / H^5 is actually time-independent during the reheating epoch, up to mild variations from changes in the numbers of relativistic degrees of freedom. Using the relations in Eq. (<ref>) computed at t_dom, we can rewrite these expressions as H_* ∼ 10^5 GeV( g/g_D)^6/5( B_0/10^-15 G)^-6/5( T_dom/10^6 GeV)^2 for m ≪m̅, 10^-1 GeV( m/10^17 GeV)^-2( g/g_D)^4 ( T_dom/10^6 GeV)^2 for m ≫m̅, m̅∼ 10^14 GeV( g/g_D)^7/5( B_0/10^-15 G)^3/5 . If m ≪m̅, the balancing of the frictional forces happens while the monopoles are relativistic, and the monopole velocity jumps from an ultrarelativistic v_H to a mildly relativistic v_p. 
On the other hand if m ≫m̅, the balancing happens while the monopoles are nonrelativistic, and the velocity transition is smooth. These explain the behavior of the monopole velocity described at the end of Section <ref>. The relation H_* > H_dom holds if g > g_min; while monopoles carrying such a charge move at the velocity v_p and transfer the magnetic field energy into the plasma, the dissipation ratio decreases in time as Π_acc / Π_red∝ a^-3/4 (this behavior is shown in Figure <ref> by the middle parts of the blue and purple lines). Hence requiring that the fields survive during reheating amounts to imposing ( Π_acc/Π_red)_* < 1. Combining this with v = v_p and Eq. (<ref>) yields a bound on the monopole abundance,[For m ≪m̅, the ratio Π_acc / Π_red undergoes a jump shortly after t_* (cf. purple line in Figure <ref>). By plugging v = v_p, here we are approximately evaluating the value of Π_acc / Π_red after the jump.] n_0 ≲max.{ 10^-16 cm^-3( g/g_D)^-3/5( B_0/10^-15 G)^3/5( T_dom/10^6 GeV) , 10^-13 cm^-3( m/10^17 GeV) ( g/g_D)^-2( T_dom/10^6 GeV) } . The first (second) line sets the condition when m is smaller (larger) than m̅ given in Eq. (<ref>). In terms of the present monopole flux, the condition is: F ≲max.{ 10^-10 cm^-2sr^-1s^-1( g/g_D)^-3/5( B_0/10^-15 G)^3/5( T_dom/10^6 GeV) ( v_0/10^-3) , 10^-7 cm^-2sr^-1s^-1( g/g_D)^-2( m/10^17 GeV) ( T_dom/10^6 GeV) ( v_0/10^-3) } . The bounds in Eqs. (<ref>) and (<ref>) assume that the velocity transition happens during the reheating epoch, and in particular after the primordial magnetic fields are generated, hence H_* < H_end, H_inf. Here H_inf is the inflationary Hubble rate, which is constrained by current observational limits on primordial gravitational waves as <cit.>, H_inf≲ 10^14 GeV. Thus the condition of Eq. (<ref>) would be violated if H_* becomes very large, for instance due to a large T_dom. We also note that, going back in time in the reheating epoch, the magnetic energy grows relative to the total density as ρ_B / ρ_tot∝ a^-1. For magnetic fields generated at the end of inflation or during reheating, requiring that they have never dominated the universe yields a constraint on the scale of magnetogenesis as H_end≲ 10^22 GeV( T_dom/10^6 GeV)^2 (10^-15 G/B_0)^3. For B_0 ∼ 10^-15 G and g ≲ g_D, there is a wide range for H_end where both this and (<ref>) are satisfied. However this constraint can become relevant for magnetic black holes, as we will later see. If the fields are generated after t_*, then the monopole bound becomes weaker; such cases are studied in <cit.>. If the charge is as small as g < g_min, the monopole velocity never approaches v_p. We can also derive the condition for such monopoles not to affect the redshifting of the magnetic fields during the reheating epoch. In this case the dissipation rate ratio is non-decreasing during reheating: It increases as Π_acc / Π_red∝ a^1/2 while the monopoles move at relativistic v_H, then stays constant after v_H becomes nonrelativistic. Hence we impose ( Π_acc/Π_red)_dom < 1. This condition assumes that the magnetic fields were produced before the radiation-dominated epoch begins. We further focus on monopoles that become nonrelativistic before t_dom, which amounts to considering masses satisfying the condition of Eq. (<ref>). This allows us to plug v_dom∼ (gB / m H)_dom into Eq. (<ref>). One can check that the upper bound on the present-day monopole number density thus obtained is the same as the second line of Eq. 
(<ref>), and the flux bound is the same as the second line of Eq. (<ref>). §.§ Summary of bounds from primordial magnetic fields We have seen that the bounds on the monopole flux from the survival of primordial magnetic fields are described by Eq. (<ref>) during radiation domination, and by Eq. (<ref>) during reheating. The bounds are valid under the condition g > g_min, where the minimum magnetic charge g_min is given in Eq. (<ref>). The bound from radiation domination assumes that primordial magnetic fields are generated before e^+ e^- annihilation. For the bound from reheating it is further assumed that the scales of magnetogenesis H_end, inflation H_inf, and H_* given in Eq. (<ref>) satisfy the condition of Eq. (<ref>); here H_end is also constrained by Eq. (<ref>), and H_inf by Eq. (<ref>). The flux bound from radiation domination is independent of the amplitude of the magnetic fields and of the mass and the charge of the monopoles, although the minimum charge g_min depends on the field strength and mass. The bound from reheating depends on a number of parameters, and in particular it becomes stronger for larger charges and lower reheating temperatures. For monopoles with g < g_min, we derived conditions for them not to alter the redshifting of primordial magnetic fields.[Monopoles with g > g_min can also affect the magnetic redshifting, however we did not investigate this case.] Focusing on masses satisfying Eq. (<ref>), the condition during radiation domination gives the flux bound in Eq. (<ref>) (assuming magnetogenesis before e^+ e^- annihilation), and from reheating arises the bound which takes the same expression as the second line of Eq. (<ref>) (assuming magnetogenesis before the radiation-dominated epoch begins). For these bounds, we stress that primordial magnetic fields can survive even if they are violated, however in such cases one needs to take into account the monopoles in order to assess the cosmological evolution of primordial magnetic fields. In Figure <ref> we show the upper bounds on the monopole flux from radiation domination (thick lines), and from reheating with T_dom = 100 MeV (thin lines). The magnetic charge is varied as g = g_D (red), g = 10^-3 g_D (purple), and g = 10^-6 g_D (blue). We have chosen a rather low reheating temperature just a few orders of magnitude above the scale of Big Bang Nucleosynthesis, as an optimal value for the reheating bound. The purple, blue, and red thick lines overlap in the left part of the plot. Here we assume v_0 = 10^-3, and B_0 = 10^-15 G. The solid parts of the lines are based on the survival of primordial fields (g > g_min), while the dashed parts are from the requirement that the redshifting of the primordial fields is unaltered (g < g_min). For the masses shown in the plot, g_min is given by the second line of (<ref>), and thus the condition g > g_min can be rewritten as: m ≲ 10^19 GeV( g/g_D)^2 . The points in the plot show where this bound is saturated. We also note that the parameters used for the plot allow for ranges of values for H_end and H_inf that satisfy the assumptions in Eqs. (<ref>), (<ref>), and (<ref>). Moreover, the condition in Eq. (<ref>) is satisfied on the dashed lines. As shown in the plot, for T_dom = 100 MeV the bound from reheating is stronger than the bound from radiation domination at low masses, for g ≳ 10^-5 g_D. However for T_dom≳ 10^2 GeV, the bound from radiation domination becomes stronger than the bound from reheating even at g = g_D. 
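The interplay between T_dom and the charge can be checked by plugging numbers into the quoted scalings. The snippet below is a sketch based only on the order-of-magnitude formulas above (with B_0 = 10^-15 G and v_0 = 10^-3 assumed); it is not a first-principles computation.

# Sketch comparing the low-mass reheating bound with the radiation-domination
# bound, using the order-of-magnitude scalings quoted above. Illustrative only.
def F_radiation(v0=1e-3):
    return 1e-14 * (v0 / 1e-3)                       # cm^-2 sr^-1 s^-1

def F_reheating_low_mass(g_over_gD, T_dom, B0_G=1e-15, v0=1e-3):
    # first line of the reheating bound; T_dom in GeV
    return (1e-10 * g_over_gD**-0.6 * (B0_G / 1e-15)**0.6
            * (T_dom / 1e6) * (v0 / 1e-3))

for T_dom in (0.1, 1e2):                             # GeV
    for g in (1.0, 1e-3, 1e-5):                      # in units of g_D
        rh, rd = F_reheating_low_mass(g, T_dom), F_radiation()
        winner = "reheating" if rh < rd else "radiation domination"
        print(f"T_dom = {T_dom:g} GeV, g = {g:g} g_D: "
              f"reheating {rh:.0e} vs radiation {rd:.0e} -> {winner} is stronger")

This reproduces the statements above: at T_dom = 100 MeV the reheating bound is the stronger one down to charges of order 10^-5 g_D, while for T_dom ≳ 10^2 GeV the radiation-domination bound takes over even at g = g_D.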
We stress again that the bound from the survival of primordial fields during radiation domination does not weaken for smaller charges (although its range of applicability shrinks to smaller masses); this feature makes the radiation domination bound particularly useful for constraining minicharged monopoles. § COMPARISON OF BOUNDS Let us now compare the various bounds presented in the previous sections. In Figure <ref> we show the upper bounds on the flux of magnetic monopoles as functions of the mass, for different values of the magnetic charge. The solid gray line shows the cosmological abundance bound in Eq. (<ref>) where ρ_DM is taken as the average dark matter density in the universe, i.e. ρ_DM = 1.3 × 10^-6 GeV cm^-3, along with v_i = 10^-3. The orange line shows the bound based on the survival of Galactic magnetic fields, using Eq. (<ref>) and Galactic field strength B = 10^-6 G, along with the other parameters as l_c = 1 kpc, R = 10 kpc, τ_gen = 10^8 yr, and γ_i - 1 = 10^-6. Using the same set of parameters, the dotted vertical line shows the lower mass limit in Eq. (<ref>) for monopoles to be clustered with our Galaxy. The pink line shows the bound from seed Galactic fields, using again Eq. (<ref>) but with the seed field assumed to be B = 10^-11 G; the other parameters are the same as the orange line. The pink and orange lines overlap on the right side of the plots. The red line shows the bounds in Eqs. (<ref>) and (<ref>)) from primordial magnetic fields during radiation domination. The blue line shows the bound in Eq. (<ref>) from primordial magnetic fields during reheating, for T_dom = 100 MeV. For the primordial bounds we assume the present-day amplitude of the primordial magnetic fields to be B_0 = 10^-15 G, and monopole velocity v_0 = 10^-3; moreover the solid parts of the lines are based on the survival of the fields, while the dashed parts are from the requirement that the redshifting of the fields is unaltered. With B_0 = 10^-15 G, the smallest charge with which monopoles can dissipate primordial magnetic fields is of 10^-8 g_D, cf. Figure <ref>. As a value slightly above this, g = 10^-7 g_D is shown in panel <ref>. In panel <ref> for g = g_D, we also show in black the limit from direct searches by the MACRO collaboration <cit.>. We see that monopoles with large masses are most strongly constrained by the cosmological abundance bound, while those with intermediate to low masses are mainly constrained by the Parker bounds. Which of the Parker bounds is most stringent for light monopoles depends on the charge. In particular, the bound from seed Galactic magnetic fields is by far the strongest for monopoles with a Dirac charge, while the primordial bounds become comparable or even stronger for monopoles with small magnetic charges. However we also note that the seed field bound further improves at very small masses if the field strength is smaller than B = 10^-11 G used in the plots. In Figure <ref> we have displayed the various bounds for comparison purpose. However, we should note that the target of each bound is not necessarily the same. The cosmological abundance bound and the primordial bounds constrain the average monopole density in the universe, while the Galactic bounds and the MACRO bound constrain the local monopole density inside the Galaxy. 
If monopoles are clustered with the Galaxy (although this can happen only in regions on the right of the dotted lines), their local density can be much higher than the average density; then the bounds on the local density translate into much stronger bounds on the average density. For monopoles that can cluster with the Galaxy, namely, for masses larger than that indicated by the dotted lines, the cosmological abundance bound gives the strongest constraint. The situation is similar even when comparing with the local dark matter density; see Figure <ref> and the discussion at the end of Section <ref>. Hence monopoles in this mass region, if they can be produced, are valid candidates for dark matter. Monopoles that cluster with our Galaxy and whose density does not exceed that of dark matter almost automatically satisfy the Parker bounds; the region of parameters for which the Parker bounds are more constraining corresponds to monopoles that cannot cluster with the Milky Way. Thus, monopoles that cluster with our Galaxy are valid candidates for dark matter.

§ MAGNETICALLY CHARGED EXTREMAL BLACK HOLES

In this section we apply the bounds from the survival of magnetic fields to magnetically charged black holes. In particular, we focus on (nearly) extremal black holes for which Hawking radiation can be neglected. At extremality the charge of a black hole is related to its mass through the relation

m = √2 g M_pl .

Extremal magnetic black holes can be considered as monopoles with large mass and small charge-to-mass ratio. Thus, all the bounds we discussed in the previous sections can basically be applied to magnetically charged black holes. However, the direct relation between the charge g and the mass m of Eq. (<ref>) changes the mass dependence of the bounds, as we will show in the following discussion.

§.§ Bounds from galactic magnetic fields

The bound from galactic magnetic fields in Eq. (<ref>) is rewritten for extremal magnetic black holes as

F ≲ max.{ 10^-27 cm^-2 sec^-1 sr^-1 (m/10^10 gm)^-1 ( l_c/1 kpc)^-1 (τ_gen/10^8 yr)^-1 ( (γ_i - 1) /10^-6),
          10^-30 cm^-2 sec^-1 sr^-1 (m/10^10 gm)^-1 ( B/10^-6 G) ( R/l_c)^1/2 (τ_gen/10^8 yr)^-1 }.

This bound is inversely proportional to the black hole mass. The first (second) line sets the bound when B is weaker (stronger) than the threshold value

B̅ ∼ 10^-3 G ( l_c/1 kpc)^-1 ( l_c/R)^1/2 ( (γ_i - 1)/10^-6).

Since galactic fields are typically weaker than this, the bound is given by the first line, which is independent of the field strength. This implies that the bound for extremal magnetic black holes does not improve by considering seed fields. We also remark that the conditions in Eqs. (<ref>) and (<ref>), which are necessary for the bound to apply, can be violated for massive magnetic black holes.[If B is below the threshold value (<ref>), then (<ref>) gives a stronger condition than (<ref>).]

For extremal magnetic black holes that are initially bound in a galaxy, their escape time is obtained using Eq. (<ref>) as

τ_esc ∼ max.{ 10^9 yr ( B/10^-6 G)^-1 ( v_vir/10^-3), 10^13 yr ( B/10^-6 G)^-2 ( l_c/1 kpc)^-1 ( v_vir/10^-3)^3 } .

Note that the escape time of extremal magnetic black holes is independent of the mass, and is determined only by the galactic field properties and the virial velocity. For galaxies similar to the Milky Way, the second line sets the escape time, which depends rather sensitively on the galactic parameters.
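To make these numbers concrete, the following sketch (illustrative only, not code from this work; it assumes the same unit conversions as the earlier sketches and the Milky Way reference parameters) treats an extremal black hole of mass 10^10 g and evaluates its charge, the first line of the rewritten galactic bound, and the two escape-time branches:

# Illustrative sketch (not code from this work): treat an extremal magnetic black
# hole as a monopole with m = sqrt(2) g M_pl and evaluate the galactic Parker
# bound and escape times for Milky-Way-like parameters.
import math

GAUSS, KPC, YR = 1.95e-20, 1.564e35, 4.79e31   # 1 G in GeV^2; kpc, yr in GeV^-1
GRAM, M_PL = 5.61e23, 2.435e18                 # 1 gram and the reduced Planck mass, in GeV
FLUX_TO_CGS = (5.068e13)**2 * 1.519e24         # GeV^3 -> cm^-2 s^-1
g_D = 2 * math.pi / 0.303

m_bh = 1e10 * GRAM                             # a 1e10 g black hole
q    = m_bh / (math.sqrt(2) * M_PL)            # extremal magnetic charge
print(f"charge: q ~ {q:.1e} ~ {q / g_D:.1e} g_D")

B, l_c, R = 1e-6 * GAUSS, 1 * KPC, 10 * KPC
tau_gen, v_vir, gamma_i_minus_1 = 1e8 * YR, 1e-3, 1e-6

# first branch of the galactic flux bound, with g replaced by m / (sqrt(2) M_pl)
F = 2 * m_bh * gamma_i_minus_1 / (3 * math.pi * q**2 * l_c * tau_gen)
print(f"galactic bound: F < {F * FLUX_TO_CGS:.1e} cm^-2 s^-1 sr^-1")   # ~ 1e-27

# escape times (independent of the black hole mass once extremality is imposed)
t_single = m_bh * v_vir / (q * B) / YR
t_multi  = 2 * m_bh**2 * v_vir**3 / (q**2 * B**2 * l_c) / YR
print(f"escape times: {t_single:.1e} yr (single cell), {t_multi:.1e} yr (multi-cell)")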
The work <cit.> derived a constraint on the fraction of extremal magnetic black holes as dark matter by studying the Andromeda Galaxy, whose parameters were inferred from <cit.> and taken as l_c∼ 10 kpc, τ_gen∼ 10^10 yr, and v_vir∼ 10^-3. It was claimed that the large values of l_c and τ_gen improve the bound (<ref>) compared to the Milky Way. However, these combined with Andromeda's field strength B ≈ 5 × 10^-6 G <cit.> yield τ_esc∼ 10^10 yr, which is comparable to the age of Andromeda itself. With the uncertainties in the parameters, we cannot yet give a definite answer on whether magnetic black holes can remain clustered with Andromeda until today. However, the general lesson here is that if some galaxy appears to give a significantly stronger Parker bound on extremal magnetic black holes than the Milky Way, then it is improbable that this galaxy can currently host magnetic black holes. The Parker bound from such a galaxy thus applies to unclustered black holes. §.§ Bounds from primordial magnetic fields Using the relation in Eq. (<ref>), the terminal velocity set by the Hubble friction in Eq. (<ref>) can be rewritten for extremal magnetic black holes as: (γ v)_H∼B/√(2) M_pl H . Thus, under the condition that the magnetic fields do not dominate the universe, i.e. ρ_B / ρ_tot∼ (B/(M_pl H))^2 ≪ 1, the velocity v_H is always nonrelativistic, (γ v)_H≪ 1. From this it also follows that extremal magnetic black holes satisfy the mass bound in Eq. (<ref>). Notice that (γ v)_H does not depend on the black hole mass. With v_H being nonrelativistic, the value of g_min is given by the second line of Eq. (<ref>). Consequently, using the relation in Eq. (<ref>) the condition g > g_min can be rewritten as: m ≳ 10^-3 gm . Extremal magnetic black holes with such masses are subject to the flux bound of Eq. (<ref>) which is based on the survival of primordial fields during radiation domination. Regarding the bound from the reheating epoch, we saw for generic monopoles that the second line of Eq. (<ref>) applies at larger masses for which the monopoles are nonrelativistic upon making the transition from v_H to v_p. However for extremal black holes, this instead corresponds to smaller masses, m < m̅_BH, with the threshold being m̅_BH∼ 10^10 gm( B_0/10^-15 G)^-3/2 . One can also check that for m > m̅_BH, the scale H_*, which is given by the first line of Eq. (<ref>), is comparable to or larger than the upper limit for H_end given in Eq. (<ref>); hence the assumption in Eq. (<ref>) breaks down. Therefore only the second line of the reheating bound in Eq. (<ref>) applies for extremal black holes, which is rewritten as F ≲ 10^-18 cm^-2sr^-1s^-1( m/10^10 gm)^-1( T_dom/10^6 GeV) ( v_0/10^-3) . This bound applies to the mass range 10^-3 gm≲ m < m̅_BH, given that the magnetogenesis and inflation scales satisfy the conditions in Eqs. (<ref>), (<ref>), and (<ref>). At m > m̅_BH, the bound is weaker than in the first line of Eq. (<ref>) as discussed in <cit.>, however we will not analyze this in detail. Extremal magnetic black holes as light as m ≲ 10^-3 gm move at nonrelativistic v_H throughout the early cosmic history. The condition for such black holes not to alter the redshifting of the magnetic fields during radiation domination is Eq. (<ref>), which is now rewritten as F ≲ 10^-14 cm^-2sr^-1s^-1( m/10^-3 gm)^-1( v_0/10^-3) . The condition from the reheating epoch has the same expression as Eq. (<ref>).
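The sketch below simply evaluates the threshold mass m̅_BH and the two flux expressions just quoted, for the fiducial values B_0 = 10^-15 G, T_dom = 100 MeV, and v_0 = 10^-3 used in the comparison plots of the next subsection; it encodes the scaling relations as written rather than rederiving them.

def mbar_bh_gm(B0_gauss=1e-15):
    """Threshold mass (grams) below which the reheating bound above applies."""
    return 1e10 * (B0_gauss / 1e-15) ** -1.5

def flux_reheating(m_gm, T_dom_gev=0.1, v0=1e-3):
    """Reheating-epoch flux bound (cm^-2 sr^-1 s^-1) for 1e-3 gm <~ m < mbar_BH."""
    return 1e-18 * (m_gm / 1e10) ** -1 * (T_dom_gev / 1e6) * (v0 / 1e-3)

def flux_radiation_light(m_gm, v0=1e-3):
    """Redshifting condition (cm^-2 sr^-1 s^-1) for black holes with m <~ 1e-3 gm."""
    return 1e-14 * (m_gm / 1e-3) ** -1 * (v0 / 1e-3)

print(f"mbar_BH                       ~ {mbar_bh_gm():.0e} gm")
print(f"reheating bound at 1e10 gm    ~ {flux_reheating(1e10):.0e} cm^-2 sr^-1 s^-1")
print(f"light-BH condition at 1e-3 gm ~ {flux_radiation_light(1e-3):.0e} cm^-2 sr^-1 s^-1")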
§.§ Comparison of bounds In Figure <ref> we show the upper bounds on the flux of extremal magnetic black holes. The solid gray line shows the cosmological abundance bound in Eq. (<ref>) with ρ_DM taken as the average dark matter density in the universe, i.e. ρ_DM≈ 1.3 × 10^-6 GeV cm^-3. The dotted gray line shows the abundance bound with ρ_DM set to the local dark matter density in the Milky Way, i.e. ρ_DM≈ 0.4 GeV cm^-3. For both abundance bounds we also used v_i = 10^-3. The orange line shows the bound in Eq. (<ref>) from the survival of Galactic magnetic fields, with the parameters taken as l_c = 1 kpc, R = 10 kpc, τ_gen = 10^8 yr, and γ_i - 1 = 10^-6; this bound is independent of the Galactic field strength (as long as B ≲ 10^-3 G), hence the present-day and seed Galactic fields give similar bounds. We note that the condition in Eq. (<ref>) holds for all the values of the mass of the black holes shown in the plot, as long as B ≳ 10^-11 G. The red line shows the bounds in Eqs. (<ref>) and (<ref>) from primordial fields during radiation domination. The blue line shows the bound in Eq. (<ref>) from primordial fields during reheating for T_dom = 100 MeV. For the primordial bounds we assume a present-day field strength B_0 = 10^-15 G, and velocity v_0 = 10^-3. The filled points on the primordial bounds show where the mass limit of Eq. (<ref>) is saturated, and the dashed parts of the lines show where the fields exhibit modified redshifting behaviors, instead of being dissipated. Unlike in Figure <ref>, the dashed parts of the bounds are now on the lower mass end. The blue open circle shows the threshold mass m̅_BH given in Eq. (<ref>). A reheating bound also exists at m > m̅_BH, however we did not analyze this case and hence the blue line is truncated at m̅_BH. Let us also note that an extremal magnetic black hole with a Dirac charge g = g_D has a mass of 7.1 × 10^19 GeV. However, lighter extremal black holes can in principle exist, for instance if they have absorbed minicharged monopoles. The bound from primordial fields during radiation domination does not depend on the mass of the black holes, while the other Parker bounds become stronger for larger masses. Consequently, the radiation domination bound is much less constraining. The reheating bound, even with the rather low reheating temperature chosen in the plot, is weaker than the Galactic bound; this is also seen in Figure <ref> for the mass-dependent segments of the reheating and Galactic bounds. We also see that the abundance bound is stronger than the Galactic Parker bound, even when considering the local dark matter density. The Parker bound can in principle be significantly improved by considering galaxies hosting magnetic fields with coherence lengths much larger than that of the Milky Way; however it is unlikely that magnetic black holes can be clustered with such galaxies, as was discussed at the end of Section <ref>. If magnetic black holes cannot cluster with some galaxies, then they cannot make up all of the dark matter. §.§ Comments on black hole-specific features In the above discussions we have treated extremal magnetic black holes simply as very massive monopoles with charges much larger than the Dirac charge, and ignored black hole-specific features. However if accretion disks form around the black holes in galaxies, the interaction between the disks and the interstellar medium may affect the acceleration of black holes along the galactic fields.
On the other hand, there has not been enough time for accretion disks to form in the early universe <cit.>, hence this should not affect the bounds from primordial magnetic fields. Extremal magnetic black holes can also be surrounded by an electroweak corona, where the value of the Higgs field varies <cit.>. The presence of electroweak coronas can also change the interaction between the black holes and the interstellar medium or the primordial plasma, modifying the Parker-type bounds. We leave detailed studies of these effects for the future. § CONCLUSION We carried out a comprehensive study of the Parker-type bounds on magnetic monopoles with arbitrary charge. We summarized the bounds from galactic magnetic fields in Section <ref>, and the bounds from primordial magnetic fields in Section <ref>. The various bounds are compared in Figure <ref>. We showed that heavy monopoles are mainly constrained by the dark matter density limit, while intermediate- to low-mass monopoles are mainly constrained by the Parker bounds. Among the Parker bounds, the seed galactic field bound strongly constrains monopoles with a Dirac charge, while the primordial bound from radiation domination can be the strongest for monopoles with small magnetic charges. This is because the bound from radiation domination in the low-mass regime is independent of the monopole charge, while the other Parker bounds become weaker for smaller charges. While monopoles with a Dirac charge have to be heavier than 10^18 GeV to be able to cluster with our Galaxy, minicharged monopoles can cluster even at much lighter masses. For monopoles that can cluster with our Galaxy, the Parker bounds are generically less constraining than the bound from the dark matter density. Such monopoles can thus make up the entire dark matter. We also studied extremal magnetic black holes, for which the various bounds are compared in Figure <ref>. We found that extremal magnetic black holes are mainly constrained by comparison with the dark matter density. Even stronger constraints can in principle be obtained if there exist galaxies whose magnetic fields have coherence lengths much larger than that of the Milky Way. However the large coherence lengths also lead to the acceleration of black holes up to the escape velocity within a rather short time period, and hence it is improbable that black holes remain clustered with such galaxies until today. The existence of galaxies that cannot host magnetic black holes, if confirmed, would rule out magnetic black holes as a dark matter candidate. Minicharged monopoles are typically connected by dark strings, whose tension is set by the dark photon mass μ; moreover they appear as minicharged monopoles only at distances larger than 1 / μ (see e.g. <cit.>). In our analyses we ignored these effects, supposing that the force from the background magnetic field is stronger than the string tension, and that the field's coherence length is larger than 1 / μ. These assumptions can break down depending on the dark photon mass, in which case our bounds can be modified. For extremal black holes, some of the Parker bounds could be modified by the presence of accretion disks and/or the electroweak corona <cit.>. We leave detailed considerations of these effects for future analysis. We thank Kyrylo Bondarenko, Michele Doro, Rajeev Kumar Jain, Maurizio Spurio, and Piero Ullio for helpful discussions. T.K.
acknowledges support from the INFN program on Theoretical Astroparticle Physics (TAsP), and JSPS KAKENHI (Grant No. JP22K03595). § ACCELERATION OF MONOPOLES IN GALACTIC MAGNETIC FIELDS Here we study monopole dynamics in galactic magnetic fields. We divide the magnetic field region of the galaxy into cells of uniform field, and analyze the acceleration of monopoles as they pass through multiple cells. The equation of motion of a monopole passing through the Nth cell with uniform magnetic field B_N is m d /dt (γv ) = g B_N, where γ = 1/√(1-v^2) and v = v. (g denotes the amplitude of the magnetic charge, i.e. g > 0, and thus the monopole here has a positive charge. However the discussion in this appendix can be applied to negatively charged monopoles by replacing v→ - v.) By integrating the equation, one obtains the change in the monopole's Lorentz factor in the Nth cell as, γ_N^2 - γ_N-1^2 = ( g B τ_N/m)^2 + 2 g B_N ·v_N-1γ_N-1τ_N/m. Here τ_N denotes the time it takes for the monopole to pass through the Nth cell, and γ_N is the Lorentz factor when the monopole exits the Nth cell and simultaneously enters the (N+1)th cell; the same notation is used for the velocity v_N. For N=1, then γ_N-1 and v_N-1 in the equation are replaced by the initial Lorentz factor γ_i and velocity v_i upon entering the first cell. We take all cells to have the same size l_c and field strength, i.e. B = B_N for all N. Thus the kinetic energy of a monopole changes within each cell by at most ∼ g B l_c. If the kinetic energy is initially large such that m (γ_i - 1) ≫ g B l_c, then the energy barely changes in the first cell. On the other hand if m (γ_i - 1) ≪ g B l_c, the monopole is quickly accelerated so that upon exiting the first cell its energy reaches m (γ_1 - 1) ≃ g B l_c, and thereafter the energy does not change much within each cell. Hence independently of γ_i, we can write the crossing time for the second cell onward as[The exact value of τ_N also depends on the shape of the cell and the incident angle, however the expression (<ref>) is good enough for our purpose of obtaining an order-of-magnitude estimate of the average energy gain.] τ_N ∼l_c/v_N-1 for N ≥ 2. Let us consider nonrelativistic monopoles for the moment. Then (<ref>) at N ≥ 2 can be rewritten using (<ref>) as, v_N^2 - v_N-1^2 = v_mag^4/4 v_N-1^2 + v_mag^2 B̂_N ·v̂_N-1. Here a hat denotes a unit vector: B̂_N ≡B_N / B and v̂_N ≡v_N / v_N. We also introduced v_mag≡√(2 g B l_c/m), which corresponds to the velocity a monopole initially at rest obtains after passing through a single cell. From the discussions above (<ref>) it follows that v_1 ≳ v_mag for general v_i. Supposing for simplicity that the direction of the magnetic field is uncorrelated from one cell to the next, the second term in the right-hand side of (<ref>) sources a random walk of v^2 in each cell. As we are interested in the mean behavior of the monopoles, let us ignore this term for now. Then we obtain a recurrence relation of the form[Here we are also roughly approximating the mean ⟨ 1/v^2_N-1⟩ by 1/⟨ v^2_N-1⟩.] β_N - β_N-1 = 1/β_N-1, where β_N ≡ 2 v_N^2 / v_mag^2. Since β_1 ≳ 1, this recurrence relation has an approximate solution, β_N ≃√(β_1^2 + 2 (N-1) ) . Hence the exit velocity from the Nth cell is obtained as v_N^2 ≃√( v_1^4 + N-1/2 v_mag^4 ). If v_i ≳ v_mag, the discussions from (<ref>) onward apply also to N = 1, then one can make the replacements v_1 → v_i and N-1 → N in the right-hand side of (<ref>). 
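Before turning to the opposite limit v_i ≪ v_mag, we note that the recurrence above is straightforward to iterate numerically. The following minimal sketch does so for an ensemble of monopoles, assuming isotropically oriented cells (so that B̂_N ·v̂_N-1 is uniform in [-1, 1]); the small floor on v^2 is only a guard for the rare case of a nearly stopped monopole, where the cell-crossing approximation behind the recurrence breaks down.

import numpy as np

rng = np.random.default_rng(1)

def mean_velocity_gain(v_i, v_mag, n_cells, n_monopoles=10000):
    # Iterate v_N^2 - v_{N-1}^2 = v_mag^4/(4 v_{N-1}^2) + v_mag^2 (Bhat . vhat)
    # for an ensemble of nonrelativistic monopoles, drawing each cell's field
    # orientation independently of the velocity direction.
    v2 = np.full(n_monopoles, float(v_i) ** 2)
    for _ in range(n_cells):
        cos_theta = rng.uniform(-1.0, 1.0, n_monopoles)
        v2 = v2 + v_mag**4 / (4.0 * v2) + v_mag**2 * cos_theta
        v2 = np.maximum(v2, 0.1 * v_mag**2)   # guard against nearly stopped monopoles
    return np.mean(v2 - v_i**2)

v_mag = 1.0e-3   # velocity gained from rest in a single cell (arbitrary units)
for n in (10, 100, 1000):
    gain = mean_velocity_gain(v_mag, v_mag, n)
    print(f"N = {n:4d} :  <dv^2>/v_mag^2 = {gain / v_mag**2:6.2f}   sqrt(N/2) = {np.sqrt(n / 2):6.2f}")

At large N the simulated mean gain tracks the analytic estimate √(N/2) v_mag^2 at the order-of-magnitude level, which is all that is claimed in this appendix.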
On the other hand if v_i ≪ v_mag, then v_1 ≃ v_mag and (<ref>) becomes v_N^2 ≃√((N+1)/2) v_mag^2. In both cases, (<ref>) can be rewritten at the order-of-magnitude level as v_N^2 ∼√( v_i^4 + N/2 v_mag^4 ) . In particular, the net change in the velocity squared in the limit of small and large N takes the forms, Δ v_N^2 = v_N^2 - v_i^2 ∼ (N/4) v_mag^4/v_i^2 for N ≪ 8 ( v_i/v_mag)^4 , √(N/2) v_mag^2 for N ≫ 8 ( v_i/v_mag)^4 . In the first line the acceleration is tiny such that Δ v_N ≲ v_i; this regime exists only if v_i ≳ v_mag. Eventually the monopole is accelerated as in the second line, where Δ v_N ≳ v_i. Let us discuss the second term in (<ref>) which we have been ignoring. This sources a random-walk behavior of Δ v^2 in each cell with step size ≤ v_mag^2, which after N cells yields a root-mean-square distance of order √(N) v_mag^2. Now, consider a number p of monopoles with initial velocity v_i, each passing through N cells in different parts of the galaxy. From the central limit theorem, the distribution of the average of Δ v_N^2 with large enough p is approximated by a normal distribution with mean (<ref>) and standard deviation of σ∼√(N/p) v_mag^2. The expression (<ref>) describes well the average behavior for the set of monopoles if it is much larger than σ. For this, the second line of (<ref>) requires only p ≫ 1, while the first line requires p N ≫ 16 ( v_i/v_mag)^4. For relativistic monopoles (v_N ≃ 1), the mean recurrence relation becomes γ_N^2 - γ_N-1^2 = ( g B l_c/m)^2, which yields γ_N = √(γ_1^2 + (N-1) (g B l_c/m)^2 ). By following a similar analysis to that for nonrelativistic monopoles, one arrives at results that match at the order-of-magnitude level with (<ref>) and (<ref>), with v^2 replaced by 2 (γ - 1). In summary, for both nonrelativistic and relativistic monopoles, the average energy gain after passing through N cells takes the form Δ E_N = m (γ_N - γ_i) ∼ (N/4) (g B l_c)^2/[m (γ_i - 1)] for N ≪ 8 ( m (γ_i - 1)/g B l_c)^2 , √(N/2) g B l_c for N ≫ 8 ( m (γ_i - 1)/g B l_c)^2 . For this to describe well the average behavior of a set of monopoles, the first line requires a sufficiently large number of monopoles p such that p N ≫ 16 ( m (γ_i - 1)/g B l_c)^2 , while the second line requires only p ≫ 1. 99 Dirac:1931kp P. A. M. Dirac, “Quantised singularities in the electromagnetic field,” Proc. Roy. Soc. Lond. A 133 (1931) no.821, 60-72 tHooft:1974kcl G. 't Hooft, “Magnetic Monopoles in Unified Gauge Theories,” Nucl. Phys. B 79 (1974), 276-284. Polyakov:1974ek A. M. Polyakov, “Particle Spectrum in Quantum Field Theory,” JETP Lett. 20 (1974), 194-195 PRINT-74-1566 (LANDAU-INST). Zeldovich:1978wj Y. B. Zeldovich and M. Y. Khlopov, “On the Concentration of Relic Magnetic Monopoles in the Universe,” Phys. Lett. B 79 (1978), 239-241. Preskill:1979zi J. Preskill, “Cosmological Production of Superheavy Magnetic Monopoles,” Phys. Rev. Lett. 43 (1979), 1365. Vilenkin:2000jqa A. Vilenkin and E. P. S. Shellard, “Cosmic Strings and Other Topological Defects,” Cambridge University Press, 2000, ISBN 978-0-521-65476-0. DelZotto:2016fju M. Del Zotto, J. J. Heckman, P. Kumar, A. Malekian and B. Wecht, “Kinetic Mixing at Strong Coupling,” Phys. Rev. D 95 (2017) no.1, 016007 [arXiv:1608.06635 [hep-ph]]. Brummer:2009cs F. Brummer and J. Jaeckel, “Minicharges and Magnetic Monopoles,” Phys. Lett. B 675 (2009), 360-364 [arXiv:0902.3615 [hep-ph]]. Hiramatsu:2021kvu T. Hiramatsu, M. Ibe, M. Suzuki and S.
Yamaguchi, “Gauge kinetic mixing and dark topological defects,” JHEP 12 (2021), 122 [arXiv:2109.12771 [hep-ph]]. GomezSanchez:2011orv C. Gomez Sanchez and B. Holdom, “Monopoles, strings and dark matter,” Phys. Rev. D 83 (2011), 123524 [arXiv:1103.1632 [hep-ph]]. Graesser:2021vkr M. L. Graesser, I. M. Shoemaker and N. T. Arellano, “Milli-magnetic monopole dark matter and the survival of galactic magnetic fields,” JHEP 03 (2022), 105 [arXiv:2105.05769 [hep-ph]]. Hook:2017vyc A. Hook and J. Huang, “Bounding millimagnetically charged particles with magnetars,” Phys. Rev. D 96 (2017) no.5, 055010 [arXiv:1705.01107 [hep-ph]]. Bai:2020spd Y. Bai, J. Berger, M. Korwar and N. Orlofsky, “Phenomenology of magnetic black holes with electroweak-symmetric coronas,” JHEP 10, 210 (2020) [arXiv:2007.03703 [hep-ph]]. Maldacena:2020skw J. Maldacena, “Comments on magnetic black holes,” JHEP 04, 079 (2021) [arXiv:2004.06084 [hep-th]]. Ghosh:2020tdu D. Ghosh, A. Thalapillil and F. Ullah, “Astrophysical hints for magnetic black holes,” Phys. Rev. D 103 (2021) no.2, 023006 [arXiv:2009.03363 [hep-ph]]. Diamond:2021scl M. D. Diamond and D. E. Kaplan, “Constraints on relic magnetic black holes,” JHEP 03, 157 (2022) [arXiv:2103.01850 [hep-ph]]. Araya:2022few I. J. Araya, N. D. Padilla, M. E. Rubio, J. Sureda, J. Magaña and L. Osorio, “Dark matter from primordial black holes would hold charge,” [arXiv:2207.05829 [astro-ph.CO]]. Zhang:2023tfv C. Zhang and X. Zhang, “Gravitational capture of magnetic monopoles by primordial black holes in the early universe,” [arXiv:2302.07002 [hep-ph]]. Kolb:1990vq E. W. Kolb and M. S. Turner, “The Early Universe,” Front. Phys. 69 (1990), 1-547. ParticleDataGroup:2020ssz P. A. Zyla et al. [Particle Data Group], “Review of Particle Physics,” PTEP 2020 (2020) no.8, 083C01 Parker:1970xv E. N. Parker, “The Origin of Magnetic Fields,” Astrophys. J. 160, 383 (1970). Turner:1982ag M. S. Turner, E. N. Parker and T. J. Bogdan, “Magnetic Monopoles and the Survival of Galactic Magnetic Fields,” Phys. Rev. D 26, 1296 (1982). Parker:1987 E. N. Parker, “Magnetic Monopole Plasma Oscillations and the Survival of Galactic Magnetic Fields,” Astrophys. J. 321, 349 (1987). Adams:1993fj F. C. Adams, M. Fatuzzo, K. Freese, G. Tarle, R. Watkins and M. S. Turner, “Extension of the Parker bound on the flux of magnetic monopoles,” Phys. Rev. Lett. 70, 2511-2514 (1993). Tavecchio:2010mk F. Tavecchio, G. Ghisellini, L. Foschini, G. Bonnoli, G. Ghirlanda and P. Coppi, “The intergalactic magnetic field constrained by Fermi/LAT observations of the TeV blazar 1ES 0229+200,” Mon. Not. Roy. Astron. Soc. 406 (2010), L70-L74 [arXiv:1004.1329 [astro-ph.CO]]. Neronov:2010gir A. Neronov and I. Vovk, “Evidence for strong extragalactic magnetic fields from Fermi observations of TeV blazars,” Science 328 (2010), 73-75 [arXiv:1006.3504 [astro-ph.HE]]. Dermer:2010mm C. D. Dermer, M. Cavadini, S. Razzaque, J. D. Finke, J. Chiang and B. Lott, “Time Delay of Cascade Radiation for TeV Blazars and the Measurement of the Intergalactic Magnetic Field,” Astrophys. J. Lett. 733 (2011), L21 [arXiv:1011.6660 [astro-ph.HE]]. MAGIC:2022piy V. A. Acciari et al. [MAGIC], “A lower bound on intergalactic magnetic fields from time variability of 1ES 0229+200 from MAGIC and Fermi/LAT observations,” Astron. Astrophys. 670 (2023), A145 [arXiv:2210.03321 [astro-ph.HE]]. Subramanian:2015lua K. Subramanian, “The origin, evolution and signatures of primordial magnetic fields,” Rept. Prog. Phys. 79 (2016) no.7, 076901 [arXiv:1504.02311 [astro-ph.CO]]. 
Long:2015cza A. J. Long and T. Vachaspati, “Implications of a Primordial Magnetic Field for Magnetic Monopoles, Axions, and Dirac Neutrinos,” Phys. Rev. D 91, 103522 (2015) [arXiv:1504.03319 [hep-ph]]. Kobayashi:2022qpl T. Kobayashi and D. Perri, “Parker bound and monopole pair production from primordial magnetic fields,” Phys. Rev. D 106 (2022) no.6, 063016 [arXiv:2207.08246 [hep-ph]]. Schwinger:1951nm J. S. Schwinger, “On gauge invariance and vacuum polarization,” Phys. Rev. 82 (1951), 664-679. Affleck:1981ag I. K. Affleck and N. S. Manton, “Monopole Pair Production in a Magnetic Field,” Nucl. Phys. B 194 (1982), 38-64. Affleck:1981bma I. K. Affleck, O. Alvarez and N. S. Manton, “Pair Production at Strong Coupling in Weak External Fields,” Nucl. Phys. B 197 (1982), 509-519. Gould:2017zwi O. Gould and A. Rajantie, “Magnetic monopole mass bounds from heavy ion collisions and neutron stars,” Phys. Rev. Lett. 119 (2017) no.24, 241601 [arXiv:1705.07052 [hep-ph]]. MoEDAL:2021vix B. Acharya et al. [MoEDAL], “Search for magnetic monopoles produced via the Schwinger mechanism,” Nature 602 (2022) no.7895, 63-67 [arXiv:2106.11933 [hep-ex]]. Kobayashi:2021des T. Kobayashi, “Monopole-antimonopole pair production in primordial magnetic fields,” Phys. Rev. D 104 (2021) no.4, 043501 [arXiv:2105.12776 [hep-ph]]. Cabrera:1982gz B. Cabrera, “First Results from a Superconductive Detector for Moving Magnetic Monopoles,” Phys. Rev. Lett. 48 (1982), 1378-1380 MACRO:2002jdv M. Ambrosio et al. [MACRO], “Final results of magnetic monopole searches with the MACRO experiment,” Eur. Phys. J. C 25 (2002), 511-522 [arXiv:hep-ex/0207020 [hep-ex]]. IceCube:2021eye R. Abbasi et al. [IceCube], “Search for Relativistic Magnetic Monopoles with Eight Years of IceCube Data,” Phys. Rev. Lett. 128 (2022) no.5, 051101 [arXiv:2109.13719 [astro-ph.HE]]. Widrow:2002ud L. M. Widrow, “Origin of galactic and extragalactic magnetic fields,” Rev. Mod. Phys. 74, 775-823 (2002) [arXiv:astro-ph/0207240 [astro-ph]]. Arshakian:2008cx T. G. Arshakian, R. Beck, M. Krause and D. Sokoloff, “Evolution of magnetic fields in galaxies and future observational tests with the Square Kilometre Array,” Astron. Astrophys. 494, 21 (2009) [arXiv:0810.3114 [astro-ph]]. Beck:2012bc A. M. Beck, H. Lesch, K. Dolag, H. Kotarba, A. Geng and F. A. Stasyszyn, “Origin of strong magnetic fields in Milky-Way like galactic haloes,” Mon. Not. Roy. Astron. Soc. 422, 2152-2163 (2012) [arXiv:1202.3349 [astro-ph.CO]]. Planck:2018vyg N. Aghanim et al. [Planck], “Planck 2018 results. VI. Cosmological parameters,” Astron. Astrophys. 641, A6 (2020) [erratum: Astron. Astrophys. 652, C4 (2021)] [arXiv:1807.06209 [astro-ph.CO]]. Read:2014qva J. I. Read, “The Local Dark Matter Density,” J. Phys. G 41, 063101 (2014) [arXiv:1404.1938 [astro-ph.GA]]. BICEP:2021xfz P. A. R. Ade et al. [BICEP and Keck], “Improved Constraints on Primordial Gravitational Waves using Planck, WMAP, and BICEP/Keck Observations through the 2018 Observing Season,” Phys. Rev. Lett. 127 (2021) no.15, 151301 [arXiv:2110.00483 [astro-ph.CO]]. Fletcher:2003ec A. Fletcher, E. M. Berkhuijsen, R. Beck and A. Shukurov, “The Magnetic field of M 31 from multi-wavelength radio polarization observations,” Astron. Astrophys. 414 (2004), 53-67 [arXiv:astro-ph/0310258 [astro-ph]]. Ricotti:2007au M. Ricotti, J. P. Ostriker and K. J. Mack, “Effect of Primordial Black Holes on the Cosmic Microwave Background and Cosmological Parameter Estimates,” Astrophys. J. 680 (2008), 829 [arXiv:0709.0524 [astro-ph]].
http://arxiv.org/abs/2307.05682v1
20230711180004
Galaxy Quenching with Mass Growth History of Galaxy Groups and Clusters: The Importance of Post-Processing
[ "So-Myoung Park", "Kyungwon Chun", "Jihye Shin", "Hyunjin Jeong", "Joon Hyeop Lee", "Mina Pak", "Rory Smith", "Jae-Woo Kim" ]
astro-ph.GA
[ "astro-ph.GA" ]
Kyungwon Chun [email protected] Korea Astronomy and Space Science Institute, 776 Daedeok-daero, Yuseong-gu, Daejeon 34055, South Korea Korea Astronomy and Space Science Institute, 776 Daedeok-daero, Yuseong-gu, Daejeon 34055, South Korea Korea Astronomy and Space Science Institute, 776 Daedeok-daero, Yuseong-gu, Daejeon 34055, South Korea Korea Astronomy and Space Science Institute, 776 Daedeok-daero, Yuseong-gu, Daejeon 34055, South Korea Korea Astronomy and Space Science Institute, 776 Daedeok-daero, Yuseong-gu, Daejeon 34055, South Korea Korea Astronomy and Space Science Institute, 776 Daedeok-daero, Yuseong-gu, Daejeon 34055, South Korea School of Mathematical and Physical Sciences, Macquarie University, Sydney, NSW 2109, Australia ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), Australia Universidad Técnica Federico Santa María, 3939 Vicuña Mackenna, San Joaquín, Santiago 8940897, Chile Korea Astronomy and Space Science Institute, 776 Daedeok-daero, Yuseong-gu, Daejeon 34055, South Korea We investigate the fraction of quenched satellite galaxies in host galaxy groups and clusters using TNG300 in the IllustrisTNG cosmological magnetohydrodynamical simulations. Simulations show that most satellites are quenched after they fall into their final hosts: post-processing is a more dominant mechanism of galaxy quenching than pre-processing. We find the fraction of quenched satellites at z=0 increases with host mass, which implies that more massive hosts have higher quenching efficiency because more massive hosts have more massive groups infalling. Furthermore, we find that hosts that have many early-infall satellites show a higher fraction of quenched satellites at z=0 than those having many late-infall satellites, which results in a scatter of the quenched fraction of satellites in a given mass range of hosts at z=0. Our results highlight the significance of the mass of hosts and the different infall times of satellites in understanding galaxy quenching. § INTRODUCTION Galaxies at low redshift are sorted out into two populations by their star formation (SF) activity. One is disk-like `star-forming' galaxies that have ongoing SF so young stellar populations with blue colors are observed. The other is elliptical-like `quenched (passive)' galaxies whose SF activity is strongly suppressed so old stellar populations with red colors are observed. Galaxy quenching is closely related to various galaxy properties such as morphology, color, age, star formation rate (SFR), and kinematics <cit.>. Thus, understanding when, where, and how galaxies halt SF is one of the important topics in extragalactic astronomy. In the observation, galaxies whose stellar mass is larger than 10^10 M_⊙ show low SFR (specific SFR ≤ 10^-11 yr^-1) regardless of the environment, called `mass quenching' <cit.>. In this case, all the internal processes of galaxies (secular evolution) that include outflows from stellar winds, supernova explosion, and active galactic nucleus (AGN) feedback decrease SFR <cit.>. On the other hand, in a high-density environment the SFR of low-mass galaxies whose stellar mass is smaller than 10^10 M_⊙ is also suppressed, defined as `environmental quenching' <cit.>. In this case, there are various mechanisms that stop the star formation of galaxies: ram-pressure stripping <cit.>, tidal stripping<cit.>, strangulation or starvation <cit.>, and harassment <cit.>. 
These two quenching mechanisms have been consistently suggested to describe the observed properties of passive galaxies <cit.>. In the framework of the Lambda cold dark matter (ΛCDM) model, it is widely accepted that galaxies grow hierarchically, which means low-mass galaxies are fundamental building blocks for cosmological structure formation. In this hierarchical paradigm, low-mass galaxies that fall into high-mass galaxies and/or larger structures become their satellites, which are expected to merge eventually. During their first orbital passage to the pericenter, the SF activities of satellites are highly suppressed by ram-pressure stripping and tidal stripping, a process called `post-processing' <cit.>. Interestingly, the SFR suppression of satellites is known to start even before satellites fall into the larger structures, which is known as `pre-processing' <cit.>. Various simulations and observations have been used to study the quenching mechanism of galaxies <cit.>, but the relative amount of pre- and post-processings for galaxy quenching is still under discussion. Since galaxy groups and clusters are built up from satellites accreted from the outside, the fraction of quenched satellites reflects both pre- and post-processings. In this paper, we use the IllustrisTNG simulation to investigate how the fraction of quenched satellites in galaxy groups and clusters is determined by pre- and post-processings. We also examine the relationship between the fraction of quenched satellites at z=0 and various properties of host groups and clusters. This paper is organised as follows. In Section <ref>, we introduce the IllustrisTNG simulations and how we select sample galaxies in our study. In Section <ref>, we investigate whether pre- or post-processing is more dominant for galaxy quenching and what causes the diversity of the fraction of quenched galaxies at z=0. Section <ref> shows which properties of hosts are related to the quenched fractions, and the effect of the passage of the pericenter of hosts on quenching is examined in Section <ref>. Section <ref> summarizes our results. § METHOD §.§ The IllustrisTNG Simulations To examine the quenched fraction of satellites in each galaxy group and cluster, we use the cosmological magnetohydrodynamical simulations of the IllustrisTNG project (hereafter IllustrisTNG)[<http://www.tng-project.org>], which is composed of three simulation volumes with box side lengths of 50, 100, and 300 Mpc: TNG50, TNG100, and TNG300, respectively <cit.>. To secure the best statistics for the galaxy groups and clusters, we use the TNG300 simulation, which has the largest volume of (300 Mpc)^3. The initial mass resolution of the dark matter (DM) particles and the gas cells is 5.0×10^7 M_⊙ and 1.1×10^7 M_⊙, respectively. Details of the baryonic physics governing galaxy formation can be found in <cit.> and <cit.>. Halos and subhalos in each snapshot are identified using friends-of-friends <cit.> and subfind <cit.> algorithms. To find the main progenitor of each halo, we use the merger tree made by sublink <cit.>. In this paper, we refer to galaxy groups and clusters as `hosts', and to the galaxies that have been accreted by these hosts as `satellites'. §.§ Sample selection For the completeness of the galaxy samples, we select galaxies whose stellar mass (M_ *,gal) is larger than 10^9 M_⊙, which corresponds to more than 100 star particles.
TNG300 includes 4824 groups (10^13 M_⊙≤ M_ host<10^14 M_⊙) and 426 clusters (M_ host≥10^14 M_⊙), where M_ host is the virial mass of each host at z=0. The total number of satellites selected as the galaxy samples for the groups and clusters is 40,139 and 25,535, respectively. We use the SFR and the star-forming main sequence (MS) to divide galaxies into star-forming and quenched galaxies, i.e., the log SFR of quenched galaxies lies more than 1 dex below the star-forming MS <cit.>. In this study, the SFR of galaxies is calculated from stars that have formed in the last 200 Myr <cit.>. The star-forming MS is calculated by the following procedure <cit.>: 1) we calculate the median SFR of star-forming galaxies in 0.2-dex logarithmic bins in the range of M_ *,gal=10^9-10^10.2 M_⊙ and perform a linear fit to the median SFR, 2) we re-calculate the new median SFR, excluding quenched galaxies whose SFR is more than 1 dex below the previous median SFR, and 3) we repeat the second step until the median SFR converges to a given accuracy (1 per cent). Finally, we linearly extrapolate the fit to obtain the median SFR of star-forming galaxies more massive than M_ *,gal=10^10.2 M_⊙. The slope and y-intercept used in the linear fitting are 0.77 and -7.77, respectively. The left panel of Figure <ref> shows the SFR of all galaxies in TNG300 with M_ *,gal at z=0. The criterion between star-forming and quenched galaxies is shown as the black solid line, which is 1 dex below the star-forming MS (the black dashed line). The right panel of Figure <ref> shows the fraction of quenched galaxies among all galaxies in individual hosts (f_ Q, hereafter `the quenched fraction') at z=0 as a function of M_ host. In this panel, the mean f_ Q(z=0) increases with M_ host, and the scatter in f_ Q(z=0) decreases as M_ host increases <cit.>. To check whether this trend is an intrinsic scatter or a statistical scatter, we bootstrap-resample the hosts in each mass bin such that each mass bin has an equal number of hosts. We can still see the scatter increasing with decreasing M_ host, so this trend is an intrinsic scatter rather than a statistical scatter caused by the small number of high-mass hosts. The results are consistent even if the quenched galaxies are replaced with red galaxies (see Appendix <ref>). Note that various papers have shown that the effective kinetic AGN feedback mode quenches most galaxies with M_ *,gal>10^10.3 M_⊙ <cit.>. In our sample, galaxies more massive than 10^10.3 M_⊙ are mostly quenched by strong AGN feedback. To check how AGN feedback affects the results in Figure <ref>, we examine the relation between f_ Q(z=0) and M_ host, using low-mass satellites (M_ *,gal<10^10.3 M_⊙). Although there is an offset of the median f_ Q(z=0) of low-mass satellites (∼0.1 dex lower than that of all satellites), we find that the trend of the median f_ Q(z=0) of low-mass satellites is similar to that of all satellites: 1) f_ Q(z=0) increases with M_ host, and 2) the size of the error bars decreases with M_ host. § HOST-MASS DEPENDENCY AND THE SCATTER OF QUENCHED FRACTIONS We investigate why the mean f_ Q(z=0) increases with M_ host by dividing hosts into three mass ranges: low-mass (10^13.0 M_⊙<M_ host≤10^13.5 M_⊙), intermediate-mass (10^13.5 M_⊙<M_ host≤10^14.0 M_⊙), and high-mass (10^14.0 M_⊙≤ M_ host) bins (or hosts). We then examine the origin of the scatter in f_ Q(z=0).
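For reference, the star-forming/quenched classification used throughout the following analysis can be summarized in a short sketch. This is only one possible reading of the iterative procedure described in Section <ref>: the variable names are illustrative, galaxies with zero SFR would need a small floor before taking logarithms, and empty mass bins are not handled.

import numpy as np

def fit_main_sequence(log_mstar, log_sfr, n_iter=20, tol=0.01):
    # Iterative fit of the star-forming main sequence: median log SFR in
    # 0.2-dex mass bins over 10^9 - 10^10.2 Msun, a linear fit to the medians,
    # removal of galaxies more than 1 dex below the fit, and iteration until
    # the fit parameters converge.
    edges = np.linspace(9.0, 10.2, 7)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = np.ones(len(log_mstar), dtype=bool)
    slope, intercept = 0.0, 0.0
    for _ in range(n_iter):
        med = np.array([np.median(log_sfr[keep & (log_mstar >= lo) & (log_mstar < hi)])
                        for lo, hi in zip(edges[:-1], edges[1:])])
        new_slope, new_intercept = np.polyfit(centers, med, 1)
        keep = log_sfr > new_slope * log_mstar + new_intercept - 1.0  # drop quenched galaxies
        if np.allclose([new_slope, new_intercept], [slope, intercept], rtol=tol):
            return new_slope, new_intercept
        slope, intercept = new_slope, new_intercept
    return slope, intercept

def is_quenched(log_mstar, log_sfr, slope=0.77, intercept=-7.77):
    # Quenched galaxies lie more than 1 dex below the (extrapolated) main sequence;
    # the default slope and intercept are the TNG300 z=0 values quoted in the text.
    return log_sfr < slope * log_mstar + intercept - 1.0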
§.§ The Effects of Pre- and Post-Processings on the Quenched Fraction In the right panel of Figure <ref>, the mean f_ Q(z=0) increases with M_ host: a host-mass dependency of f_ Q(z=0) that is also seen in various observations <cit.>. To investigate the origin of the host-mass dependency of f_ Q(z=0), we measure the time interval (dt_ Q) of each satellite from when it first falls into the host until it is quenched. Quenched satellites are defined by the MS at each snapshot, and we define the time of quenching/infall as the time of the first snapshot when galaxies are identified as quenched/satellites. The left three panels of Figure <ref> show the normalized number of galaxies (f_ N) with dt_ Q for hosts in three different mass bins. Here, a negative dt_ Q indicates that the satellites have been quenched before infall, while a positive dt_ Q indicates quenching after infall. Regardless of M_ host, the number of satellites that are quenched after falling into their final hosts is dominant (dt_ Q>0): the number fraction of galaxies quenched after infall is 0.84, 0.81, and 0.70 for the low-, intermediate-, and high-mass bins, respectively. This means that pre-processing may have contributed to reducing the SF without fully quenching the satellites, and that post-processing is the main contributor to galaxy quenching <cit.>. The decreasing number fraction of satellites quenched after infall for more massive hosts indicates that there are more pre-processed satellites in the more massive hosts <cit.>. Because satellites in more massive hosts can belong to more massive infalling groups than those in less massive hosts <cit.>, they are more likely to be quenched than those in the low-mass hosts <cit.>. Indeed, among all quenched satellites, 25.8% of those in low-mass hosts and 50.2% of those in high-mass hosts are members of other structures when they fall into their final low- and high-mass hosts <cit.>. Note that these results can be affected by the high-mass satellites that experience strong AGN feedback. When we measure the normalized f_ N of low-mass satellites (M_ *,sat<10^10.3 M_⊙), most of them are quenched by post-processing, not pre-processing. This means that most pre-processed satellites in our sample are quenched by strong AGN feedback <cit.>. However, the results with low-mass satellites are still similar to those with all satellites: 1) the number of post-processed satellites exceeds that of pre-processed satellites, and 2) the median dt_ Q decreases with increasing M_ host. Another feature to notice in the left three panels of Figure <ref> is that the peak dt_ Q becomes shorter as M_ host increases. To measure this trend quantitatively, we plot the median dt_ Q with f_ Q(z=0) (the right panel of Figure <ref>). In high-mass hosts, which generally have denser and more extended gas reservoirs (intracluster/intergalactic medium), ram-pressure stripping is more efficient and affects infalling satellites earlier, so satellites are quenched faster than those in low-mass hosts <cit.>. As a result, satellites in high-mass hosts undergo more intense pre- and post-processings than those in low-mass hosts. This enables high-mass hosts to have a higher f_ Q(z=0) than low-mass hosts, as shown in the right panel of Figure <ref>. Interestingly, the median dt_ Q is steeply decreasing with f_ Q(z=0) in the high-M_ host bin (the red line in the right panel of Figure <ref>). This trend comes from the fact that the high-mass bin covers a wide range of M_ host, from 10^14 M_⊙ up to ∼10^15.5 M_⊙.
Indeed, among high-mass hosts, those with 0.8<f_ Q(z=0)≤1.0 are on average more massive than those with 0.6<f_ Q(z=0)≤0.8. Because more massive hosts might experience the most efficient pre- and post-processings, satellites in more massive hosts are more rapidly quenched than those in less massive hosts. In Figure <ref>, we demonstrate why high-mass hosts have higher f_ Q(z=0) than low-mass hosts: satellites in high-mass hosts experience pre-processing most strongly (0.30, compared with 0.16 and 0.19 for low- and intermediate-mass hosts, respectively) and are rapidly quenched by efficient post-processing. However, Figure <ref> does not clearly show whether pre- or post-processing contributes to the scatter of f_ Q(z=0) shown in the right panel of Figure <ref>. To investigate this, we separately examine the fractions of pre- and post-processed satellites and plot the right two panels in Figure <ref>, which show the effects of f_ Q(pre) and f_ Q(post) on f_ Q(z=0). The fractions f_ Q(pre) and f_ Q(post) are calculated by averaging the numbers of pre- and post-processed satellites in each host, and only bins that contain more than 10 data points are used for statistical significance. In the left panel of Figure <ref>, the mean f_ Q(pre) depends only on M_ host and not on f_ Q(z=0), so only the host-mass dependency of f_ Q(pre) is seen (see also Figure <ref>). This demonstrates that satellites are only mildly quenched by pre-processing, whose strength depends on M_ host, before they fall into their final hosts. However, in the middle panel of Figure <ref>, the mean f_ Q(post) depends not only on M_ host but also on f_ Q(z=0), which demonstrates that post-processing quenches most satellites and produces the large scatter in f_ Q(z=0) of low-mass hosts. To measure the amounts of pre- and post-processed satellites quantitatively, we plot f_ Q(pre) and f_ Q(post) as a function of f_ Q(z=0) in the right panel of Figure <ref>. We take an average of f_ Q(pre) and f_ Q(post) in each f_ Q(z=0) bin so that f_ Q(z=0)=f_ Q(pre)+f_ Q(post). Satellites in high-mass hosts have slightly higher f_ Q(pre) than those in low- and intermediate-mass hosts, indicating that pre-processing is more effective for them <cit.>. However, f_ Q(pre) is almost constant with f_ Q(z=0) at a given M_ host, which means pre-processing depends strongly on M_ host, not on f_ Q(z=0). On the other hand, f_ Q(post) increases with decreasing M_ host because satellites in low-mass hosts are less quenched by pre-processing than those in intermediate- and high-mass hosts. In addition, the mean f_ Q(z=0) at a fixed M_ host strongly depends on f_ Q(post). Therefore, although satellites are slightly quenched by pre-processing, whose strength depends on M_ host, it is post-processing, which quenches most satellites, that mainly causes the scatter of f_ Q(z=0). §.§ The Effect of the Infall Time of Satellites on the Quenched Fraction In Figure <ref>, we showed that post-processing causes the scatter of f_ Q(z=0): there is a diversity of f_ Q(z=0) values in a narrow range of M_ host. In this section, we focus on the factors that drive the diversity of f_ Q(z=0) values. As shown in Figures <ref> and <ref>, post-processing plays an important role in galaxy quenching, so the time when satellites fall into their hosts and how long satellites stay in their hosts can have a significant impact on galaxy quenching <cit.>. To investigate this further, we define the infall time (T_ IF) as the time when satellites first fall into their final hosts, and measure it as a lookback time.
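The time conventions used here and in the previous subsection are easy to mix up, so the following minimal sketch spells them out: with T_ IF and T_ Q both measured as lookback times, dt_ Q = T_ IF - T_ Q, and its sign separates post-processed (dt_ Q>0) from pre-processed (dt_ Q<0) satellites. The numbers below are made up purely for illustration; in practice the times come from the sublink merger trees.

import numpy as np

def quenching_timescales(t_infall_lookback, t_quench_lookback):
    # T_IF and T_Q are lookback times (Gyr) of the first snapshot at which a
    # galaxy is identified as a satellite / as quenched; dt_Q = T_IF - T_Q,
    # so dt_Q > 0 means quenched after infall (post-processing) and
    # dt_Q < 0 means already quenched before infall (pre-processing).
    t_if = np.asarray(t_infall_lookback, dtype=float)
    t_q = np.asarray(t_quench_lookback, dtype=float)
    dt_q = t_if - t_q
    return dt_q, np.where(dt_q > 0, "post", "pre")

# toy example with made-up lookback times (Gyr)
t_if = np.array([9.5, 6.0, 3.2, 11.0])
t_q = np.array([7.0, 6.8, 1.5, 10.2])
dt_q, label = quenching_timescales(t_if, t_q)
print(dt_q)                       # [ 2.5 -0.8  1.7  0.8]
print(np.mean(label == "post"))   # fraction of post-processed satellites (0.75 here)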
A large median T_ IF in a given M_ host bin indicates that hosts gained their satellites early, while a small median T_ IF indicates that hosts gained their satellites recently. Thus, the median T_ IF traces the mass growth history of hosts within a similar range of M_ host. The left three panels of Figure <ref> show the cumulative fraction of satellites with T_ IF, indicating that satellites in hosts with high f_ Q(z=0) fall into their hosts early regardless of M_ host. These early-infall satellites stay in their hosts long enough that most of them are quenched by post-processing. To measure this trend quantitatively, we plot the median T_ IF with f_ Q(z=0) in the right panel of Figure <ref>. The median T_ IF increases with f_ Q(z=0), supporting the result that satellites that fall into their hosts early are more likely to be quenched by post-processing. In other words, satellites have a higher probability of being quenched the longer they stay in their hosts. Furthermore, at a fixed median T_ IF, high-mass hosts have the highest f_ Q(z=0) because satellites in high-mass hosts experience pre-processing most strongly and are rapidly quenched by post-processing (as shown in Section <ref>). The early-infall satellites lose their orbital energy due to the gravitational drag force and gradually move towards the center of hosts, a process known as `dynamical friction' <cit.>. As a result, early-infall satellites tend to stay close to the center of their hosts, losing their orbital energy, while late-infall satellites are more likely to be located in the outskirts of their hosts. This spatial separation between early- and late-infall satellites is shown in a phase-space diagram, which is a useful tool for studying the relationship between the infall time of satellites and their location in hosts <cit.>. To investigate how T_ IF and f_ Q(z=0) of satellites appear in a phase-space diagram, we plot the left panel of Figure <ref>. Satellites that fall into their final hosts early are located in the innermost region, indicating that these satellites might have spent most of their time in their final hosts. We also plot the quenching finish time (T_ Q) of satellites in a phase-space diagram to compare it with T_ IF. We define T_ Q as T_ IF- dt_ Q, which represents the lookback time when the quenching of satellites is completed. The similarity in color gradient between T_ IF and T_ Q indicates that satellites that fall into their hosts earlier tend to be quenched earlier by post-processing <cit.>. Finally, we plot the right panel of Figure <ref> to examine the relationship of T_ IF and T_ Q to f_ Q(z=0). It shows that satellites located in the innermost region are the most quenched. The good agreement in color gradient of the three panels in Figure <ref> shows that satellites that fall into their final hosts early could be most quenched because they have spent enough time in their hosts. In Figure <ref>, we showed that hosts with a higher f_ Q(z=0) have a higher median T_ IF than those with a lower f_ Q(z=0). To compare this result with observations, we can use an observable projected phase-space diagram that can estimate the mean T_ IF (Jeong et al. in prep.). Based on the results in Figure <ref>, satellites can be divided into early- and late-infall satellites in a projected phase-space diagram <cit.>. To define early- and late-infall satellites, we measure the projected radius (r_ proj) and the line-of-sight velocity (v_ LOS) of each satellite.
For the projected radius, we take an average of the radius from the center of hosts to their satellites in the xy-, yz-, and xz-planes. For the line-of-sight velocity, we correct for the Hubble flow <cit.> and take an average of v_ LOS in the xy-, yz-, and xz-planes. The left panel of Figure <ref> shows early- and late-infall satellites defined as `ancient infallers (A)' and `recent infallers (R)' <cit.>[Note that we follow the method in <cit.>, who divide the phase-space diagram into two regions for statistical significance, unlike <cit.>, who divide it into five regions.]. We define region `A', which includes most ancient infallers that fell into their final hosts more than 6.45 Gyr ago, and region `R', which mostly includes first infallers that have not yet fallen into their final hosts but will do so <cit.>. The middle panel of Figure <ref> shows the fraction of ancient infallers, f_ ancient=A/(A+R), with f_ Q(z=0). As f_ Q(z=0) increases, f_ ancient also increases, which indicates that hosts with high f_ Q(z=0) have many ancient infallers and thus gained their mass early. Thus, ancient infallers are likely to be more quenched by post-processing because they have spent most of their time in their final hosts. Since we define ancient infallers as satellites within region `A', which is determined by projected position and velocity, the projection effect may cause some non-member satellites to be included in region `A'. To check the projection effect on the observational estimate in the middle panel, we alternatively define `ancient infallers' as satellites that fell into their hosts more than 6.45 Gyr ago <cit.>, using T_ IF instead of region `A'. The right panel of Figure <ref> shows the results. When we use T_ IF, which can be measured in simulations, the value of f_ ancient is much smaller than the value in the middle panel because, among the ancient infallers in the middle panel, about 30% fell into their hosts less than 6.45 Gyr ago. This implies that a large number of recent infallers are misidentified as ancient infallers due to projection effects. However, both panels show that hosts with high f_ Q(z=0) tend to have a large population of ancient infallers, which suggests that they grew earlier than hosts with low f_ Q(z=0). The trend of f_ ancient measured from the phase-space diagram (the middle panel) is similar to that of f_ ancient measured from T_ IF in the simulations (the right panel). Thus, we can estimate the mass growth of hosts from the observable value of f_ ancient. Note that high-mass galaxies (10^10.5 M_⊙<M_ *,gal<10^11.0 M_⊙) in IllustrisTNG are overquenched by strong baryonic feedback from AGN <cit.>. Overquenching makes quenched fractions high compared to observations <cit.>, so there might be some offset of f_ Q(z=0) between simulations and observations. However, we can still compare hosts with earlier growth to those with more recent growth by using the relative values of f_ Q(z=0). § QUENCHED FRACTIONS WITH VARIOUS PROPERTIES OF HOSTS In this section, we investigate various properties of hosts that are related to their mass growth history and f_ Q(z=0). There are various observational properties of galaxy clusters: the abundance of satellites, density, colors, morphology, ellipticity, isotropy, and so on. Among them, we use the abundance of satellites, the ellipticity (ϵ), the number density (ρ_ n), the nearest distance to groups or clusters (D_ G13), and the isotropy of satellites (β).
Satellites whose stellar mass (M_ *,sat) is larger than 10^9 M_⊙ are used to calculate each property. The cluster formation time (z_ m50), which is the redshift when hosts first reach 50% of M_ host(z=0), is included as an indicator of the mass growth history of hosts <cit.>. Unlike the other properties, z_ m50 is not an observable property. However, we have already seen that different T_ IF can affect the scatter of f_ Q(z=0), so how z_ m50 affects f_ Q(z=0) has to be examined. We define the abundance of satellites as below: N_ sat/M_ host×10^12, where N_ sat is the number of satellites within the virial radius (R_ host). We use the abundance normalized by M_ host because, in general, N_ sat depends strongly on M_ host <cit.>. Thus, if the abundance is small in a given range of M_ host, hosts are dynamically more evolved than hosts with a large abundance. We calculate the total ellipticity (ϵ) using the positions of satellites relative to the hosts <cit.>. We measure the ellipticity of satellites within R_ host in the xy-, yz-, and xz-planes, and take an average of these ellipticities (ϵ). Thus, ϵ describes how the satellites inside the host are distributed. To measure the number density of satellites around each host (ρ_ n), we count the number of satellites from R_ host to 3R_ host. We calculate the nearest distance (D_ G13) to any group or cluster whose M_ host is larger than 10^13 M_⊙. Thus, hosts with small D_ G13 are located in denser environments, while those with larger D_ G13 are in less dense regions. We measure the isotropy of satellites from 0 to 5R_ host, using the azimuthal symmetric excess <cit.> with harmonic orders of m=1 to m=4: β=∑_m=1^nβ_m=∑_m=1^n|Q_m|/|Q_0|, where |Q_m| is the modulus of the aperture multipole moment at the order of m (a simple discrete estimate of β is sketched below). In the xy-, yz-, and xz-planes, we compute β and take the average of the three values. Figure <ref> shows 2D histograms of z_ m50 and various observational properties of hosts with M_ host. The properties of hosts and M_ host are divided into 10 bins to examine their variations. We find that f_ Q(z=0) strongly depends on M_ host (vertical stripes of colors) and weakly depends on z_ m50 (diagonal stripes of colors) and the abundance of satellites (only when M_ host is small). In Sections <ref> and <ref>, we found that post-processing is more important for galaxy quenching than pre-processing; that is, satellites falling into their hosts early are likely to be more quenched due to a longer stay in their hosts compared to those falling later. Hosts with a high z_ m50 have many early-infall satellites (a high mean T_ IF), while those with a low z_ m50 have many late-infall satellites (a low mean T_ IF). Therefore, the different mass growth histories result in diverse values of f_ Q(z=0). On the other hand, in low-mass hosts, f_ Q(z=0) decreases with increasing abundance of satellites. In a given range of M_ host, hosts are dynamically unrelaxed if the abundance of satellites is high, indicating that satellites have fallen recently. These satellites have not stayed in their hosts long enough to be quenched, so eventually such hosts could have low f_ Q(z=0). The other properties of hosts, ϵ, ρ_n, D_ G13, and β, do not show a clear relation to f_ Q(z=0) because those properties are related to the environment or the mass distribution rather than to the mass growth history of hosts. To quantitatively measure the variation of f_ Q(z=0) with the various properties of hosts in Figure <ref>, we plot Figure <ref>.
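The isotropy parameter β mentioned above can be estimated in several ways; the sketch below assumes the simplest discrete form of the aperture multipole moments, Q_m = ∑_j e^{imφ_j} with uniform weights, which may differ from the exact weighting adopted in the paper.

import numpy as np

def isotropy_beta(x, y, m_max=4):
    # beta = sum_{m=1}^{4} |Q_m| / |Q_0| for the satellites of one host,
    # projected onto one plane, assuming Q_m = sum_j exp(i m phi_j) with
    # phi_j the azimuthal angle of satellite j about the host centre.
    phi = np.arctan2(y, x)
    q0 = len(phi)   # |Q_0| for unit weights
    return sum(np.abs(np.sum(np.exp(1j * m * phi))) for m in range(1, m_max + 1)) / q0

def isotropy_beta_3d(pos):
    # average over the xy, yz, and xz projections, as done in the text;
    # pos is an (N, 3) array of satellite positions relative to the host
    return np.mean([isotropy_beta(pos[:, i], pos[:, j]) for i, j in ((0, 1), (1, 2), (0, 2))])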
In this comparison, only low-mass hosts (10^13 M_⊙<M_ host<10^13.5 M_⊙) are used for statistical reliability. We first sort the values of each property in ascending order and then divide each property into 5 ranks of 0-20%, 20-40%, 40-60%, 60-80%, and 80-100%, i.e., rank 1 represents the lowest value of each property and rank 5 denotes the highest value. Next, we calculate the mean f_ Q(z=0) of each rank. In Figure <ref>, the mean f_ Q(z=0) decreases with the abundance of satellites and increases with z_ m50. Hosts with a high z_ m50 gained their mass early due to the early infall of their satellites. Early-infall satellites have spent more time in the potential of their hosts on average, and naturally lose more mass as they are subjected to post-processing, which can lead not only to mass loss but also to the destruction of satellites. After satellites fall into their hosts, they experience a hydrodynamical interaction with the intracluster medium (ICM) <cit.>, namely ram-pressure stripping, which can halt their SF activity. Satellites are also affected by gravitational tides in the potential well of their hosts, so dark matter, stars and gas in satellites can be stripped <cit.>. Harassment <cit.> and galaxy mergers <cit.> can also distort and destroy satellites. Some satellites lose their orbital energy through dynamical friction <cit.> and finally merge into the central cluster galaxy (the Brightest Cluster Galaxy). As a result, the destruction rate of early-infall satellites is higher, making the abundance of satellites lower. Therefore, in the low-M_ host bin, the abundance of satellites and the fraction of quenched satellites at z=0 can be observable indicators of the mass growth history of hosts. We find that in intermediate- and high-mass hosts, only z_ m50 has a relation to f_ Q(z=0). The dependence of f_ Q on T_ IF has already been shown (the right panel of Figure <ref>), so the same trend appears for z_ m50. Satellites in intermediate- and high-mass hosts fall into their hosts more recently and are quenched more rapidly than those in low-mass hosts (the right panels of Figures <ref> and <ref>). However, these satellites are not easily disrupted even if they fell into their hosts a long time ago <cit.>. This makes the dependence of the abundance on f_ Q weak, unlike in the case of low-mass hosts. Therefore, only the fraction of quenched satellites at z=0 can be used as an indicator of the mass growth history of intermediate- and high-mass hosts. § EFFECT OF HOST PERICENTER PASSAGE ON QUENCHED SATELLITES In this paper, we emphasize the importance of post-processing for satellites to be quenched because 1) most satellites are quenched after they fall into their final hosts, and 2) satellites that fall into their hosts are quenched by efficient post-processing, so the different mean T_ IF of satellites produces a scatter in f_ Q(z=0). Although satellites can be quenched by several physical mechanisms, we are unable to differentiate between them. Instead, we can examine the effect of the pericenter passage on galaxy quenching. Various studies have shown that galaxies are quenched around the first pericenter <cit.>. <cit.> analyze a sample of 11 massive ellipticals in the Coma cluster. They find that the SF in those ellipticals stops 1 Gyr after the first pericenter passage and that this rapid quenching comes from the combination of ram-pressure stripping and tidal interactions.
To investigate the effect of the passage of the pericenter of hosts on galaxy quenching, we trace the orbit of each satellite and measure the time interval from when satellites pass by the pericenter (T_ peri) to the quenching finish time: dt_ P-Q = T_ peri-T_ Q. Figure <ref> shows the normalized number of satellites with dt_ P-Q. Note that, to see the effect of the pericenter, we exclude satellites that are pre-processed or do not pass by the pericenter. Satellites with dt_ P-Q>0 are quenched after passing by the pericenter, while those with dt_ P-Q<0 are quenched before. Most satellites are quenched after passing by the pericenter, so the processes acting on satellites around the pericenter passage are important for galaxy quenching. We find that high-mass hosts have a slightly higher quenched fraction before the pericenter (0.31) compared to low- and intermediate-mass hosts (0.21 and 0.22, respectively). Although we remove satellites that do not pass by the pericenter or are pre-processed, some satellites in high-mass hosts might already have low SFRs, possibly approaching zero, before the passage of the pericenter, owing to efficient pre-processing (see Section <ref>) and to post-processing acting right after they fall into their hosts. This is why high-mass hosts have large quenched fractions before the passage of the pericenter. The mean dt_ P-Q is 1.9, 1.4, and 0.6 Gyr in the low-, intermediate-, and high-mass bins, respectively. The shortest mean dt_ P-Q in high-mass hosts might result from efficient ram-pressure stripping <cit.> and tidal stripping <cit.>. We also measure dt_ P-Q with M_ *,sat to investigate how satellites are affected by the passage of the pericenter. Because high-mass satellites (10^10.3 M_⊙< M_ *,sat) are affected by strong AGN feedback, we only investigate dt_ P-Q of low-mass satellites (10^9.5 M_⊙≤ M_ *,sat<10^10.3 M_⊙). In the case of low-mass satellites, the average dt_ Q and dt_ P-Q are ∼3.1 Gyr and ∼1.4 Gyr, respectively, so low-mass satellites pass by the pericenter of their hosts about 1.7 Gyr after infall. This suggests that the SFR of low-mass satellites is either barely affected after the first infall or only gradually suppressed via ram-pressure stripping or strangulation <cit.>. When low-mass satellites fall into their final hosts, the density of the ICM around the outskirts of hosts is low, so the cold gas in satellites hardly interacts with the surrounding ICM. During this phase, steady gas depletion (starvation/strangulation) occurs in low-mass satellites. Thus, they can continue to increase their M_ *,sat because their SFR is not completely suppressed. When satellites approach the inner region of their hosts, they can reach the threshold ICM density (∼10^-28 g cm^-3), and then a significant fraction of their cold gas becomes susceptible to ram-pressure stripping. Finally, satellites can be quenched after the passage of the pericenter of their hosts <cit.>. Another point to note is that dt_ P-Q of low-mass satellites increases with M_ *,sat. As M_ *,sat decreases, the potential well becomes shallower, so the gas in lower-mass satellites is more easily stripped by tides or ram pressure <cit.>. In this section, we investigated how the passage of the pericenter can affect the SF of satellites. We found that satellites in high-mass hosts are slightly more quenched before the pericenter passage than those in low- and intermediate-mass hosts due to efficient pre- and post-processings.
Satellites in high-mass hosts are quenched most rapidly after passing by the pericenter, due to efficient ram-pressure stripping. Low-mass satellites are rapidly quenched after passing by the pericenter, following a delay phase. The time from the passage of the pericenter to being quenched increases with the stellar mass of satellites because the gas in low-mass satellites might be easily stripped due to their shallower potential wells. § SUMMARY We investigate the fraction of quenched satellites of hosts whose virial mass is larger than 10^13 M_⊙ at z=0, using TNG300 in the IllustrisTNG cosmological magnetohydrodynamical simulations. In the simulations, the fraction of quenched satellites depends on the virial mass of hosts, and there is a scatter in the quenched fractions at z=0. Throughout the paper, we examine which physical mechanisms cause those results and which properties of hosts are related to the fraction of quenched satellites. Below is a summary of our results. * Post-processing is important for satellites to be quenched because 1) most satellites are quenched after falling into their hosts and 2) the fraction of quenched satellites at z=0 increases with the fraction of quenched satellites after infall. * Satellites in high-mass hosts experience more pre-processing than those in low- and intermediate-mass hosts, and they are rapidly quenched by efficient post-processing after falling into their hosts. This makes satellites in high-mass hosts the most quenched at z=0, producing a host-mass dependence of the fraction of quenched satellites at z=0. * The different infall times of satellites produce a scatter in the fraction of quenched satellites at z=0 in a given mass range of hosts. Satellites that fall into their hosts early have spent most of their time in their hosts, so they could be more quenched than those falling in late. Thus, hosts with a high fraction of quenched satellites at z=0 have many early-infall satellites, while hosts with low quenched fractions have many late-infall satellites. * In a phase-space diagram, satellites that fall into their hosts early are located in the innermost region of hosts. These satellites have spent most of their time in their hosts, and are most quenched by post-processing. Thus, if hosts have a high fraction of quenched satellites at z=0, the fraction of early-infall satellites is high, which means that the hosts gained their mass early. This indicates that if we know the fraction of quenched satellites at z=0, we can estimate the mass growth history of hosts. * Among the various properties of hosts, z_ m50 and the abundance of satellites are related to the fraction of quenched satellites at z=0 in low-mass hosts (10^13 M_⊙<M_ host≤10^13.5 M_⊙). Thus, in low-mass hosts, the satellite abundance and the fraction of quenched satellites at z=0 can serve as indirect indicators of the mass growth history of hosts. In intermediate- and high-mass hosts, only z_ m50 is related to the fraction of quenched satellites at z=0. * Most satellites are quenched after passing by the pericenter of their hosts. The time interval from when satellites pass by the pericenter to the quenching finish time decreases with the mass of hosts because of effective ram-pressure stripping and tidal stripping in massive hosts. Furthermore, low-mass satellites seem to experience a delay phase before quenching, because most of them spend more than half of the time since infall before passing the pericenter and are then quenched after the pericenter passage.
This work was supported by the Korea Astronomy and Space Science Institute under the R&D program (Project No. 2023-1-830-01) supervised by the Ministry of Science and ICT (MSIT). KWC was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2021R1F1A1045622). H.J. acknowledges support from the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2019R1F1A1041086). JHL acknowledges support from the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2022R1A2C1004025). This research was partially supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. § PLOTS USING THE RED GALAXY FRACTION Red galaxies in TNG300 are defined as galaxies whose g-r color is larger than (g-r) = 0.5logM_ *,gal+0.1 mag <cit.>. Figure <ref> shows the red galaxy fraction (f_ R) as a function of M_ host at z=0. f_ R(z=0) increases with M_ host, similar to the trend in the right panel of Figure <ref>. We plot the mean f_ R(z=0) in a phase-space diagram (the left panel of Figure <ref>). The trend of the color gradient is similar to that in the right panel of Figure <ref>. Defining ancient infallers as satellites located in region `A' (see the left panel of Figure <ref>), we plot the fraction of ancient infallers against f_ R(z=0) in the right panel of Figure <ref>. The fraction of ancient infallers increases with f_ R(z=0), indicating that satellites falling into their hosts early are more likely to become red galaxies than those falling in recently. In Figure <ref>, we investigate the relation between z_ m50 and f_ R(z=0). Similar to the upper left panel of Figure <ref>, f_ R(z=0) can also serve as an indicator of the mass growth history of hosts.
http://arxiv.org/abs/2307.05965v1
20230712072107
Descriptive properties of the type of an irrational number
[ "William Banks", "Asma Harcharras", "Dominique Lecomte" ]
math.GN
[ "math.GN", "math.LO", "math.NT" ]
http://arxiv.org/abs/2307.06100v1
20230712114816
Agilicious: Open-Source and Open-Hardware Agile Quadrotor for Vision-Based Flight
[ "Philipp Foehn", "Elia Kaufmann", "Angel Romero", "Robert Penicka", "Sihao Sun", "Leonard Bauersfeld", "Thomas Laengle", "Giovanni Cioffi", "Yunlong Song", "Antonio Loquercio", "Davide Scaramuzza" ]
cs.RO
[ "cs.RO" ]
§ CODE AND MULTIMEDIA MATERIAL Code and data can be found at <https://agilicious.dev>. A video of the experiments can be found at <https://youtu.be/fNYxPLyJ5YY>. § INTRODUCTION Quadrotors are extremely agile vehicles. Exploiting their agility in combination with full autonomy is crucial for time-critical missions, such as search and rescue, aerial delivery, and even flying cars. For this reason, over the past decade, research on autonomous, agile quadrotor flight has continually pushed platforms to higher levels of speed and agility <cit.>. To further advance the field, several competitions have been organized—such as the autonomous drone racing series at the recent IROS and NeurIPS conferences <cit.> and the AlphaPilot challenge <cit.>—with the goal to develop autonomous systems that will eventually outperform expert human pilots. Million-dollar projects, such as AgileFlight <cit.> and Fast Lightweight Autonomy (FLA) <cit.>, have also been funded by the European Research Council and the United States government, respectively, to further push research. Agile flight comes with ever-increasing engineering challenges since performing faster maneuvers with an autonomous system requires more capable algorithms, specialized hardware, and proficiency in system integration. As a result, only a small number of research groups have undertaken the significant overhead of hardware and software engineering, and have developed the expertise and resources to design quadrotor platforms that fulfill the requirements on weight, sensing, and computational budget necessary for autonomous agile flight. This work aims to bridge this gap through an open-source agile flight platform, enabling everyone to work on agile autonomy with minimal engineering overhead. The platforms and software stacks developed by research groups <cit.> vary strongly in their choice of hardware and software tools. This is expected, as optimizing a robot with respect to different tasks based on individual experience in a closed-source research environment leads to a fragmentation of the research community. For example, even though many research groups use the ROS middleware to accelerate development, publications are often difficult to reproduce or verify since they build on a plethora of previous implementations of the authoring research group. In the worst case, building on an imperfect or even faulty closed-source foundation can lead to wrong or non-reproducible conclusions, slowing down research progress. To break this vicious cycle and to democratize research on fast autonomous flight, the robotics community needs an open-source and open-hardware quadrotor platform that provides the versatility and performance needed for a wide range of agile flight tasks. Such an open and agile platform does not yet exist, which is why we present Agilicious, an open-source and open-hardware agile quadrotor flight stack summarized in Figure <ref>. To reach the goal of creating an agile, autonomous, and versatile quadrotor research platform, two main design requirements must be met by the quadrotor: it must carry the required compute hardware needed for autonomous operation, and it must be capable of agile flight.
To meet the first requirement on computing resources needed for true autonomy, a quadrotor should carry sufficient compute capability to concurrently run estimation, planning, and control algorithms onboard. With the emergence of learning-based methods, efficient hardware acceleration for neural network inference is also required. To enable agile flight, the platform must deliver an adequate thrust-to-weight ratio (TWR) and thrust-to-inertia ratio (TIR). The TWR can often be enhanced using more powerful motors, which in turn require larger propellers and thus a larger platform. However, the TIR typically decreases with higher weight and size, since the moment of inertia increases quadratically with the size and linearly with the weight. As a result, it is desirable to design a lightweight and small platform <cit.> to maximize agility (i.e. maximize both TWR and TIR). Therefore, the platform design should strike the best trade-off, since maximizing compute resources competes against maximizing flight performance. Apart from hardware design considerations, a quadrotor research platform needs to provide the software framework for flexible usage and reproducible research. This entails the abstraction of hardware interfaces and a general co-design of software and hardware necessary to exploit the platform's full potential. Such co-design must account for the capabilities and limitations of each system component, such as the complementary real-time capabilities of common operating systems and embedded systems, communication latencies and bandwidths, system dynamics bandwidth limitations, and efficient usage of hardware accelerators. In addition to optimally using hardware resources, the software should be built in a modular fashion to enable rapid prototyping through a simple exchange of components, both in simulation and real-world applications. This modularity enables researchers to test and experiment with their research code, without the requirement to develop an entire flight stack, accelerating time to development and facilitating reproducibility of results. Finally, the software stack should run on a broad set of computing boards, be efficient, be easy to transfer and adapt by having minimal dependencies, and provide known interfaces, such as the widely used Robot Operating System (ROS). The complex set of constraints and design objectives is difficult to meet. There exists a variety of previously published open-source research platforms, which, while well designed for low-agility tasks, could only satisfy a subset of the aforementioned hardware and software constraints. In the following section, we list and analyze prominent examples such as the FLA platform <cit.>, the MRS quadrotor <cit.>, the ASL-Flight <cit.>, the MIT-Quad <cit.>, the GRASP-Quad <cit.>, and our previous work <cit.>. The FLA platform <cit.> relies on many sensors, including Lidars and laser-range finders in conjunction with a powerful onboard computer. While this platform can easily meet autonomous flight computation and sensing requirements, it does not allow agile flight beyond 2.4g of thrust, limiting the flight envelope to near-hover flight. The MRS platform <cit.> provides an accompanying software stack and features a variety of sensors. Even though this hardware and software solution allows fully autonomous flight, the actuation renders the system non-agile, with a maximum thrust-to-weight ratio of 2.5. The ASL-Flight <cit.> is built on the DJI Matrice 100 platform and features an Intel NUC as the main compute resource.
Similarly to the MRS platform, the ASL-Flight has very limited agility due to its weight being on the edge of the platform's takeoff capability. The comparably smaller GRASP-Quad proposed in <cit.> operates with only onboard IMU and monocular camera while having a weight of only 250. Nevertheless, the Qualcomm Snapdragon board installed on this platform lacks computational power and also the actuation constrains the maximal accelerations below 1.5g. Motivated by drone racing, the MIT-Quad <cit.> reported accelerations of up to 2.1g while it was further equipped with NVIDIA Jetson TX2 in <cit.>, however, it does not reach the agility of Agilicious and contains proprietary electronics. Finally, the quadrotor proposed in <cit.> is a research platform designed explicitly for agile flight. Although the quadrotor featured a high thrust-to-weight ratio of up to 4, its compute resources are very limited, prohibiting truly autonomous operation. All these platforms are optimized for either relatively heavy sensor setups or for agile flight in non-autonomous settings. While the former platforms lack the required actuation power to push the state of the art in autonomous agile flight, the latter have insufficient compute resources to achieve true autonomy. Finally, several mentioned platforms rely on either Pixhawk-PX4 <cit.>, the Parrot <cit.> or DJI <cit.> low-level controllers, which are mostly treated as blackboxes. This, together with the proprietary nature of the DJI systems, limits control over the low-level flight characteristics, which not only limits interpretability of results, but also negatively impacts agility. Full control over the complete pipeline is necessary to truly understand aerodynamic and high-frequency effects, model and control them, and exploit the platform to its full potential. Apart from platforms mainly developed by research labs, several quadrotor designs are proposed by industry (Skydio <cit.>, DJI <cit.>, Parrot <cit.>) and open-source projects (PX4 <cit.>, Paparazzi <cit.>, Crazyflie <cit.>). While Skydio <cit.> and DJI <cit.> both develop platforms featuring a high level of autonomy, they do not support interfacing with custom code and therefore are of limited value for research and development purposes. Parrot <cit.> provides a set of quadrotor platforms tailored for inspection and surveillance tasks that are accompanied by limited software development kits that allow researchers to program custom flight missions. In contrast, PX4 <cit.> provides an entire ecosystem of open-source software and hardware as well as simulation. While these features are extremely valuable especially for low-speed flight, both cross-platform hardware and software are not suited to push the quadrotor to agile maneuvers. Similarly, Paparazzi <cit.> is an open-source project for drones, which supports various hardware platforms. However, the supported autopilots have very limited onboard compute capability, rendering them unsuited for agile autonomous flight. The Crazyflie <cit.> is an extremely lightweight quadrotor platform with a takeoff weight of only 27. The minimal hardware setup leaves no margin for additional sensing or computation, prohibiting any non-trivial navigation task. 
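Before turning to the Agilicious design, the TWR/TIR trade-off discussed above can be made concrete with a small numerical sketch. The expressions below use a deliberately simplified layout (rotors treated as point masses at the end of an arm of length l, so the inertia scales as m·l^2); they are illustrative only and are not the models used by any of the platforms mentioned here:

```python
G = 9.81  # gravitational acceleration [m/s^2]

def thrust_to_weight(total_thrust_n: float, mass_kg: float) -> float:
    """TWR: collective thrust divided by platform weight."""
    return total_thrust_n / (mass_kg * G)

def thrust_to_inertia_proxy(total_thrust_n: float, mass_kg: float, arm_m: float) -> float:
    """Rough TIR proxy: achievable angular acceleration [rad/s^2] about one body axis.

    The moment of inertia grows like m*l^2 (quadratically with size, linearly with
    mass), while the lever arm of the differential thrust grows only linearly, so the
    achievable angular acceleration drops as the platform gets larger and heavier.
    """
    torque = 0.5 * total_thrust_n * arm_m   # differential thrust acting on one side
    inertia = mass_kg * arm_m ** 2          # I ~ m * l^2 for point-mass rotors
    return torque / inertia

# Arbitrary example values: doubling the arm length at fixed mass and thrust
# halves this TIR proxy, while the TWR stays unchanged.
print(thrust_to_inertia_proxy(20.0, 1.0, 0.10), thrust_to_inertia_proxy(20.0, 1.0, 0.20))
```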
To address the requirements of agile flight, the shortcomings of existing works, and to enable the research community to progress fast towards agile flight, we present an open-source and open-hardware quadrotor platform for agile flight at more than 5 acceleration while providing substantial onboard compute and versatile software. The hardware design leverages recent advances in motor, battery, and frame design initiated by the FPV racing community. The design objectives resulted in creating a lightweight 750 platform with maximal speed of 131. This high-performance drone hardware is combined with a powerful onboard computer that features an integrated GPU, enabling complex neural network architectures to run at high frequency while concurrently running optimization-based control algorithms at low latency onboard the drone. The most important features of the Agilicious framework are summarized and compared with relevant research and industrial platforms in Figure <ref>. A qualitative comparison of mutually contradicting onboard computational power and agility is presented in Figure <ref>. In co-design with the hardware, we complete the drone design with a modular and versatile software stack, called Agilicious. It provides a software architecture that allows to easily transfer algorithms from prototyping in simulation to real-world deployment in instrumented experiment setups, and even pure onboard-sensing applications in unknown and unstructured environments. This modularity is key for fast development and high-quality research, since it allows to quickly substitute existing components with new prototypes and enables all software components to be used in standalone testing applications, experiments, benchmarks, or even completely different applications. The hardware and software presented in this work have been developed, tested, and refined throughout a series of research publications <cit.>. All these publications share the ambition to push autonomous drones to their physical limits. The experiments, performed in a diverse set of environments demonstrate the versatility of Agilicious by deploying different estimation, control, and planning strategies on a single physical platform. The flexibility to easily combine and replace both hard- and software components in the flight stack while operating on a standardized platform facilitates testing new algorithms and accelerates research on fast autonomous flight. § RESULTS Our experiments, conducted in simulation and in the real world, demonstrate that Agilicious can be used to perform cutting-edge research in the fields of agile quadrotor control, quadrotor trajectory planning, and learning-based quadrotor research. We evaluate the capabilities of the Agilicious software and hardware stack in a large set of experiments that investigate trajectory tracking performance, latency of the pipeline, combinations of Agilicious with a set of commercially or openly available vision-based state estimators. Finally, we present two demonstrators of recent research projects that build on Agilicious. §.§ Trajectory Tracking Performance In this section, we demonstrate the tracking performance of our platform by flying an aggressive time-optimal trajectory in a drone racing scenario. Additionally, to benchmark our planning and control algorithms, we compete against a world-class drone racing pilot FPV pilot, reported in <cit.>. 
As illustrated in Figure <ref>, our drone racing track consists of seven gates that need to be traversed in a pre-defined order as fast as possible. The trajectory used for this evaluation reaches speeds of 60 and accelerations of 4 . Flying through gates at such high speed requires precise state estimates, which is still an open challenge for vision-based state estimators <cit.>. For this reason, we conduct these experiments in an instrumented flight volume of 30×30 × 8 (7200), equipped with 36 VICON cameras that provide precise pose measurements at 400. However, even when provided with precise state estimation, accurately tracking such aggressive trajectories poses considerable challenges with respect to the controller design, which usually requires several iterations of algorithm development and substantial tuning effort. The proposed Agilicious flight stack allows us to easily design, test, and deploy different control methods by first verifying them in simulation and then fine-tuning them in the real world. The transition from simulation to real-world deployment requires no source-code changes or adaptations, which reduces the risk of crashing expensive hardware, and is one of the major features of Agilicious accelerating rapid prototyping. Figure <ref> includes a simulated flight that shows similar characteristics and error statistics compared to the real-world flights described next. We evaluate three different system and control approaches including onboard computation with an off-the-shelf BetaFlight <cit.> flight controller, our custom open-source controller, and an offboard-control scenario. These three system configurations represent various use cases of Agilicious, such as running state-of-the-art single-rotor control onboard the drone using our custom low-level flight controller described in Sec. <ref>, or simple remote control by executing Agilicious on a desktop computer and forwarding the commands to the drone. All configurations use the motion capture state estimate and our single-rotor MPC described in Section <ref> <cit.> as high-level controller. We use the single-rotor thrust formulation to correctly account for actuation limits, but use bodyrates and collective thrust as command modality. The first configuration runs completely onboard the drone with an additional low-level controller in the form of INDI as described in Section <ref> <cit.>. It uses the MPC's output to compute refined single-rotor thrust commands using INDI, to reduce the sensitivity to model inaccuracies. These single-rotor commands are executed using our flight controller with closed-loop motor speed tracking. The second configuration also runs onboard and directly forwards the bodyrate and collective thrust command from the MPC to a BetaFlight <cit.> controller. This represents the most simplistic system which does not require flashing the flight controller and is compatible with a wide range of readily available off-the-shelf hobbyist drone components. However, in this configuration, the user does not get any IMU or motor speed feedback, as those are not streamed by BetaFlight. The third configuration is equal to the second configuration with the difference that the Agilicious flight stack runs offboard on a desktop or laptop computer. The bodyrate and collective thrust commands from the MPC are streamed to the drone using a serial wireless link implemented through LAIRD <cit.>. This configuration allows running computationally demanding algorithms, such as GPU-accelerated neural networks, with minimal modifications.
However, due to the additional wireless command transmission, there is a higher latency, which can potentially degrade the control performance. Finally, the Agilicious simulation is executed using the same setup as the first configuration. It uses accurate models for the quadrotor and motor dynamics, as well as a BEM aerodynamic model as described in <cit.>. Figure <ref>A,B depict the results of these trajectory tracking experiments. Our first proposed configuration (i.e. with onboard computation and the custom flight controller) achieves the best overall tracking performance with the lowest average positional RMSE of just 0.322 at up to 60 and 4 . Next up is the second configuration with BetaFlight, still achieving less than 0.385 average positional RMSE. Finally, the third configuration with offboard control exhibits higher latency, leading to an increased average positional RMSE of 0.474. As can be seen, our simulation closely matches the performance observed in the real world in the first configuration. The simulated tracking results in slightly lower errors, 0.320 RMSE, since even the state-of-the-art aerodynamic models <cit.> fail to reproduce the highly non-linear and chaotic real aerodynamics. This simulation accuracy allows a seamless transition from simulation prototyping to real-world verification, and is one of the prominent advantages of Agilicious. Additional experiments motivating the choice of MPC as outer-loop controller and its combination with INDI can be found in <cit.>, details on the planning of the time-optimal reference trajectory are elaborated on in <cit.>, and additional extensions to the provided MPC for fast flight are in <cit.> and for rotor-failure MPC in <cit.>. These related publications also showcase performance at even higher speeds of up to 70 and accelerations reaching 5 . The following section gives further insights into the latency of the three configurations tested herein, including the on- and offboard control architectures, as well as the BetaFlight and our custom flight controllers. §.§ Control Latency All real systems with finite resources suffer from communication and computation delays, while dynamic systems and even filters can introduce additional latency and bandwidth limitations. Analyzing and minimizing these delays is fundamental for the performance in any control task, especially when tracking agile and fast trajectories in the presence of model mismatch, disturbances and actuator limitations. In this section we conduct a series of experiments that aim to analyse and determine the control latency, from command to actuation, of the proposed architecture for the three different choices of low-level configurations: our custom flight controller, BetaFlight, and BetaFlight with offboard control. For this experiment, the quadrotor has been mounted on a load cell (ATI Mini 40 SI-20-1 <cit.>) measuring the force and moment acting on the platform. To measure the latency, a collective thrust step command of 12 is sent to the corresponding low-level controller, while measuring the exerted force on the drone. These force sensor measurements are time-synchronized with the collective thrust commands, and fitted with a first-order system representing the motor dynamics. The measured delays are reported in Figure <ref>C as the difference between the time at which the high-level controller sends the collective thrust command, and the time at which the measured force effectively starts changing. The results show that our custom flight controller has the lowest latency at 35, with BetaFlight slightly slower at 40.15.
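To make the latency measurement just described concrete, the sketch below shows one way such a delay could be extracted from logged data. It assumes time-synchronized arrays of load-cell samples and a known command timestamp, and it detects the onset of the force rise with a simple threshold instead of the full first-order fit mentioned above; variable names are illustrative and not part of the Agilicious API.

```python
import numpy as np

def command_to_force_latency(t_cmd, t_force, force, rise_fraction=0.05):
    """Estimate the command-to-actuation delay from a thrust-step experiment.

    t_cmd          : time [s] at which the collective-thrust step was commanded
    t_force, force : time-synchronized load-cell samples (times [s], forces [N])
    rise_fraction  : fraction of the total force increase used to decide when the
                     measured force "effectively starts changing"
    """
    t_force, force = np.asarray(t_force), np.asarray(force)
    baseline = force[t_force < t_cmd].mean()          # pre-step force level
    final = force[-max(1, len(force) // 10):].mean()  # steady state after the step
    threshold = baseline + rise_fraction * (final - baseline)
    onset_idx = np.argmax((t_force > t_cmd) & (force > threshold))
    return float(t_force[onset_idx] - t_cmd)

# A first-order response f(t) = f_ss + (f_0 - f_ss) * exp(-(t - t_d)/tau) could then be
# fitted (e.g. with scipy.optimize.curve_fit) to separate the pure delay t_d from the
# motor time constant tau, similar to the analysis reported above.
```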
A large delay can be observed when using offboard control and sending the commands via Laird connection to the drone, in which case the latency rises to more than 75. The impact of these latencies is also reflected in the tracking error in Figure <ref>A,B. To put the measured delays into perspective, the motor's time constant of 39.1, which dictates the actuator bandwidth limitations, is indicated in Figure <ref>C. Finally, Section <ref> gives some insight into the latencies introduced when using Agilicious together with Flightmare <cit.> in a hardware-in-the-loop setup. §.§ Visual-Inertial State Estimation Deploying agile quadrotors outside of instrumented environments requires access to onboard state estimation. There exist many different approaches including GPS, lidar, and vision-based solutions. However, for size and weight constrained aerial vehicles, visual-inertial odometry has proven to be the go-to solution because of the sensors' complementary measurement modalities, low cost, mechanical simplicity, and reusability for other purposes, such as depth estimation for obstacle avoidance. The Agilicious platform provides a versatile sensor mount that is compatible with different sensors and can be easily adapted to fit custom sensor setups. In this work, two different VIO solutions are evaluated: (i) the proprietary, off-the-shelf Intel RealSense T265 and (ii) a simple camera together with the onboard flight controller IMU and an open-source VIO pipeline in the form of SVO Pro <cit.> with its sliding-window backend. While the RealSense T265 performs all computation on-chip and directly provides a state estimate via USB3.0, the alternative VIO solution uses the Jetson TX2 to run the VIO software and allows researchers to interface and modify the state estimation software. Specifically, for sensor setup (ii), a single Sony IMX-287 camera at 30 with a 165° diagonal field-of-view is used, combined with the IMU measurements of the flight controller at 500, calibrated using the Kalibr toolbox <cit.>. To verify their usability, a direct comparison of both VIO solutions with respect to ground-truth is provided. Performance is evaluated based on the estimation error <cit.> obtained on two trajectories flown with Agilicious. The flown trajectories consist of a circle trajectory with radius of 4 at a speed of 5, and a Lemniscate trajectory with an amplitude of 5 at a speed of up to 7. Figure <ref> shows the performance of both VIO solutions in an xy-overview of the trajectories together with their ATE RMSE. Both approaches perform well on both trajectories, with the Intel RealSense achieving slightly better accuracy according to the ATE of 0.151 on the circle and 0.114 on the Lemniscate, compared to the monocular SVO approach with 0.217 and 0.131, respectively. This is expected since the Intel RealSense uses a stereo camera plus IMU setup and is a fully integrated solution, while sensor setup (ii) aims at minimal cost by only adding a single camera and otherwise exploiting the existing flight-controller IMU and onboard compute resources. However, at the timing of writing this manuscript, the Intel RealSense T265 is being discontinued. Other possible solutions include camera sensors such as SevenSense <cit.>, MYNT EYE <cit.>, or MatrixVision <cit.>, and other stand-alone cameras, combined with software frameworks like ArtiSense <cit.>, SlamCore <cit.>, or open-source frameworks like VINSmono <cit.>, OpenVINS <cit.>, or SVO Pro <cit.>, evaluated in <cit.>. 
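For reference, a minimal version of the positional ATE RMSE used in this comparison can be computed as in the sketch below, assuming the estimated trajectory has already been time-associated and aligned to the ground truth (e.g. with an SE(3) or similarity alignment), as in the cited evaluation protocol:

```python
import numpy as np

def ate_rmse(p_est, p_gt):
    """Absolute trajectory error (RMSE over positions).

    p_est, p_gt : (N, 3) arrays of aligned, time-associated positions; the result
                  is in the same units as the inputs.
    """
    err = np.linalg.norm(np.asarray(p_est) - np.asarray(p_gt), axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```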
Furthermore, there are other fully integrated alternatives to the Intel RealSense <cit.>, including the Roboception <cit.> and the ModalAI Voxl CAM <cit.>. §.§ Demonstrators The Agilicious software and hardware stack is intended as a flexible research platform. To illustrate its broad applicability, this section showcases a set of research projects that have been enabled through Agilicious. Specifically, we demonstrate the performance of our platform in two different experimental setups covering HIL simulation and autonomous flight in the wild using only onboard sensing. §.§.§ Hardware in the Loop Simulation Developing vision-based perception and navigation algorithms for agile flight is not only slow and time-consuming, due to the large amount of data required to train and test perception algorithms in diverse settings, but it progressively becomes less safe and more expensive as more aggressive flights can lead to devastating crashes. This motivates the Agilicious framework to support hardware-in-the-loop simulation, which consists of flying a physical quadrotor in a motion-capture system while observing virtual photorealistic environments, as previously shown in <cit.>. The key advantage of hardware-in-the-loop simulation over classical synthetic experiments <cit.> is the usage of real-world dynamics and proprioceptive sensors, instead of idealized virtual devices, combined with the ability to simulate arbitrarily sparse or dense environments without the risk of crashing into real obstacles. The simulation of complex 3D environments and realistic exteroceptive sensors is achieved using our high-fidelity quadrotor simulator <cit.> built on Unity <cit.>. The simulator can offer a rich and configurable sensor suite, including RGB cameras, depth sensors, optical flow, and image segmentation, combined with variable sensor noise levels, motion blur, distortion and diverse environmental factors such as wind and rain. The simulator achieves this by introducing only minimal delays (see Figure <ref>A), ranging from 13 for 640×480 VGA resolution to 22 for 1920×1080 full HD images, when rendered on a NVIDIA RTX 2080 GPU. Overall, the integration of our agile quadrotor platform and high-fidelity visual simulation provides an efficient framework for the rapid development of vision-based navigation systems in complex and unstructured environments. §.§.§ Vision-based Agile Flight with Onboard Sensing and Compute When a quadrotor can only rely on onboard vision and computation, perception needs to be effective, low-latency, and robust to disturbances. Violating this requirement may lead to crashes, especially during agile and fast maneuvers where latency has to be low and robustness to perception disturbances and noise must be high. However, vision systems either exhibit reduced accuracy or completely fail at high speeds due to motion blur, large pixel displacements, and quick illumination changes <cit.>. To overcome these challenges, vision-based navigation systems generally build upon two different paradigms. The first uses the traditional perception-planning-and-control pipeline, represented by standalone blocks which are executed in sequence and designed independently <cit.>. Works in the second category substitute either parts or the complete perception-planning-and-control pipeline with learning-based methods <cit.>. 
The Agilicious flight stack supports both paradigms and has been used to compare traditional and learning-based methods on agile navigation tasks in unstructured and structured environments (see Figure <ref>). Specifically, Agilicious facilitated quantitative analyses of approaches for autonomous acrobatic maneuvers <cit.>(Fig. <ref>B-D) and high-speed obstacle avoidance in previously unknown environments <cit.>(Fig. <ref>E-G). Both comparisons feature a rich set of approaches consisting of traditional planning and control <cit.> as well as learning-based methods <cit.> with different input and output modalities. Thanks to its flexibility, Agilicious enables an objective comparison of these approaches on a unified control stack, without biasing results due to different low-level control strategies. § DISCUSSION The presented Agilicious framework substantially advances the published state of the art in autonomous quadrotor platform research. It offers advanced computing capabilities combined with the most powerful open-source and open-hardware quadrotor platform created to date, opening the door for research on the next generation of autonomous robots. We see three main axes for future research based on our work. First, we hypothesize that future flying robots will be smaller, lighter, cheaper, and consuming less power than what is possible today, increasing battery life, crash-resilience, as well as TWR and TIR <cit.>. This miniaturization is evident in state-of-the-art research towards direct hardware implementations of modern algorithms in the form of ASICs, such as the Navion <cit.>, the Movidius <cit.>, or the PULP processor <cit.>. These highly specialized in-silicon implementations are typically magnitudes smaller and more efficient than general compute units. Their success is rooted in the specific structure many algorithmic problems exhibit, such as the parallel nature of image data or the factor-graph representations used in estimation, planning, and control algorithms, like SLAM, MPC, and neural network inference. Second, the presented framework was mainly demonstrated with fixed-shape quadrotors. This is an advantage as the platform is easier to model and control, and less susceptible to hardware failures. Nevertheless, platforms with a dynamic morphology are by design more adaptable to the environment and potentially more power efficient <cit.>. For example, to increase flight time, a quadrotor might transform to a fixed-wing aircraft <cit.>. Due to its flexibility, Agilicious is the ideal tool for the future development of morphable and soft aerial systems. Finally, vision-based agile flight is still in the early stages and has not yet reached the performance of professional human pilots. The main challenges lie in handling complex aerodynamics, e.g. transient torques or rotor inflow, low-latency perception and state estimation, and recovery from failures at high speeds. In the last few years, considerable progress has been made by leveraging data-driven algorithms <cit.> and novel sensors as event-based cameras <cit.>, that provide a high dynamic range, low latency, and low battery consumption <cit.>. A major opportunity for future work is to complement the existing capabilities of Agilicious with novel compute devices such as the Intel Loihi <cit.> or SynSense Dynap <cit.> neuromorphic processing architecture, which are specifically designed to operate in an event-driven compute scheme. 
Due to the modular nature of Agilicious, individual software components can be replaced by these novel computing architectures, supporting rapid iteration and testing. In summary, Agilicious offers a unique quadrotor testbed to accelerate current and future research on fast autonomous flight. Its versatility in both hardware and software allows deployment in a wide variety of tasks, ranging from exploration or search and rescue missions to acrobatic flight. Furthermore, the modularity of the hardware setup allows integrating novel sensors or even novel compute hardware, enabling to test such hardware on an autonomous agile vehicle. By open-sourcing Agilicious, we provide the research and industrial community access to a highly agile, versatile, and extendable quadrotor platform. § MATERIALS AND METHODS Designing a versatile and agile quadrotor research platform requires to co-design its hardware and software, while carefully trading off competing design objectives such as onboard computing capacity and platform agility. In the following, the design choices that resulted in the flight hardware, compute hardware, and software design of Agilicious (see Figure <ref>) are explained in detail. §.§ Compute Hardware To exploit the full potential of highly unstable quadrotor dynamics, a high-frequency low-latency controller is needed. Both of these requirements are difficult to meet with general-purpose operating systems, which typically come without any real-time execution guarantees. Therefore, we deploy a low-level controller with limited compute capabilities but reliable real-time performance, which stabilizes high-bandwidth dynamics, such as the motor speeds or the vehicle's bodyrate. This allows complementing the system with a general-purpose high-level compute unit that can run Linux for versatile software deployment, with significantly relaxed real-time requirements. High-Level Compute Board The high-level of the system architecture provides all the necessary compute performance to run the full flight stack, including estimation, planning, optimization-based control, neural network inference, or other demanding experimental applications. Therefore, the main goal is to provide general-purpose computing power, while complying with the strict size and weight limits. We evaluate a multitude of different compute modules made from system-on-a-chip (SoC) solutions since they allow inherently small footprints. An overview is shown in Tab. <ref>. We exclude the evaluation of two popular contenders: (a) the Intel NUC platform, since it neither provides any size and weight advantage over the Jetson Xavier AGX nor provides a general-purpose GPU; and (b) the Raspberry Pi compute modules since they do not offer any compute advantages over the Odroid and UpBoard, and no size and weight advantage over the NanoPi product family. As we target general flight applications, fast prototyping, and experimentation, it is important to support a wide variety of software, which is why we chose a Linux-based system. TensorFlow <cit.> and PyTorch <cit.> are some of the most prominent frameworks with hardware-accelerated neural network inference. Both of them support accelerated inference on the Nvidia CUDA general-purpose GPU architecture, which renders nVidia products favorable, as other products have no or poorly-supported accelerators. Therefore, four valid options remain, listed in the second row of Tab. <ref>. 
While the Jetson Xavier AGX is beyond our size and weight goals, the Jetson Nano provides no advantage over the Xavier NX, rendering both the Jetson TX2 and Xavier NX viable solutions. Since these two CUDA-enabled compute modules require breakout boards to connect to peripherals, our first choice is the TX2 due to the better availability and diversity of such adapter boards and their smaller footprint. For the breakout board we recommend the ConnectTech Quasar <cit.>, providing multiple USB ports, Ethernet, serial ports, and other interfaces for sensors and cameras. Low-Level Flight Controller The Low-Level Flight Controller provides real-time low-latency interfacing and control. A simple and widespread option is the open-source BetaFlight <cit.> software which runs on many commercially available flight controllers, such as the Radix <cit.>. However, BetaFlight is made for human-piloted drones and optimized for a good human flight feeling, but not for autonomous operation. Furthermore, even though it uses high-speed IMU readings for the control loop, it provides only very limited sensor readings at 10. Therefore, Agilicious provides its own low-level flight controller implementation, reusing the same hardware as the BetaFlight controllers. This means that the wide variety of commercially available products can be bought and reflashed with our firmware to provide a low-level controller suited for autonomous agile flight. In particular, we recommend using the BrainFPV Radix <cit.> controller to deploy our software. Our low-level flight controller is based on the open-source NuttX <cit.> real-time operating system, optimized to run on embedded microcontrollers such as the STM32F4 used in many BetaFlight products. Our implementation interfaces with the motors' electronic speed controller (ESC) over the digital bi-directional DShot protocol, allowing it not only to command the motors, but also to receive individual rotor speed feedback. This feedback is provided to the high-level controller together with IMU, battery voltage, and flight mode information over a 1 serial bus at 500. The low-level controller also provides closed-loop motor speed control, bodyrate control, and measurement time synchronization, allowing estimation and control algorithms to take full advantage of the available hardware. §.§ Flight Hardware To maximize the agility of the drone, it needs to be designed as lightweight and small as possible <cit.> while still being able to accommodate the Jetson TX2 compute unit. With this goal in mind, we provide a selection of cheap off-the-shelf drone components summarized in Tab. <ref>. The Armattan Chameleon 6 frame is used as a base since it is one of the smallest frames with ample space for the compute hardware. The other structural parts of the quadrotor are custom-designed plastic parts (PLA and TPU material) and produced using a 3D printer. Most components are made out of PLA, which is stiffer; only parts that act as impact protectors or as predetermined breaking points are made out of TPU. For propulsion, a 5.1 three-bladed propeller is used in combination with a fast-spinning brushless DC motor rated at a high maximum power of 758. The chosen motor-propeller combination achieves a continuous static thrust of 4× 9.5 on the quadrotor and consumes about 400 of power per motor. To match the high power demand of the motors, a lithium-polymer battery with 1800 and a rating of 120C is used.
Therefore, the total peak current of 110 is well within the 216 limit of the battery. The motors are powered by an electronic speed controller in the form of the Hobbywing XRotor ESC, due to its compact form factor, its high continuous current rating (60 per motor), and support of the DShot protocol supporting motor speed feedback. §.§ Sensors To navigate arbitrary uninstrumented environments, drones need means to measure their absolute or relative location and state. Due to the size and weight constraints of aerial vehicles, and especially the direct impact of weight and inertia on the agility of the vehicle, VIO has proven to be the go-to solution for aerial navigation. The complementary sensing modality of cameras and IMUs, their low price and excellent availability, together with the depth-sensing capabilities of stereo camera configurations allow for a simple, compact, and complete perception setup. Furthermore, our flight controller already provides high-rate filtered inertial measurements and can be combined with any off-the-shelf camera <cit.> and open-source <cit.> or commercial <cit.> software to implement a VIO pipeline. Additionally, there exist multiple fully integrated products providing out-of-the-box VIO solutions, such as the Intel RealSense <cit.>, the Roboception rc_visard <cit.>, and the ModalAI Voxl CAM <cit.>. §.§ The Agilicious Flight Stack Software To exploit the full potential of our platform and enable fast prototyping, we provide the Agilicious flight stack as an open-source software package. The main development goals for Agilicious are aligned with our overall design goals: high versatility and modularity, low latency for agile flight, and transferability between simulation and multiple real-world platforms. These goals are met by splitting the software stack into two parts. The core library, called "agilib", is built with minimal dependencies but provides all functionality needed for agile flight, implemented as individual modules (illustrated in Figure <ref>). It can be deployed on a large range of computing platforms, from lightweight low-power devices to parallel neural network training farms built on heterogeneous server architectures. This is enabled by avoiding dependencies on other software components that could introduce compatibility issues and rely only on the core C++-17 standard and the Eigen library for linear algebra. Additionally, agilib includes a standalone set of unit tests and benchmarks that can be run independently, with minimal dependencies, and in a self-contained manner. To provide compatibility to existing systems and software, the second component is a ROS-wrapper, called "agiros", which enables networked communication, data logging, provides a simple GUI interface to command the vehicle and allows for integration with other software components. This abstraction between "agiros" and the core library "agilib" allows a more flexible deployment on systems or in environments where ROS is not available, not needed, or communication overhead must be avoided. On the other hand, the ROS-enabled Agilicious provides versatility and modularity due to a vast number of open-source ROS packages provided by the research community. For flexible and fast development, "agilib" uses modular software components unified in an umbrella structure called "pipeline" and orchestrated by a control logic, called "pilot". The modules consist of an "estimator", "sampler", "controllers", and a "bridge", all working together to track a so-called "reference". 
These modules are executed in sequential order (illustrated in Figure <ref>) within a forward pass of the pipeline, corresponding to one control cycle. However, each module can spawn its individual threads to perform parallel tasks, e.g. asynchronous sensor data processing. Agilicious provides a collection of state-of-the-art implementations for each module inherited from base classes, allowing to create new implementations outside of the core library, and linking them into the pilot at runtime. Moreover, Agilicious is not only capable to control a drone when running onboard the vehicle, but can also run offboard on computationally more capable hardware and send commands to the drone over low-latency wireless serial interfaces. Finally, the core library is completed by a physics simulator. While this might seem redundant due to the vast variety of simulation pipelines available <cit.>, it allows to use high-fidelity models (e.g. BEM <cit.> for aerodynamics), evaluate software prototypes without having to interface with other frameworks, avoids dependencies, and enables even simulation-based continuous integration testing that can run on literally any platform. The pilot, software modules, and simulator are all described in the following sections. §.§.§ Pilot The pilot contains the main logic needed for flight operation, handling of the individual modules, and interfaces to manage references and task commands. In its core, it loads and configures the software modules according to YAML <cit.> parameter files, runs the control loop, and provides simplified user interfaces to manage flight tasks, such as position and velocity control or trajectory tracking. For all state descriptions, we use a right-handed coordinate system located in the center of gravity, with the _Be_z pointing in body-relative upward thrust direction, and _Be_x pointing along with the drone's forward direction. Motion is represented with respect to an inertial world frame with _Ie_z pointing against the gravity direction, where translational derivatives (e.g. velocity) are expressed in the world frame and rotational derivatives (e.g. bodyrate) are expressed in the body frame. Pipeline The pipeline is a distinct configuration of the sequentially processed modules. These pipeline configurations can be switched at runtime by the pilot or the user, allowing to switch to backup configurations in an emergency, or quickly alternate between different prototyping configurations. Estimator The first module in the pipeline is the estimator, which provides a time-stamped state estimate for the subsequent software modules in the control cycle. A state estimate x = [p, q, v, ω, a, τ, j, s, b_ω, b_a, f_d, f], represents position p, orientation unit quaternion q, velocity v, bodyrate ω, linear a and angular τ accelerations, jerk j, snap s, gyroscope and accelerometer bias b_ω and b_a, and desired and actual single rotor thrusts f_d and f. Agilicious provides a feed-through estimator to include external estimates or ground-truth from a simulation, as well as two extended Kalman filters, one with IMU filtering, and one using the IMU as propagation model. These estimators can easily be replaced or extended to work with additional measurement sources, such as GPS or altimeters, other estimation systems, or even implement complex localization pipelines such as visual-inertial odometry. 
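For illustration, the state layout used by the estimator modules can be mirrored by a small container such as the one below. This is a hypothetical Python mock-up for exposition only, not the actual agilib C++ definition:

```python
from dataclasses import dataclass, field
import numpy as np

def _vec(n):
    """Fresh zero-vector default for each dataclass field."""
    return field(default_factory=lambda: np.zeros(n))

@dataclass
class QuadState:
    """Mirror of x = [p, q, v, w, a, tau, j, s, b_w, b_a, f_d, f]."""
    t: float = 0.0            # timestamp [s]
    p: np.ndarray = _vec(3)   # position, world frame
    q: np.ndarray = field(default_factory=lambda: np.array([1.0, 0.0, 0.0, 0.0]))  # unit quaternion
    v: np.ndarray = _vec(3)   # velocity, world frame
    w: np.ndarray = _vec(3)   # bodyrate, body frame
    a: np.ndarray = _vec(3)   # linear acceleration
    tau: np.ndarray = _vec(3) # angular acceleration
    j: np.ndarray = _vec(3)   # jerk
    s: np.ndarray = _vec(3)   # snap
    b_w: np.ndarray = _vec(3) # gyroscope bias
    b_a: np.ndarray = _vec(3) # accelerometer bias
    f_d: np.ndarray = _vec(4) # desired single-rotor thrusts
    f: np.ndarray = _vec(4)   # actual single-rotor thrusts
```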
Sampler For trajectory tracking using a state estimate from the aforementioned estimator, the controller module needs to be provided with a subset of points of the trajectory that encode the desired progress along it, provided by the sampler. Agilicious implements two types of samplers: a time-based sampling scheme that computes progress along the trajectory based on the time since trajectory start, and a position-based sampling scheme that selects trajectory progress based on the current position of the platform, trading off temporally accurate tracking for higher robustness and lower positional tracking error. Controller To control the vehicle along the sampled reference setpoints, a multitude of controllers are available, which provide the closed-loop commands for the low-level controller. We provide a state-of-the-art MPC that uses the full non-linear model of the platform and which allows to track highly agile references using single-rotor thrust commands or bodyrate control. Additionally, we include a cascaded geometric controller based on the quadrotor's differential flatness <cit.>. The pipeline can cascade two controllers, which even allows combining the aforementioned MPC <cit.> or geometric approaches with an intermediate controller for which we provide an L1 adaptive controller <cit.> and an incremental nonlinear dynamic inversion controller <cit.>. Bridge A bridge serves as an interface to hardware or software by sending control commands to a low-level controller or other means of communication sinks. Low-level commands can either be single rotor thrusts or bodyrates in combination with a collective thrust. Agilicious provides a set of bridges to communicate via commonly used protocols such as ROS, SBUS, and serial. While the ROS-bridge can be used to easily integrate Agilicious in an existing software stack that relies on ROS, the SBUS protocol is a widely used standard in the FPV community and therefore allows to interface Agilicious to off-the-shelf flight controllers such as BetaFlight <cit.>. For simple simulation, there is a specific bridge to interface with the popular RotorS <cit.> simulator, which is however less accurate than our own simulation described in Sec. <ref>. As Agilicious is written in a general abstract way, it runs on onboard compute modules and offboard, for which case we provide a bridge to interface with the LAIRD<cit.> wireless serial interface. Finally, Agilicious also provides a bridge to communicate to the custom low-level controller described in Sec. <ref>. This provides the advantage of gaining access to closed-loop single rotor speed control, high-frequency IMU, rotor speed, and voltage measurements at 500, all provided to the user through the bridge. References References are used in conjunction with a controller to encode the desired flight path of a quadrotor. In Agilicious, a reference is fed to the sampler, which generates a receding-horizon vector of setpoints that are then passed to the controller. The software stack implements a set of reference types, consisting of Hover, Velocity, Polynomial, and Sampled. While Hover references are uniquely defined by a reference position and a yaw angle, a Velocity reference specifies a desired linear velocity with a yaw rate. By exploiting the differential flatness of the quadrotor platform, Polynomial references describe the position and yaw of the quadrotor as polynomial functions of time. Sampled references provide the most general reference representations. 
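As an illustration of how a time-based sampler turns a Sampled reference into the receding-horizon vector of setpoints consumed by the controller, consider the sketch below. The function signature is invented for exposition and does not correspond to the actual agilib interface:

```python
import numpy as np

def sample_time_based(ref_times, ref_states, t_now, horizon_n, dt):
    """Select `horizon_n` setpoints spaced by `dt`, starting at the current time.

    ref_times  : (N,) monotonically increasing sample times of the reference [s]
    ref_states : (N, D) reference states (e.g. position, velocity, yaw, ...)
    Returns an (horizon_n, D) array obtained by linear interpolation; queries past
    the end of the reference hold the final setpoint.
    """
    ref_times, ref_states = np.asarray(ref_times), np.asarray(ref_states)
    query = np.clip(t_now + dt * np.arange(horizon_n), ref_times[0], ref_times[-1])
    return np.stack([np.interp(query, ref_times, ref_states[:, d])
                     for d in range(ref_states.shape[1])], axis=1)

# A position-based sampler would instead locate the reference point closest to the
# current position estimate and sample forward from there, trading temporally
# accurate tracking for robustness, as described above.
```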
Agilicious provides interfaces to generate, and receive such sampled references and also defines a message and file format to store references to a file. By defining such formats, a wide variety of trajectories can be generated, communicated, saved, and executed using Python or other languages. Finally, to simplify the integration and deployment of other control approaches, Agilicious also exposes a command feedthrough, that allows taking direct control over the applied low-level commands. For safety, even when command feedthrough is used, Agilicious provides readily available back-up control that can take over on user request or on timeout. Guard To further support users in fast prototyping, Agilicious provides a so-called guard. This guard uses the quadrotor's state-estimate or an alternative estimate (e.g. from motion capture when flying with VIO prototypes) together with a user-defined spatial bounding box to detect unexpected deviations from the planned flight path. Further detection metrics can be implemented by the user. Upon violation of e.g. the spatial bounding box, the guard can switch control to an alternative pipeline using a backup estimate and control configuration. This safety pipeline can e.g. use a motion capture system and a simple geometric controller, while the main pipeline runs a VIO estimator, an MPC, reinforcement learning control strategies, or other software prototypes. By providing this measure of backup, Agilicious significantly reduces the risk of crashes when testing novel algorithms, and allows to iterate over research prototypes faster. §.§.§ Simulation The Agilicious software stack includes a simulator that allows simulating quadrotor dynamics at various levels of fidelity to accelerate prototyping and testing. Specifically, Agilicious models motor dynamics and aerodynamics acting on the platform. To also incorporate the different, possibly off-the-shelf, low-level controllers that can be used on the quadrotors, the simulator can optionally simulate the behavior of low-level controllers. One simulator update, typically called at 1, includes a call to the simulated low-level controller, the motor model, the aerodynamics model, and the rigid body dynamics model in a sequential fashion. Each of these components is explained in the following. Low-Level Controller & Motor Model Simulated low-level controllers run at simulation frequency and convert collective thrust and bodyrate commands into individual motor speed commands. The usage of a simulated low-level controller is optional if the computed control commands are already in the form of individual rotor thrusts. In this case, the thrusts are mapped to motor speed commands and then directly fed to the simulated motor model. The motors are modeled as a first-order system with a time constant which can be identified on a thrust test stand. Aerodynamics The simulated aerodynamics model lift and drag produced by the rotors from the current ego-motion of the platform and the individual rotor speeds. Agilicious implements two rotor models: Quadratic and BEM. The Quadratic model implements a simple quadratic mapping from rotor rotational speed to produced thrust, as commonly done in quadrotor simulators <cit.>. While such a model does not account for effects imposed by the movement of a rotor through the air, it is highly efficient to compute. In contrast, the BEM model leverages Blade-Element-Momentum-Theory (BEM) to account for the effects of varying relative airspeed on the rotor thrust. 
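A minimal sketch of the two simulator ingredients just described, the first-order motor response and the Quadratic rotor model, is given below; the time constant and thrust coefficient are placeholder values, not the parameters identified for the actual platform:

```python
import numpy as np

def motor_step(omega, omega_cmd, dt, tau=0.03):
    """Exact discretization of first-order motor dynamics d(omega)/dt = (omega_cmd - omega)/tau."""
    return omega + (omega_cmd - omega) * (1.0 - np.exp(-dt / tau))

def quadratic_thrust(omega, c_t=1.5e-7):
    """Quadratic rotor model: thrust [N] = c_t * omega^2 (omega in rad/s)."""
    return c_t * np.asarray(omega) ** 2

# One simplified simulator sub-step: advance the four rotor speeds towards their
# commanded values, then map them to the thrusts fed to the rigid-body dynamics.
omega = motor_step(np.full(4, 1500.0), np.full(4, 2500.0), dt=1e-3)
thrusts = quadratic_thrust(omega)
```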
To further increase the fidelity of the simulation, a neural network predicting the residual forces and torques (e.g. unmodeled rotor-to-rotor interactions and turbulence) can be integrated into the aerodynamics model. For details regarding the BEM model and the neural network augmentation, we refer the reader to <cit.>.

Rigid Body Dynamics Provided with a model of the forces and torques acting on the platform predicted by the aerodynamics model, the system dynamics of the quadrotor are integrated using a 4th-order Runge-Kutta scheme with a step size of 1 ms. Agilicious also implements different integrators such as explicit Euler or symplectic Euler. Apart from providing its own state-of-the-art quadrotor simulator, Agilicious can also be interfaced with external simulators. Interfaces to the widely used RotorS quadrotor simulator <cit.> and Flightmare <cit.>, including the HIL simulator, are already provided in the software stack.

§ ACKNOWLEDGMENTS Author Contribution P.F. developed the Agilicious software concepts and architecture, contributed to the Agilicious implementation, helped with the experiments, and wrote the manuscript. E.K. contributed to the Agilicious implementation, designed the Agilicious hardware, helped with the experiments, and wrote the manuscript. A.R., R.P., S.S., and L.B. contributed to the Agilicious implementation, helped with the experiments, and wrote the manuscript. T.L. evaluated the hardware components, designed and built the Agilicious hardware, helped with the experiments, and wrote the manuscript. Y.S. contributed to the Agilicious implementation, helped with the experiments, and wrote the manuscript. A.L. helped with the experiments and wrote the manuscript. D.S. provided funding, contributed to the design and analysis of the experiments, and revised the manuscript.

Funding This work was supported by the National Centre of Competence in Research (NCCR) Robotics through the Swiss National Science Foundation (SNSF) and the European Union's Horizon 2020 Research and Innovation Program under grant agreement No. 871479 (AERIAL-CORE) and the European Research Council (ERC) under grant agreement No. 864042 (AGILEFLIGHT).

Data and Materials The main purpose of this paper is to share our data and materials. Therefore all materials, both software as well as hardware designs, are open-source accessible at <https://agilicious.dev> under the GPL v3.0 license.
http://arxiv.org/abs/2307.04490v1
20230710112803
A symmetry and Noether charge preserving discretization of initial value problems
[ "Alexander Rothkopf", "Jan Nordström" ]
math.NA
[ "math.NA", "cs.NA", "hep-lat", "physics.comp-ph" ]
Alexander Rothkopf (Faculty of Science and Technology, University of Stavanger, 4021 Stavanger, Norway) and Jan Nordström (Department of Mathematics, Linköping University, SE-581 83 Linköping, Sweden; Department of Mathematics and Applied Mathematics, University of Johannesburg, P.O. Box 524, Auckland Park 2006, Johannesburg, South Africa)

Taking insight from the theory of general relativity, where space and time are treated on the same footing, we develop a novel geometric variational discretization for second order initial value problems (IVPs). By discretizing the dynamics along a world-line parameter, instead of physical time directly, we retain manifest translation symmetry and conservation of the associated continuum Noether charge. A non-equidistant time discretization emerges dynamically, realizing a form of automatic adaptive mesh refinement (AMR), guided by the system symmetries. Using appropriately regularized summation by parts finite difference operators, the continuum Noether charge, defined via the Killing vector associated with translation symmetry, is shown to be exactly preserved in the interior of the simulated time interval. The convergence properties of the approach are demonstrated with two explicit examples. Initial Value Problem, Summation By Parts, Time-Translation Invariance, Conserved Noether Charge, Adaptive Mesh Refinement

§ INTRODUCTION Symmetries play a central role in our understanding of dynamical processes in both classical <cit.> and quantum <cit.> physics. Emmy Noether achieved groundbreaking insight when she proved that the presence of a global continuous symmetry in the action S of a system implies the existence of a conserved current, whenever the equations of motion are fulfilled <cit.>. Via such a Noether current, one can define a quantity which remains unchanged during the evolution of the system and which is referred to as the Noether charge. Noether's theorem thus offers a fundamental understanding of central tenets of classical physics, such as energy and momentum conservation, which it relates to the invariance of physics under translations in time and space respectively. In quantum theory, the presence of symmetries limits the type of quantum fluctuations which may occur <cit.>, with measurable consequences for the spectrum of elementary particles and their bound states. The four Noether currents associated with space and time translations are conventionally summarized in a quantity called the energy-momentum tensor T^μν(x), where the indices μ and ν run over the temporal and spatial components. It offers access to vital properties of a system, one pertinent example being the energy density profile <cit.> of a static charge distribution via the ε(x)=T^00(x) component or the corresponding electric field-line configuration via the spatial components T_ij(x)=F_iμF^μ_j-1/4δ_ijF_μν^2 of the electromagnetic field F_μν, referred to as the Maxwell stress tensor (see e.g. <cit.>). The simulation of dynamical phenomena in classical and quantum systems is often performed after discretizing space and time on a finite mesh (for a discussion of discretization in functional spaces see e.g. <cit.>). Finite difference schemes, formulated in their modern summation-by-parts (SBP) form (for reviews see e.g. <cit.>), offer both conceptual and practical benefits. 
The SBP approach in both space and time <cit.> offers proofs of stability based on the so-called energy method, which can be extended to high-order schemes in a straightforward fashion. Not only do SBP operators mimic integration by parts (IBP) exactly in the discretized setting, but in addition they constitute a cost effective approximation to differential operators on many mesh types. The discretization of space and time in its conventional form, i.e. considering x and t as independent variables, necessarily affects the symmetry properties of the system at hand (see e.g. the discussion in <cit.>). Where the continuum theory e.g. admits translations of any magnitude, i.e. in particular also infinitesimal ones, the discretized theory on a space-time mesh with grid spacing Δ_μ only allows one to shift space and time by that finite amount. In general this entails that a central condition of Noether's theorem, the presence of a continuous symmetry, does not hold and the corresponding continuum Noether charge fails to remain constant over time. This is particularly concerning with regards to time translation symmetry and energy conservation, which are closely related to the stability of the simulation. Artificial loss of energy is often considered benign, as it is simply a matter of losing accuracy. An artificial increase of energy will, as energy is not bounded from above, eventually lead to a divergence of the simulated dynamics, characteristic of an unstable scheme. On the other hand, if energy is conserved, it puts stringent bounds on the growth of the solution. In the context of symplectic schemes, which conserve energy on average, one can relate energy conservation directly to the stability of the numerical scheme (see e.g. <cit.> and also <cit.>). One strategy to retain energy conservation for systems with second order governing equations is to go over to a Hamiltonian approach, where only space is discretized, while time remains continuous. One converts the equation of motion of the Lagrange formalism, which is second order in the time derivative, into a set of two equations of motion of first order, after replacing velocities with the so-called canonical momentum. After this step, a discrete phase-space volume preserving time stepping may be implemented (cf. Verlet-Størmer <cit.>). This approach crucially hinges on the availability of a Hamiltonian picture, i.e. whether the canonical momenta can be defined, which may face difficulties in systems with inherent constraints or may require the choice of a particular gauge, as in Maxwell's electrodynamics <cit.>. Another strategy is to determine whether Noether's theorem may be salvaged in the presence of a finite grid spacing <cit.>. One may e.g. consider modifications to the continuum energy expression, which remain conserved, given a particular choice of difference approximation. However, as the necessary schemes are not of SBP type, they do not mimic other relevant properties of the continuum theory. In this study we develop a generic approach to discretize second order IVPs on the level of the system Lagrangian, while retaining the manifest translation invariance of the continuum theory. In order to do so we will take inspiration from the general theory of relativity (for a textbook see e.g. <cit.>), where space and time are treated on the same footing. In this formalism the presence of translation symmetry is evident from the form of the Lagrangian itself. 
We build upon our prior work on formulating IVPs directly via the action of the system, which allows us to avoid the need to derive their equation of motion. The action of the system is discretized using SBP finite difference operators with a physical null-space, developed in our previous paper <cit.>. These operators are crucial in mimicking the continuum derivation of Noether's theorem (and if one wishes to do so, the equations of motion). The central outcome of this proof-of-principle study is a prescription of how to discretize second order IVPs directly on the level of the Lagrangian, while retaining the continuum time translation symmetry and thus exact conservation of the corresponding Noether charge. No reference to a Hamiltonian is required. We observe that a non-equidistant discretization emerges in the time coordinate, which represents a form of automatic adaptive mesh refinement (AMR) <cit.>, guided by the inherent symmetries of the system. Our results open up a novel route for obtaining optimal AMR procedures, where clustering and coarsening emerge as part of the solution process, thus avoiding the conventional use of sensors (see e.g. <cit.>), adjoint techniques (see e.g. <cit.>) or error estimates (see e.g. <cit.>). In <ref> we discuss the continuum formulation of our geometrized variational approach with time considered as dependent variable. In <ref> the discretized formalism is introduced and we present its efficacy in <ref> using different example systems. We close with a summary and outlook in <ref>. § CONTINUUM FORMALISM WITH MANIFEST TRANSLATION SYMMETRY The common starting point for the formulation of the variational principle in classical point mechanics is to consider the dynamics of a system as boundary value problem (BVP). The system, which takes on position x_i at t_i evolves to position x_f at t_f and we wish to determine the trajectory it follows. Obviously this formulation is not causal, as we already need to know the end-point of the dynamics to determine the trajectory. As discussed in <cit.> and in our previous study <cit.> it is possible to formulate the variational problem as a genuine initial value problem through a doubling of the degrees of freedom of the system. In order to focus on the qualitatively novel ingredients of our variational approach, we first introduce it in the standard context of point mechanics as a BVP. The implementation for a genuine IVP is given in the subsequent subsection. §.§ Boundary value problem formulation Symmetry is a central mathematical pillar of the theory of relativity. In the special theory of relativity one formulates the laws of physics in a way that remains invariant under so-called Lorentz transformations of the coordinates, while in general relativity one constructs a description, which is invariant under an even larger class of transformations. Such a theory, invariant under arbitrary differentiable coordinate transformations, is called reparametrization invariant. Reparameterization invariance is achieved by considering both space and time as dynamical degrees of freedom. In this study we are not interested in determining the dynamical evolution of space-time itself but will simply borrow this reparametrization invariant formalism of general relativity for our purposes of obtaining a symmetry preserving discretization. As our prime example, we set out to describe the dynamics of a point mass in the presence of a potential. The first step is to convert this physics question into a purely geometric problem. 
In general relativity, the trajectory of a particle, traveling freely in (a not necessarily flat) space-time described by the metric tensor g_μν, is given by a path that generalizes the notion of the shortest path on the corresponding space-time manifold. This path is called a geodesic. While the particle may move in a (1+d) dimensional space-time with d space and one time direction, its path traces out a one-dimensional submanifold, which we can parameterize with a single, so called world-line parameter, denoted in the following by γ. We will restrict ourselves here to two dimensions, i.e. d=1, a system with one spatial and one temporal direction expressed in coordinates as x(γ)=(t(γ),x(γ)). A geodesic may be obtained from a variational principle <cit.>, which asks for the critical point of the following action functional that measures the length of the path between two space-time points x(γ_i) and x(γ_f) S= ∫_γ_i^γ_f dγ (-mc)√( g_μνdx^μ/dγdx^ν/dγ), x(γ_i)= x_i, x(γ_f)= x_f. Here Einstein's summation convention has been adopted and we have included the dimensionful prefactor mc, which, as we will show explicitly below, allows us to recover the usual action in the non-relativistic limit from <ref>. We refer to time t(γ) as the zeroth component x^0 of the vector x and to the spatial coordinate x(γ) as the first component x^1. Note that this functional is reparametrization invariant under any differentiable redefinition of the parameter γ. I.e. when converting from γ→γ^' the conversion of differentials under the square root produces terms dγ^'/dγ that cancel with the conversion factor of the measure. The geodesics of flat space-time, described by the diagonal metric tensor g= diag[c^2,-1], which arise from the critical point of the action functional S_ flat=∫_γ_i^γ_f dγ (-mc)√(c^2(dt/dγ)^2-(dx/dγ)^2), are straight lines, which are traversed with constant speed (see chapter 3.4 of <cit.>), in agreement with Newtonian mechanics. It is important to note that while our intuition of the concept of shortest path relies on geometries with positive definite metrics (Riemannian geometry), physical spacetime, as confirmed by experiment, has a metric with both positive and negative eigenvalues (pseudo-Riemannian geometry). In such a geometry the shortest path between two points can denote a saddle point of the action functional instead of a genuine minimum, as the temporal and spatial components enter relation (<ref>) with opposite sign. To describe the presence of an external force acting on a point particle in flat spacetime, one conventionally amends the action S_ flat simply by adding the potential term V(x) responsible for generating that force (see chapter 7.9 in <cit.>). Let us now discuss how we can exploit the formalism of general relativity to re-express the evolution of a particle in flat spacetime in the presence of an external force, instead as an evolution of a free particle in a non-flat spacetime. In the presence of an external force, encoded in a potential term V(x), the particle trajectory in flat space-time will deviate from the straight line. A standard procedure in the study of weak-field gravity is to reinterpret the change in the particle trajectory due to a potential, instead, as the effect of a non-flat space-time without a potential present (see e.g. chapter 8 of <cit.>). 
This reinterpretation is possible, as long as the values of the potential are smaller than the rest energy (mc^2) of the point mass, a condition which is very well fulfilled for the non-relativistic systems we are interested in solving. As we will see in the following, one can introduce the effects of a potential V(x) on a point particle with mass m in the weak-field limit of general relativity by modifying the temporal component g_00 of the diagonal metric tensor g_00=c^2+2V(x)/m, while keeping g_11=-1. I.e. one endows the metric with a non-trivial dependence on the spatial coordinate, trading the absence of an explicit external force for a non-flat spacetime. Let us now show that such a modification of the metric indeed recovers the non-relativistic action of a particle in the presence of the potential V(x). To this end we insert the modified metric <ref> into the geodesic action <ref>:

S = ∫_γ_i^γ_f dγ (-mc) √( g_00 (dt/dγ)^2 - (dx/dγ)^2 )
  = ∫_γ_i^γ_f dγ (-mc) √( g_00 (dt/dγ)^2 ) √( 1 - (1/g_00) (dx/dγ)^2 (dt/dγ)^{-2} )   [using g_00>0; the argument of the last square root contains (dx/dt)^2 = (dx/dγ)^2 (dt/dγ)^{-2}]
  = ∫_γ_i^γ_f dγ |dt/dγ| (-mc) √( g_00 ) ( 1 - (1/2)(1/g_00)(dx/dγ)^2 (dt/dγ)^{-2} + O( (1/g_00^2)(dx/dt)^4 ) )   [for (dx/dt)^2 ≪ g_00 ∼ c^2]
  = ∫_γ_i^γ_f dγ (dt/dγ) ( -mc^2 + (1/2) m (dx/dγ)^2 (dt/dγ)^{-2} - V(x) + O((V/mc^2)^2) + O( (1/c^2)(dx/dt)^4 ) )   [for V/m ≪ c^2]
  = ∫_t_i^t_f dt ( -mc^2 + (1/2) m (dx/dt)^2 - V(x) ).

In the third line we have expanded the rightmost square root in <ref>, assuming that the square of the physical velocity (dx/dt)^2 is much smaller than g_00, which is to say that the particle velocity dx/dt itself is much smaller than the speed of light c. To go from the third to the fourth line, we have in addition assumed that the potential is much smaller than the rest energy of the point particle, which allows us to expand the term √(g_00)=√(c^2+2V(x)/m) in terms of V(x)/mc^2. We will look for solutions where time flows forward and thus have dropped the absolute value around dt/dγ at the beginning of the second to last line. Note that <ref> is nothing but the standard non-relativistic action <cit.> for a point particle in the presence of an arbitrary potential term with the rest energy mc^2 included. We have thus successfully related the (artificially constructed) fully geometric description of the particle in a non-flat spacetime in <ref> with the standard description of a particle propagating in flat spacetime in the presence of an external potential in <ref> in the non-relativistic limit. We see in <ref> that time emerges naturally as the independent variable in which the action integral is formulated. Of course, choosing time as independent variable hides the inherent reparametrization invariance, which persists even in the non-relativistic limit in <ref>. Interestingly it turns out that <ref> is a generalization of the ad-hoc construction of a reparametrization invariant non-relativistic action, discussed in standard textbooks on the calculus of variations (see e.g. <cit.>). <Ref> includes the rest mass term -mc^2 (dt/dγ), which is missing in the standard derivation and which in the absence of a potential contributes a dependence on (dt/dγ) that plays a role in obtaining a well-defined critical point for the time degree of freedom. The reward for our efforts lies in the fact that <ref> is manifestly invariant under the space-time symmetries of our (1+1) dimensional system. If V(x)=0 only the derivatives dt/dγ and dx/dγ but not t and x themselves appear in the action functional <ref>. In turn adding a constant shift to either t or x as in x→ x+ s leaves the action invariant. 
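This expansion can be cross-checked symbolically. The short SymPy sketch below is our own illustration (the paper's implementation uses Mathematica); it introduces a bookkeeping parameter λ that counts powers of the small quantities 2V/m and (dx/dt)^2 relative to c^2 and recovers the non-relativistic Lagrangian together with the rest-energy term.

```python
import sympy as sp

m, c, v = sp.symbols('m c v', positive=True)   # mass, speed of light, velocity dx/dt
V = sp.Symbol('V', real=True)                  # value of the potential V(x)
lam = sp.Symbol('lambda')                      # bookkeeping parameter for the small terms

# Geodesic integrand with time itself as curve parameter (dt/dgamma = 1, dx/dgamma = v),
# for the weak-field metric g_00 = c^2 + 2V/m, g_11 = -1.  The quantities 2V/m and v^2
# are treated as first-order small compared to c^2.
L = -m * c * sp.sqrt(c**2 + lam * (2 * V / m - v**2))

L_expanded = sp.series(L, lam, 0, 2).removeO().subs(lam, 1)
print(sp.expand(L_expanded))      # -> -c**2*m + m*v**2/2 - V
```

The printed result is exactly the integrand of the non-relativistic action quoted above, rest energy included.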
In the presence of a spatially dependent potential V(x), g_00(x) too becomes dependent on space x and only time translation invariance remains (as the force induced by V(x) changes the momentum of the point particle). Proving time translation invariance in the conventional action <ref> is much more involved, as one needs to consider how x as a function of t changes under such translations and in addition the boundaries of the action integral themselves are affected by the shift. None of these complications arise in <ref>[That the derivatives of space and time occur in eq. (<ref>) as squares under the square root with a relative minus sign (hiding in g_11) also entails that the action is manifestly invariant under so called Lorentz boosts. These transformations mix space and time components and are related to changes between inertial coordinate systems.]. In the calculus of variations it is known that the critical point of the action S can be obtained by solving certain differential equations, the so called geodesic equations <cit.>. It follows from considering the variation of the action in all of its dependent variables t, ṫ=dt/dγ, x and ẋ=dx/dγ δ S[t,ṫ, x,ẋ]= ∫_γ_i^γ_f dγ{∂ L/∂ tδ t + ∂ L/∂ṫδṫ + ∂ L/∂ xδ x + ∂ L/∂ẋδẋ} = ∫_γ_i^γ_f dγ{( ∂ L/∂ t - d/dγ∂ L/∂ṫ)δ t + ( ∂ L/∂ x - d/dγ∂ L/∂ẋ)δ x} + .[ ∂ L/∂ṫδ t ]|_γ_i^γ_f+ .[ ∂ L/∂ẋδ x ]|_γ_i^γ_f. where in the second line we have integrated by parts. As we are considering the variational problem as boundary value problem with the coordinates t and x fixed at the start and end points of the trajectory x(γ_i)= x_i, x(γ_f)= x_f, also the variations δ t and δ x on the boundary vanish and so do the two boundary terms above. Note that we consider t and x as distinct degrees of freedom, so that the terms in the parentheses, multiplying the arbitrary variations δ x and δ t, must vanish each independently at the stationary point δ S=0. By deriving the Euler-Lagrange equations of the system in the spirit of the standard BVP treatment of classical mechanics, the above derivation tells us that we may locate the classical trajectory of a non-relativistic particle under the influence of a potential, by finding the critical point of the action <ref> with modified g_00 component of the metric, while keeping the start and end coordinates x(γ_i) and x(γ_f) fixed. Note that there exist infinitely many different parameterizations of the trajectory described by δ S=0, which all differ by the velocity in γ, in which this trajectory is traversed. In practice these different stationary points of S lead to difficulties in numerical optimization and we therefore follow the standard practice (see e.g. discussion in <cit.> or <cit.>) of selecting a particular parameterization by choosing instead of S the variations of the functional E_ BVP=∫_γ_i^γ_fdγ E_ BVP[t,ṫ,x,ẋ]=∫_γ_i^γ_fdγ1/2( g_00(dt/dγ)^2 + g_11(dx/dγ)^2 ). It differs from S via squaring the integrand and replacing the pre-factor -mc by 1/2. These are both irrelevant changes with respect to the classical equation of motion. Since E_ BVP and S differ by a monotonous function applied to their integrands, formally the same critical point ensues. I.e. the variation of E_ BVP is given by δ L=δ√(E_ BVP)=δ E_ BVP/2√(E_ BVP)=0, so that the trajectory that extremizes E_ BVP agrees with that for S at the critical point. Note that the functional E_ BVP is not reparametrization invariant anymore. 
The derivative terms enter quadratically, and produce a conversion factor (dγ^'/dγ)^2, which cannot be absorbed by the measure dγ alone. Let us compute the Euler-Lagrange equations (the geodesic equations) for time t and space x following from the variation of <ref> δ E_ BVP[t,ṫ, x,ẋ] = ∫_γ_i^γ_f dγ{∂ E_ BVP/∂ tδ t + ∂ E_ BVP/∂ṫδṫ + ∂ E_ BVP/∂ xδ x + ∂ E_ BVP/∂ẋδẋ} = ∫_γ_i^γ_f dγ{( ∂ E_ BVP/∂ t - d/dγ∂ E_ BVP/∂ṫ)δ t + ( ∂ E_ BVP/∂ x - d/dγ∂ E_ BVP/∂ẋ)δ x} + .[ ∂ E_ BVP/∂ṫδ t ]|_γ_i^γ_f+ .[ ∂ E_ BVP/∂ẋδ x ]|_γ_i^γ_f. As the above boundary terms vanish, we are left with evaluating the individual expressions appearing in the parentheses of <ref>. Below we evaluate each of these terms individually ∂ E_ BVP/∂ t=0, ∂ E_ BVP/∂ṫ=g_00(x)dt/dγ, ∂ E_ BVP/∂ x=1/2∂ g_00(x)/∂ x(dt/dγ)^2, ∂ E_ BVP/∂ẋ=g_11dx/dγ=-dx/dγ, making explicit the ingredients to the geodesic equations for the temporal and spatial degrees of freedom d/dγ(g_00dt/dγ)=0, d/dγ(dx/dγ) + 1/2∂ g_00/∂ x( dt/dγ)^2=0. The attentive reader will have recognized that <ref> constitutes a conservation equation for the expression inside the parenthesis. In the next chapter we will show that this quantity indeed is the conserved charge associated with the time translation symmetry of our system. In general the geodesic equations do not single out the conserved quantities in such a simple fashion. There however exists an systematic procedure to identify the space-time symmetries of the system in the form of different so-called Killing vectors, each of which leads to one conserved quantity (see <ref>). Note that the geodesic equations <ref> are often written in a more concise fashion in the general relativity literature (see e.g. <cit.>). They are expressed for a general metric using the so-called Christoffel symbols Γ^α_μν=1/2g^αβ( ∂ g_βμ/∂ x_ν + ∂ g_βν/∂ x_μ - ∂ g_μν/∂ x_β), where g^αβ refers to the components of the inverse of the metric g_αβ. One obtains in short hand notation with Einstein summation implied d^2 x^α/d γ^2+Γ^α_μνd x^μ/d γd x^ν/d γ = 0 . It is important to note that the derivation of the above expression involves application of the product rule, which in the discrete setting is not valid. Therefore even though in the continuum <ref> and <ref> are equivalent, we will work solely with the former, as only integration by parts (which is exactly mimicked by summation by parts) has been used in their derivation. §.§ Conserved quantities, Noether's theorem and stability Conservation of momentum and energy in general relativity is conceptually more involved compared to flat space-time, since the comparison of two quantities at different space-time points becomes a non-trivial operation due to the effects of a non-flat metric. However there may exist a vector field K^μ(x) along which transported quantities remain constant. These vector fields are known as Killing[For completeness we note that a Killing vector field K_μ is defined as solution to the Killing equation (∂ K_μ/∂ x^ν-Γ^α_μνK_α) + (∂ K_ν/∂ x^μ-Γ^α_νμK_α) =0.] vector fields K^μ(x). The Killing vector fields are generators of infinitesimal isometries of the space-time manifold. Moving all points of the manifold in the direction of the Killing field leaves the manifold unchanged. As discussed in standard literature on general relativity (see e.g. chapter 3.8 of <cit.>), each Killing vector field K^μ can be used to define a conserved quantity Q_K via the expression Q_K=g_αβK^αẋ^β. 
Computing the change of Q_K along a geodesic, parameterized by γ, one finds from combining <ref> and the equation that defines the Killing vector that dQ_K/dγ=0, i.e. it vanishes. We will give an explicit example of such a conserved quantity below. More intuitively, one can think of the role of K^μ as pointing out directions along which the metric g of spacetime in our system remains constant. In the spirit of Noether's theorem, assume that the integrand E_ BVP of our action functional E_ BVP in <ref> remains unchanged under infinitesimal translations with magnitude ϵ in the direction of K^μ. The change in coordinates under such a shift is δ x^μ=ϵ K^μ. Noether's theorem tells us that the conserved quantity corresponding to δ x^μ is given by J=δ x^μ∂ E/∂ẋ^μ, which, when written explicitly as ϵ K^α g_αβdx^β/dγ, turns out to just be ϵ Q_K. In case of our geometrized problem of determining the dynamics of a point particle under the influence of a potential V(x), the metric remains independent of time t. Thus the vector K_t=(1,0) constitutes a Killing vector associated with time translation symmetry. The conservation of the associated conserved quantity Q_t=K_t^μ g_μνẋ^ν= g_00ṫ follows straight forwardly from the geodesic equation for t d/dγ Q_t K_t=(1,0)<ref>=d/dγ( g_00ṫ) <ref>=0 , i.e. the quantity Q_t remains constant along the geodesic. Note that this quantity is different from the usual energy considered in the non-relativistic formalism. Turning to the question of stability, let us show next that as a consequence of the presence of a conserved quantity together with the form of the geodesic equations and the reasonable assumption that the potential of the system is bounded from below, it is possible to provide an upper bound on the derivatives of the trajectories obtained as critical point of the functional <ref>. In an analogy to the construction of a Hamiltonian from a Lagrangian, we define the following H_ BVP =∫_γ_i^γ_fdγ1/2( g_00(x)(dt/dγ)^2 - g_11(dx/dγ)^2 )_H_ BVP, =∫_γ_i^γ_fdγ1/2( (c^2+2V(x)/m)(dt/dγ)^2 + (dx/dγ)^2 ). Due to the flipped sign in front of g_11, compared to the action <ref>, this quantity is actually positive definite, as long as V(x) is bounded from below[Since physical forces arise from the derivative of the potential, we may always add a constant to a bounded potential that will make g_00 positive.]. H_ BVP thus provides a norm on the function space in which t(γ) and x(γ) reside. Now let us inspect the evolution of the integrand H_ BVP d H_ BVP/dγ =1/2dg_00/dγ(dt/dγ)^2+ g_00dt/dγd^2t/dγ^2+dx/dγd^2x/dγ^2, = dx/dγ[ 1/2∂ g_00/∂ x(dt/dγ)^2 +d^2x/dγ^2] + g_00dt/dγd^2t/dγ^2, <ref>=g_00dt/dγ_Q_t const.d^2t/dγ^2. To arrive at the final expression in <ref>, we use the fact that one can rewrite dg_00/dγ=(∂ g_00(x)/∂ x) ẋ and combine the first and third term to apply <ref>. This simplification tells us that the change in H_ BVP is given solely by the second derivative of time with respect to the world-line parameter. Now we can integrate up twice H_ BVP= ∫_γ_i^γ_fdγ∫_γ_i^γdγ^' (dH_ BVP(γ^')/dγ^') to get H_ BVP = m g_00(x_i)ṫ(γ_i) ∫_γ_i^γ_f dγ( ṫ(γ)-ṫ(γ_i)), = g_00(x_i)ṫ(γ_i) ( -ṫ(γ_i)(γ_f-γ_i) + ( t(γ_f)-t(γ_i) ) ), ≤ g_00(x_i)ṫ(γ_i)( t(γ_f)-t(γ_i) ) . For the last inequality we use the fact that the world-line is parameterized by an increasing γ and correspondingly time moves forward along the world-line. 
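The constancy of Q_t along continuum geodesics can also be checked numerically before any discretization of the action is introduced. The sketch below is an independent Python/SciPy illustration (not the authors' Mathematica code); it integrates the geodesic equations for the linear potential V(x)=α x used later in the paper, in units with c=1 and m=1, and monitors Q_t = g_00 ṫ along the world line.

```python
import numpy as np
from scipy.integrate import solve_ivp

c, alpha = 1.0, 0.25                 # units with c = 1; linear potential V(x) = alpha*x, m = 1

def g00(x):       return c**2 + 2.0 * alpha * x
def dg00_dx(x):   return 2.0 * alpha

def geodesic_rhs(gamma, y):
    """y = (t, tdot, x, xdot): geodesic equations of the metric diag(g00(x), -1)."""
    t, tdot, x, xdot = y
    tddot = -dg00_dx(x) / g00(x) * xdot * tdot    # equivalent to d/dgamma (g00 * tdot) = 0
    xddot = -0.5 * dg00_dx(x) * tdot**2           # spatial geodesic equation
    return [tdot, tddot, xdot, xddot]

# Initial conditions matching the paper's example: t_i = 0, tdot_i = 1, x_i = 1, v_i = 0.1.
y0 = [0.0, 1.0, 1.0, 0.1]
sol = solve_ivp(geodesic_rhs, (0.0, 1.0), y0, rtol=1e-10, atol=1e-12)

Q_t = g00(sol.y[2]) * sol.y[1]                    # Noether charge Q_t = g00(x) * dt/dgamma
print("max |Q_t - Q_t(0)| =", np.max(np.abs(Q_t - Q_t[0])))   # of order the integration tolerance
```

In the continuum Q_t is only conserved up to the accuracy of the time stepper; the point of the discretization developed below is that the corresponding discrete charge is preserved exactly in the interior of the simulated domain.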
In the BVP setting, where both t(γ_i) and t(γ_f) are given apriori, <ref> constitutes a proof that the norm H_ BVP defined on the derivatives of the solution t and x grows at most linearly with time, precluding the occurrence of exponentially increasing behavior that would signal an instability, in turn establishing stability of the geometric approach. §.§ Initial value formulation So far we have shown how the geodesic equations <ref> can be obtained from a variational principle formulated as a boundary value problem in time. However for a causal description as an initial value problem, we must be able to determine the dynamics of the particle without knowledge of the final point of the trajectory. If one wishes to prescribe only initial values, i.e. positions and derivatives at γ_i, then the variations δ x^μ in <ref> do not vanish at the end of the particle world line, i.e. at γ_f. In turn the equivalence between the critical point of S and the Euler-Lagrange equations in <ref> does not hold. As discussed by <cit.> and put into practice in our previous publication <cit.> one can overcome this issue by constructing an action with doubled degrees of freedom, living on a closed contour with a forward and backward branch in γ. Since both time and space constitute dependent degrees of freedom in our approach, we need to introduce both forward and backward variants of each of them x_1(γ), x_2(γ) and t_1(γ), t_2(γ). The degrees of freedom on the forward contour enter the action functional with the usual Lagrangian, while those on the backward contour are assigned the negative Lagrangian. Choosing to build the doubled formalism based on the action E_ BVP we obtain E_ IVP =∫_γ_i^γ_f dγ E_ IVP[t_1,ṫ_1,x_1,ẋ_1,t_2,ṫ_2,x_2,ẋ_2], =∫_γ_i^γ_f dγ{ E_ BVP[t_1,ṫ_1,x_1,ẋ_1] - E_ BVP[t_2,ṫ_2,x_2,ẋ_2] }. As discussed in detail in <cit.>, the inner workings of the doubled formalism become more transparent, once we go over to expressing the action E_ IVP in terms of the central and difference coordinates x_+=1/2(x_1+x_2) and x_-=x_1-x_2 and t_+=1/2(t_1+t_2) and t_-=t_1-t_2 respectively. The variation now proceeds in the independent degrees of freedom x_± and t_± and yields δ E_ IVP[t_±,ṫ_±, x_±,ẋ_±]= ∫_γ_i^γ_f dγ{∂ E_ IVP/∂ t_+δ t_+ + ∂ E_ IVP/∂ṫ_+δṫ_+ + ∂ E_ IVP/∂ t_-δ t_- + ∂ E_ IVP/∂ṫ_-δṫ_- + ∂ E_ IVP/∂ x_+δ x_+ + ∂ E_ IVP/∂ẋ_+δẋ_+ + ∂ E_ IVP/∂ x_-δ x_- + ∂ E_ IVP/∂ẋ_-δẋ_- } = ∫_γ_i^γ_f dγ{( ∂ E_ IVP/∂ t_+ - d/dγ∂ E_ IVP/∂ṫ_+)δ t_+ +( ∂ E_ IVP/∂ t_- - d/dγ∂ E_ IVP/∂ṫ_-)δ t_- + ( ∂ E_ IVP/∂ x_+ - d/dγ∂ E_ IVP/∂ẋ_+)δ x_+ + ( ∂ E_ IVP/∂ x_- - d/dγ∂ E_ IVP/∂ẋ_-)δ x_-} + .[ ∂ E_ IVP/∂ṫ_+δ t_+ ]|_γ_i^γ_f + .[ ∂ E_ IVP/∂ṫ_-δ t_- ]|_γ_i^γ_f + .[ ∂ E_ IVP/∂ẋ_+δ x_+ ]|_γ_i^γ_f + .[ ∂ E_ IVP/∂ẋ_-δ x_- ]|_γ_i^γ_f. To arrive at <ref> we have carried out four integrations by parts. As the next step, we consider under which conditions the boundary terms in the above expression vanish. Since we prescribe fixed initial values for both time and space, the variations δ t_±(γ_i)=0 and δ x_±(γ_i)=0 vanish. What about the variations at the end of the forward and backward world-line? As long as we require that x_2(γ_f)=x_1(γ_f), t_2(γ_f)=t_1(γ_f), it follows that δ x_-(γ_f) and δ t_-(γ_f) vanish and with it the corresponding boundary terms. The only remaining terms are those at γ_f which feature δ x_+ and δ t_+. As these variations do not vanish, we instead inspect the terms multiplying them, i.e. ∂ E_ IVP/∂ṫ_+ and ∂ E_ IVP/∂ẋ_+. 
Using the definition x_1=x_+ + 1/2 x_- and x_2=x_+ - 1/2 x_- and correspondingly for t_1,2, we find from the defining equation for E_ IVP <ref> d E_ IVP/d ẋ_+ = ∂ E_ IVP[t_1,2,ṫ_1,2,x_1,2,ẋ_1,2]/∂ẋ_1d ẋ_1/d ẋ_+ + ∂ E_ IVP[t_1,2,ṫ_1,2,x_1,2,ẋ_1,2]/∂ẋ_2d ẋ_2/dẋ_+, = ∂ E_ BVP[t_1,ṫ_1,x_1,ẋ_1]/∂ẋ_1d ẋ_1/d ẋ_+ - ∂ E_ BVP[t_2,ṫ_2,x_2,ẋ_2]/∂ẋ_2d ẋ_2/dẋ_+, = g_11(x_1)ẋ_1-g_11(x_2)ẋ_2=-ẋ_1+ẋ_2. Similarly one obtains d E_ IVP/d ṫ_+ = g_00(x_1)ṫ_1-g_00(x_2)ṫ_2. Together with condition <ref> that the values of x_1,2 and t_1,2 must agree at γ_f, this result tells us that in order for the two remaining boundary terms to vanish, we need to also identify the derivatives of x_1,2 and t_1,2 at the point γ_f ẋ_2(γ_f)=ẋ_1(γ_f), ṫ_2(γ_f)=ṫ_1(γ_f). Note that we have now managed to remove the boundary terms without the need for specifying the concrete value of t's and x's at the final point γ_f. This is the central contribution of the forward-backward construction. The last remaining step is to undo the proliferation of degrees of freedom that occurred when introducing the forward-backward construction. It has been shown <cit.> that taking the so-called physical limit achieves this goal, where the constraints x_1(γ)-x_2(γ)=x_-(γ)=0 and t_1(γ)-t_2(γ)=t_-(γ)=0 are enforced. The remaining x_+ and t_+ are identified with the true classical geodesics. In terms of the Euler-Lagrange equations in parentheses in <ref> ∂ E_ IVP/∂ x_± - d/dγ∂ E_ IVP/∂ẋ_±=0, ∂ E_ IVP/∂ t_± - d/dγ∂ E_ IVP/∂ṫ_±=0, the physical limit entails that only those equations independent of x_- and t_- survive. With the construction of the action E_ IVP= E_ BVP[x_1,ẋ_1,t_1,ṫ_1] - E_ BVP[x_2,ẋ_2,t_2,ṫ_2] from a difference of the E_ BVP functionals, there will appear at least a linear dependence on the minus degrees of freedom. Hence in the physical limit only those Euler-Lagrange equations linear in x_- and t_- will survive, where the minus degrees of freedom have been removed by taking the derivative with respect to x_- or t_-. Note that we have decided to not only specify the value and derivative of x at initial γ_i but also those of t. As we wish to determine the dynamics of a point particle in the presence of a potential with given x(t_i) and dx/dt(t_i), there remains a freedom in choosing ẋ(γ_i) and ṫ(γ_i), since only their ratio needs to be fixed dx/dt(t=t_0)=ẋ(γ_i)/ṫ (γ_i). The end of the time interval traversed by the world line parameter γ, will consequently depend on the value prescribed to ṫ(γ_i) and emerges dynamically from the combined evolution of x and t. At this point we have formulated a manifest time translation symmetric variational principle that encodes the dynamics of a point particle evolving in the presence of a non-relativistic potential as initial value problem. Our next goal is to discretize the action functional E_ IVP in <ref> using SBP finite difference operators. Since all derivations of the Euler-Lagrange equations, as well as that of the conserved quantity Q_t have made ample reference to integration by parts, it is paramount to use such a discretization technique, which faithfully mimics this continuum property on a finite mesh. § DISCRETIZED FORMALISM FOR IVPS The central novelty we introduce in this section is related to the fact that the discretization of the action functional takes place in the world-line parameter γ and not in the time variable t, as in conventional discretization prescriptions. I.e. 
the values of both time t(γ) and position x(γ) remain continuous and in turn we achieve preservation of the continuum space-time symmetries even after discretization. In the presence of a potential that depends on x but not on t, the invariance under infinitesimal constant shifts in time is hence retained. This comes about, since the metric remains invariant under changes in t, which in turn leads to a simple form of the corresponding Killing equation, which shows that K_t=(1,0) indeed is a Killing vector. The symmetry of the metric under time translation is intimately related to energy conservation via Q_t and thus the stability of the simulation. In the absence of a potential, when the metric does not depend on neither t nor x, our discretized approach, in addition to K_t=(1,0), retains the continuum invariance under shifts in x via the Killing vector K_x=(0,1), as well as the invariance under boosts via the Killing vector K_η=(x,t). We will give numerical evidence that we achieve exact conservation of Q_t in the interior of the simulated domain, even in the case of highly non-harmonic motion. In contrast to other formally energy preserving schemes, such as the leap-frog, our approach, using SBP operators, is consistent with the continuum formulation, in that it only requires the actual initial conditions of the system at hand, avoiding the need to stagger the degrees of freedom (also known as insertion of dummy points). After introducing the discretization on the level of the underlying action functional, we will obtain the classical trajectory by numerically finding the critical point of that functional without the need to derive the corresponding equations of motion. To make sure that the solution of the discretized variational principle mimics as accurately as possible the continuum theory, we deploy summation-by-parts finite difference operators <cit.>. Note that we are discretizing the world-line parameter γ with equidistant steps, whereas both the values of t and x arise dynamically from the evolution of the simulation along γ. I.e. a not necessarily equidistant discretization of the time coordinate emerges dynamically in our approach. As we will see in <ref> this dynamical time discretization realizes a one-dimensional form of automatic adaptive mesh refinement, guided by the symmetries of the system. I.e. the non-equidistant discretization in t plays a crucial role in guaranteeing that the Noether charge Q_t remains conserved. Another non-standard feature of our technique is the departure from the conventional notion of carrying out a simulation on a predefined time interval. We instead provide the initial time and its velocity with respect to γ, so that the end-point of the simulation too emerges dynamically. In the following we will consider the trajectory of a point particle propagating under the influence of an arbitrary x but not t dependent potential V(x). We begin by discretizing the action functional E_ IVP of <ref> along the world-line parameter γ between γ_i and γ_f with N_γ steps, leading to a step-size of dγ=(γ_f-γ_i)/(N_γ-1). We will add to E_ IVP Lagrange multipliers to explicitly account for both the initial conditions and the connecting conditions required by doubling of the degrees of freedom. The forward and backward paths x_1,2 and times t_1,2 are described by x_1,2=(x_1,2(0),x_1,2(Δγ),x_1,2(2Δγ),…,x_1,2((N_γ-1)ΔΓ))^ T and t_1,2=(t_1,2(0),t_1,2(Δγ),t_1,2(2Δγ),…,t_1,2((N_γ-1)Δγ))^ T respectively. 
The integral in E_ IVP is approximated with a quadrature rule, consistent with our choice of finite difference operator, in the form of a diagonal positive definite matrix H. The inner product on discretized paths and times thus reads ( x, x^')= x^ TH x^'. With integration by parts being a central element in establishing both equations of motion and the existence of conserved quantities, we must use a discretization that mimics IBP exactly, which is achieved by deploying summation-by-parts (SBP) operators D with the defining properties D=H^-1Q, Q^ T+Q=E_N-E_0= diag[-1,0,…,0,1]. In this study we consider both the lowest order SBP discretization scheme, referred to as SBP[2,1], and the next higher order scheme SBP[4,2]. The former is second order in the interior and exhibits one order less on the boundary. Using the trapezoidal rule for integration one has H^[2,1]=Δγ[ [ 1/2 ; 1 ; ⋱ ; 1; 1/2 ]], D^[2,1]= 1/(2Δγ)[ [ -2 2 ; -1 0 1 ; ⋱ ; -1 0 1; -2 2 ]]. The SBP[4,2] scheme achieves fourth order accuracy in the interior, which reduces to second order on the boundary H^[4,2]=Δγ[ [ 17/48 ; 59/48 ; 43/48 ; 49/48 ; 1 ; ⋱; ]], D^[4,2]= 1/Δγ[ [ -24/17 59/34 -4/17 -3/34 ; -1/2 0 1/2 0 ; 4/43 -59/86 0 59/86 -4/43 ; 3/98 0 -59/86 0 32/49 -4/49 ; 1/12 -2/3 0 2/3 -1/12 ; ⋱ ]]. The SBP operators defined above are not yet ready for duty in our variational approach, as they allow for non-physical zero modes. As discussed in detail in <cit.>, we can construct null-space consistent[Note that in the context of PDEs, SBP operators are considered null-space consistent by construction, as only their right eigenvectors play a role in the equation of motion. Here due to the presence of D^ T in the action functional, also the left eigenvectors contribute, among which a highly oscillating null-mode (the so-called π-mode) can be identified (see ref. <cit.>)] SBP operators D̅ from the conventional D by deploying affine coordinates and by absorbing penalty terms, inspired by the simultaneous-approximation terms (SAT) technique <cit.>, used to regularize SBP operators. A brief overview of this regularization is given in <ref>. The idea behind the penalty term construction is that we are assigning a penalty to all functions that do not fulfill the initial conditions in t and x, which includes the non-physical zero mode of D. In turn, when we search for the critical point of the discretized action functional E_ IVP, the minimizer will approach the correct solution globally and the presence of the penalty term effectively prevents contamination of the correct solution by the non-constant zero mode. Explicitly our regularized and null-space consistent operators read D̅^ R,[2,1]_t= [ [ -1/Δγ + 2/Δγ 1/Δγ -2/Δγ t_i; -1/(2Δγ) 0 1/(2Δγ) 0; ⋱ ⋮; -1/(2Δγ) 0 1/(2Δγ) 0; -1/Δγ 1/Δγ 0; 0 … 0 1; ]] D̅^ R,[2,1]_x= [ [ -1/Δγ + 2/Δγ 1/Δγ - 2/Δγ x_i; -1/(2Δγ) 0 1/(2Δγ) 0; ⋱ ⋮; -1/(2Δγ) 0 1/(2Δγ) 0; -1/Δγ 1/Δγ 0; 0 … 0 1; ]]. Using the operators defined above, we can now write the discretized action functional in the following fashion E_ IVP= 1/2{ (D̅^ R_t t_1)^ T𝕕[c^2+2 V( x_1)/m] H̅ (D̅^ R_t t_1) - (D̅^ R_x x_1)^ TH̅ (D̅^ R_x x_1)} - 1/2{(D̅^ R_t t_2)^ T𝕕[c^2+2 V( x_2)/m] H̅ (D̅^ R_t t_2) - (D̅^ R_x x_2)^ TH̅ (D̅^ R_x x_2)} + λ_1( t_1[1]-t_i)+λ_2((D t_1)[1]-ṫ_i)+λ_3( x_1[1]-x_i) + λ_4((D x_1)[1]-ẋ_i) + λ_5( t_1[N_γ]- t_2[N_γ]) + λ_6( x_1[N_γ]- x_2[N_γ]) + λ_7( (D t_1)[N_γ]- (D t_2)[N_γ])+λ_8( (D x_1)[N_γ]- (D x_2)[N_γ]). Conventional matrix vector multiplication is implied in the above expression, whenever a matrix quantity such as H̅ or D̅ acts on a vector x_1,2 or t_1,2. 
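As an aside before continuing with the discrete action functional, the defining SBP property and the accuracy of the lowest-order operator pair can be verified in a few lines. The following NumPy sketch (an independent illustration, not the authors' Mathematica implementation, and without the affine regularization discussed above) assembles H^[2,1] and D^[2,1] as written in the text and checks that Q^T+Q=E_N-E_0 holds to machine precision and that D differentiates linear functions exactly.

```python
import numpy as np

def sbp21(N, dgamma):
    """Lowest-order SBP pair (H, D): trapezoidal quadrature and central differences
    with one-sided boundary closures, as written in the text."""
    h = np.ones(N); h[0] = h[-1] = 0.5
    H = dgamma * np.diag(h)
    D = np.zeros((N, N))
    D[0, 0], D[0, 1] = -1.0, 1.0
    D[-1, -2], D[-1, -1] = -1.0, 1.0
    for i in range(1, N - 1):
        D[i, i - 1], D[i, i + 1] = -0.5, 0.5
    return H, D / dgamma

N, dgamma = 32, 1.0 / 31
H, D = sbp21(N, dgamma)

# SBP property: Q + Q^T = E_N - E_0 = diag(-1, 0, ..., 0, 1) with Q = H D.
Q = H @ D
B = np.zeros((N, N)); B[0, 0] = -1.0; B[-1, -1] = 1.0
print("SBP property residual:", np.max(np.abs(Q + Q.T - B)))        # ~ 1e-16

# D is exact on constants and linear functions, consistent with its accuracy orders.
gamma = np.linspace(0.0, 1.0, N)
print("max |D @ gamma - 1| =", np.max(np.abs(D @ gamma - 1.0)))     # ~ 1e-15
```

The same check can be repeated for the [4,2] coefficients quoted above; the affine regularization only modifies the boundary rows and appends the affine column carrying the initial data.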
The matrix denoted by 𝕕[f( x)] contains on its diagonal the values 𝕕_kk=f( x(γ_k)) and zero otherwise. We deploy an appropriately modified matrix H̅ for the inner product in the presence of the affine-coordinate regularized SBP operators (see <ref>). The initial conditions we supply are the values of the spatial and temporal coordinate x_i, t_i, as well as the initial velocities with respect to the world line parameter γ, i.e. ẋ_i and ṫ_i. Since our physical problem is formulated as an initial value problem, given t_i, x_i and the physical velocity v_i=dx/dt, there exists a freedom to choose ṫ_i and ẋ_i, as only their ratio is fixed v_i=ẋ_i/ṫ_i. We have added eight Lagrange multipliers, whose role is to explicitly implement the initial conditions (λ_1-4) and the connecting conditions at the end of the forward and backward branches of our doubled degree of freedom construction (λ_5-8). Once the action functional has been formulated in its discrete form, changing from to only requires replacement of the corresponding difference operator D and quadrature matrix H but no further changes to the functional itself. This concludes the description of our novel variational approach and we proceed to evaluate its properties and performance based on two concrete numerical examples. § NUMERICAL RESULTS In this section we will present explicit results for the numerically obtained classical trajectory of a point particle in the presence of two different potentials, V_1(x)=α x and V_4(x)=κ x^4. These two choices correspond to a model of a point mass falling in a constant gravitational field and carrying out highly-nonlinear anharmonic motion. We set the mass of the particle to unity, as well as adopt without loss of generality the convention that the speed of light c=1, which simply amounts to a particular choice of units for length and time. Let us stress again that while standard numerical methods exist to solve the equations of motion for each of these systems, the novelty of the approach presented here lies in the fact that we retain the continuum time shift invariance of the system and thus achieve exact conservation of Q_t in the interior of the simulated time domain. In addition we determine the classical trajectory directly from the action functional of the geometrized problem, without the need to derive the equation of motion. We implement the action functional <ref> in the Mathematica language[The code using both the or operator is available under open access on the Zenodo repository <cit.>.]. As the critical point of the action may be a saddle point, instead of an actual minimum, we must be careful in deploying established numerical optimization algorithms in the dynamical degrees of freedom d={ t_1,2, x_1,2,λ_1-8}. Instead of minimizing E_ IVP directly, we will minimize the Euclidean norm of the gradient |∇_ dE_ IVP|^2. Via this detour, a saddle point is converted into a minimum. In practice we deploy a chain of minimization algorithms. We start with a preconditioning based on the LBFGS quasi-Newton algorithm, which features cost efficient iteration steps, when far away from the true critical point. It is followed by further iterations based on the full Newton method, which exhibits a faster convergence rate than the LBFGS algorithm when close to the critical point. Once the critical point has been approached to at least floating point precision we switch to the interior point optimization, which showed reliable performance in identifying the critical point to any desired tolerance. 
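The reason for minimizing |∇_ d E_ IVP|^2 rather than E_ IVP itself can be illustrated on a deliberately simple stand-in problem (our own two-dimensional toy example, not the actual doubled-degree-of-freedom functional): a functional whose critical point is a saddle cannot be found by direct minimization, but it becomes the global minimum, with value zero, of the squared gradient norm.

```python
import numpy as np
from scipy.optimize import minimize

# Toy functional with a saddle point at u = (1, 2); direct minimization would run away,
# since E is unbounded from below, but |grad E|^2 has its global minimum there.
def grad_E(u):
    x, y = u
    return np.array([(x - 1.0) + 0.1 * (y - 2.0),     # dE/dx for E = (x-1)^2/2 - (y-2)^2/2
                     -(y - 2.0) + 0.1 * (x - 1.0)])   #              + 0.1*(x-1)*(y-2)

def grad_norm_sq(u):
    g = grad_E(u)
    return g @ g

res = minimize(grad_norm_sq, x0=np.array([5.0, -3.0]), method='BFGS', tol=1e-14)
print("critical point found at", res.x, " |grad E|^2 =", res.fun)   # -> (1, 2), ~0
```

In the actual computation the same idea is applied to the full set of degrees of freedom d={t_1,2, x_1,2, λ_1-8}, with the chain of LBFGS, Newton, and interior-point steps described above.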
For our numerical tests in Mathematica, we used a working precision of 40 digits and an accuracy goal of 40 digits. The figures shown in the following are based on results from the SBP[2,1] operator and include the outcomes from the SBP[4,2] operator when indicated in the text. §.§ Linear potential case We discretize the continuous action functional E^ lin_ IVP= ∫_γ_i^γ_fdγ1/2{( 1+2α x_1(γ) ) (d t_1/dγ)^2-(d x_1/dγ)^2} - ∫_γ_i^γ_fdγ1/2{( 1+2α x_2(γ) ) (d t_2/dγ)^2-(d x_2/dγ)^2} + λ_1(t_1(γ_i)-t_i)+λ_2(ṫ_1(γ_i)-ṫ_i)+λ_3(x_1(γ_i)-x_i)+λ_4(ẋ_1(γ_i)-ẋ_i) + λ_5(t_1(γ_f)-t_2(γ_f)) +λ_6(ṫ_1(γ_f)-ṫ_2(γ_f)) + λ_7(x_1(γ_f)-x_2(γ_f)) +λ_8(ẋ_1(γ_f)-ẋ_2(γ_f)) along the world-line of the particle motion between γ_i=0 and γ_f=1 with N_γ=32 points. Without loss of generality, we set the starting time to t_i=0 and the starting position to x_i=1. To obtain an initial velocity v_i=1/10 we choose ṫ=1 and ẋ=v_i. Note that we do not fix the value of t_f but only the initial velocity of time with respect to γ. The choice of ṫ=1 will lead to dynamics such that t_f will be of the order of one. (In the next subsection we will also provide results for different choices of ṫ_i.) As strength for the linear potential we choose α=1/4. The corresponding discrete action functional reads explicitly E^ lin_ IVP= 1/2{ (D̅^ R_t t_1)^ T𝕕[1+2 α x_1]H̅ (D̅^ R_t t_1) - (D̅^ R_x x_1)^ TH̅ (D̅^ R_x x_1)} - 1/2{ (D̅^ R_t t_2)^ T𝕕[1+2 α x_2] H̅ (D̅^ R_t t_2) - (D̅^ R_x x_2)^ TH̅ (D̅^ R_x x_2)} + λ_1( t_1[1]-t_i)+λ_2((D t_1)[1]-ṫ_i) + λ_3( x_1[1]-x_i)+λ_4((D x_1)[1]-ẋ_i) + λ_5( t_1[N_γ]- t_2[N_γ]) + λ_6( x_1[N_γ]- x_2[N_γ]) + λ_7( (D t_1)[N_γ]- (D t_2)[N_γ])+λ_8( (D x_1)[N_γ]- (D x_2)[N_γ]). Let us take a look in <ref> at the raw results for the forward and backward time and spatial coordinates, as obtained from the critical point of E^ lin_ IVP with V(x)=α x. In the top panel, we show t_1(γ_i) as red circles and t_2(γ_i) as blue crosses, while in the bottom panel these symbols denote the spatial coordinate of the point particle trajectory x_1(γ_i) and x_2(γ_i) respectively. As required by the physical limit (discussed in <ref>), we find that the values of the doubled degrees of freedom coincide at the critical point. The solution of the corresponding continuum geodesic equations, obtained numerically with Mathematica's built-in ODE solver, is shown as a gray solid line and excellent agreement is observed. Note that due to our choice of ṫ_i=1 the maximum time traversed by the simulation is close to one. At first sight it appears that an equidistant discretization of time in γ emerges, but an inspection of the velocity of time with respect to γ in <ref> reveals that the time spacing dynamically adapts to the behavior observed in the spatial coordinate x. Close to the maximum of x(γ) at around γ=0.4 the temporal spacing e.g. has a minimum. This dynamically emerging time discretization constitutes an automatically generated non-trivial mesh for the time coordinate and arises naturally in our formalism. In fact an automatic AMR procedure results. Let us next plot in <ref> the results from our geometrized formalism as physical trajectory, i.e. as x_1,2(t_1,2) (red circles and blue crosses). This allows us to compare the outcome to the solution one would obtain by following the conventional approach in the literature (see e.g. chapter 7.9 in <cit.>). There one considers time as independent variable and simply adds a potential term to the free relativistic action <ref> before deriving the corresponding Euler-Lagrange equation, which for the linear potential reads d^2x/dt^2 = -α(1-(dx/dt)^2)^(3/2). 
Using Mathematica's built-in ODE solver, we compute the solution of this equation of motion and plot it as a gray solid line. Excellent agreement with the solution from our variational approach is observed, indicating that the geometrization strategy indeed reproduces the solution of the physical problem at hand. Note that the change in the velocity of the time coordinate manifests itself here as a slightly denser time grid around the maximum of the trajectory. After this qualitative visual inspection, let us take a closer look at the properties of the obtained solution. The first question we may ask is how well the solution quantitatively follows the naively discretized geodesic equations for time <ref> and space <ref> respectively. The continuum geodesic equations for the system at hand read d/dγ(g_00dt/dγ)=d/dγ( (1+2 α x) dt/dγ) =0, d/dγ(dx/dγ) + 1/2∂ g_00/∂ x( dt/dγ)^2=d^2x/dγ^2 + α( dt/dγ)^2=0. When deriving these equations of motion from the continuum action functional <ref> we have only used integration by parts. This motivates us to proceed by considering them naively discretized, replacing the derivatives with SBP finite difference operators D( (1+2α x )∘(D t) )=Δ G^t, DD x + α (D t)∘ (D t)=Δ G^x. Here element-wise multiplication of entries of vector quantities is explicitly denoted by the symbol ∘, which implements e.g. x_1∘ x_2= ( x_1(0)x_2(0), x_1(Δγ)x_2(Δγ),…, x_1(Δγ(N_γ-1))x_2(Δγ(N_γ-1)))^ T. Note that we have introduced on the right of the above equations two quantities Δ G^x and Δ G^t, which denote the deviation from the value zero, to which the equations of motion evaluate in the continuum. By inspecting Δ G^x and Δ G^t for the trajectories x_1,2 and t_1,2 obtained from the critical point of the discretized action functional E^ lin_ IVP, we can obtain first quantitative insight into the performance of our variational approach. We plot the values of both quantities Δ G^x and Δ G^t in the top panel of <ref>. At first sight we find that deviations from the naively discretized geodesic equations are minute, except for the two last points. Note that the plot is given in logarithmic scale. Since we use a minimizer in Mathematica with the working precision set to 40 digits, values of Δ G below 10^-30 reflect a true zero. It is apparent that both the naively discretized geodesic equations for x and t are fulfilled down to machine precision. Let us proceed to the central quantity of interest in this study, Q_t, defined in <ref>, which in the continuum represents the conserved quantity associated with the time-translation symmetry of the system. We again consider its naively discretized form in the following Q_t=(D t)∘( 1 + 2α x). With the discrete action functional E^ lin_ IVP retaining manifest invariance under shifts in the time coordinates t_1,2 we wish to investigate whether also the discretized Q_t retains its role as conserved Noether charge. To this end let us focus here on the deviation Δ E of Q_t from its continuum value Q_t^ cont: Δ E = Q_t - Q_t^ cont = (D t)∘( 1 + 2α x) - ṫ_i (1+2α x_i). Note that Q_t takes on the continuum value by construction at the first point in γ, as there it is defined by the initial conditions. The values obtained for Δ E from the critical point of E^ lin_ IVP using either the SBP[2,1] (red circles) or the SBP[4,2] operator (blue crosses) are shown in the bottom panel of <ref>. There are two important observations to be made. First, the discretized quantity Q_t is exactly conserved in the discrete setting in the interior of the simulated time domain and only at the final point γ_f does it deviate from that constant. 
While the deviation Δ E(γ_f) in case of the SBP[2,1] operator is already smaller than two permille, it reduces even further to a value of 10^-6 when deploying the SBP[4,2] operator. We have investigated various potential reasons for the slight difference at the final point, such as a potential over-constraint from the connecting conditions in <ref>, but we have not identified the source as of yet. One avenue to explore in the future is whether the exact enforcement of the connecting conditions plays a role, which however requires the development of a genuinely weak formulation of our approach without the use of Lagrange multipliers. It is important to point out that, as we will show explicitly below, the presence of this final differing point does not spoil the convergence to the correct continuum limit. Secondly, the value of Q_t that remains conserved in the interior agrees with the true continuum value, prescribed by the initial conditions, within machine precision. This is a highly non-trivial result, as even in energy preserving schemes, such as the leap-frog, the conserved quantities do not necessarily agree with the continuum ones. We surmise that it is the interplay of a manifest time-translation invariant formulation of the action functional, together with the resulting dynamically emerging time discretization, which achieves the conservation of the discrete Q_t at its continuum value in the interior of the simulation domain. The presence of two points that deviate from the naively discretized continuum geodesic equations may appear troublesome. However as we show in <ref> these points do not spoil the convergence to the correct continuum limit under grid refinement. In the top panel of <ref>, we select the apparently most disadvantageous points for our convergence study, i.e. we compare the deviation from the continuum solution ϵ(γ_f)_x=| x[N_γ]-x_ true(1)| and ϵ(γ_f)_t=| t[N_γ]-t_ true(1)| at γ_f, exactly where the deviation from the continuum result was maximal in the top panel of <ref>. Grid refinement is carried out and we provide the results for both the lowest order SBP[2,1] operator and the next higher SBP[4,2] operator. Even in this disadvantaged scenario, we find that under grid refinement, the discrete solution approaches the true continuum values as expected from a scheme that is second order in the interior. Taking the SBP[2,1] results, the best fit to ϵ_x reveals a scaling with Δγ^2.08, while for ϵ_t a virtually identical Δγ^2.07 ensues. Going over to the SBP[4,2] results we find that the convergence is in line with expectations for an SBP operator of 4th order in the interior with ϵ_x exhibiting a scaling of Δγ^3.07 and ϵ_t a somewhat better value of Δγ^3.48. In the bottom panel of <ref> we instead investigate the global convergence of our approach using the L_2 norm ϵ(L_2)_x=√(( x- x_ true)^ T H ( x- x_ true)) and ϵ(L_2)_t=√(( t- t_ true)^ T H ( t- t_ true)), where x_ true and t_ true are taken from the numerical solution of the geodesic equations, used for comparison in <ref>. We find that similar convergence rates ensue, where the SBP[2,1] operator shows scaling Δγ^β with exponent β≥2 and the SBP[4,2] operator shows scaling with exponent β≥ 3. These convergence results agree with the findings of our previous study <cit.>, where the standard action functional was discretized with time as independent parameter. 
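The scaling exponents quoted above correspond to straight-line fits on log-log axes; the observed order of accuracy can equivalently be estimated pairwise between successive grid refinements. The short sketch below illustrates the bookkeeping with hypothetical placeholder error values (in practice one would insert the measured L_2 errors from the refinement study).

```python
import numpy as np

# Pairwise observed order p = log(e_coarse/e_fine) / log(h_coarse/h_fine).
h   = np.array([1/15, 1/31, 1/63, 1/127])           # grid spacings Delta-gamma
err = np.array([4.1e-3, 9.8e-4, 2.4e-4, 5.9e-5])    # hypothetical L2 errors, placeholders only

p = np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])
print("observed orders between successive grids:", np.round(p, 2))

# Global estimate from a least-squares fit of log(err) = p*log(h) + const.
p_fit = np.polyfit(np.log(h), np.log(err), 1)[0]
print("least-squares order estimate:", round(p_fit, 2))
```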
§.§ Quartic potential After considering the simplest possible non-trivial scenario with a linear potential, we now turn to a system with a quartic potential and the following continuum action functional E^ qrt_ IVP= ∫_γ_i^γ_fdγ1/2{( 1+2κ x_1^4(γ) ) (d t_1/dγ)^2-(d x_1/dγ)^2} - ∫_γ_i^γ_fdγ1/2{( 1+2κ x_2^4(γ) ) (d t_2/dγ)^2-(d x_2/dγ)^2} + λ_1(t_1(γ_i)-t_i)+λ_2(ṫ_1(γ_i)-ṫ_i)+λ_3(x_1(γ_i)-x_i)+λ_4(ẋ_1(γ_i)-ẋ_i) + λ_5(t_1(γ_f)-t_2(γ_f)) +λ_6(ṫ_1(γ_f)-ṫ_2(γ_f)) + λ_7(x_1(γ_f)-x_2(γ_f)) +λ_8(ẋ_1(γ_f)-ẋ_2(γ_f)). Again we discretize the world-line parameter γ with N_γ=32 points. Using κ=1/2 in the potential V(x)=κ x^4 leads to dynamics that are distinctly anharmonic already in the small-time regime considered here. As in the previous subsection we discretize the world-line of the particle motion between γ_i=0 and γ_f=1, set the starting time to t_i=0 and the starting position to x_i=1. For our choice of v_i=1/10 we again choose ṫ_i=1 and ẋ_i=v_i. The discretized action functional thus reads E^ qrt_ IVP= 1/2{ (D̅^ R_t t_1)^ T𝕕[1+2 κ x^4_1] H̅ (D̅^ R_t t_1) - (D̅^ R_x x_1)^ TH̅ (D̅^ R_x x_1)} - 1/2{ (D̅^ R_t t_2)^ T𝕕[1+2 κ x^4_2] H̅ (D̅^ R_t t_2) - (D̅^ R_x x_2)^ TH̅ (D̅^ R_x x_2)} + λ_1( t_1[1]-t_i)+λ_2((D t_1)[1]-ṫ_i) + λ_3( x_1[1]-x_i)+λ_4((D x_1)[1]-ẋ_i) + λ_5( t_1[N_γ]- t_2[N_γ]) + λ_6( x_1[N_γ]- x_2[N_γ]) + λ_7( (D t_1)[N_γ]- (D t_2)[N_γ])+λ_8( (D x_1)[N_γ]- (D x_2)[N_γ]), where taking the fourth power of the x_1,2 vectors is to be understood in an element-wise fashion. While for the linear potential the time geodesic appeared to depend almost linearly on γ, we find that here a distinct curvature along γ emerges, as shown in the top panel of <ref>. We plot the values of t_1(γ_i) as red circles and t_2(γ_i) as blue crosses and show as a gray solid line the solution of the corresponding geodesic equation, obtained from the algorithm of Mathematica's command. Again the physical limit of equal values t_1(γ)= t_2(γ) is realized. The values of the spatial coordinate x_1(γ_i) and x_2(γ_i) as obtained from the critical point of E^ qrt_ IVP with V(x)=κ x^4 are plotted in the bottom panel of <ref>, with the direct numerical solution of the geodesic equation added as a gray solid line. Note that even though we have again provided an initial velocity of the time along γ with value ṫ_i=1, the final time reached by the simulation now lies at t[N_γ]=1.47. Similarly one finds that a dynamical time discretization emerges, which, as shown in <ref>, varies from the initial value ṫ_i=1 to (D t)[N_γ]=2.06. This behavior can be understood by realizing that the trajectory x(t) in the non-linear case shows a stronger curvature close to t=0 than at later times. That is, we find again that the automatically generated non-trivial mesh for the time coordinate (through automatic AMR) adapts to the dynamics by exhibiting a finer spacing at early times. Let us take a look at the results from our geometrized formalism as a physical trajectory in <ref>, i.e. plotted as x_1,2(t_1,2) (red circles and blue crosses). They are compared to the solution of the conventional equation of motion, obtained by treating time as the independent variable, d^2x/dt^2 = -(4κ x^3 )(1-(dx/dt)^2)^(3/2), computed via the algorithm of Mathematica's command (gray solid line) in the range t∈[0,1]. We find that within this range the solution from our geometrized discrete approach shows excellent agreement. Note that due to the non-equidistant emergent time discretization, the physical trajectory x(t) shown in <ref> extends beyond the point t=1.
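A reference trajectory x(t) of the conventional equation of motion quoted above can be generated, for instance, as in the following sketch; the SciPy integrator and its settings stand in for the (unnamed) Mathematica routine used in the text and are purely illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def physical_trajectory_quartic(kappa=0.5, x_i=1.0, v_i=0.1, t_max=1.0):
    """Solve d^2x/dt^2 = -4 kappa x^3 (1 - (dx/dt)^2)^(3/2) with time as the
    independent variable, as a reference for the geometrized result."""
    def rhs(t, y):
        x, v = y
        return [v, -4.0 * kappa * x**3 * (1.0 - v**2)**1.5]

    ts = np.linspace(0.0, t_max, 201)
    sol = solve_ivp(rhs, (0.0, t_max), [x_i, v_i], t_eval=ts,
                    rtol=1e-12, atol=1e-12)
    return sol.t, sol.y[0]

# kappa, x_i and v_i follow the values quoted in the text
t_ref, x_ref = physical_trajectory_quartic()
```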
As for the linear potential, let us investigate quantitatively the properties of the trajectories t(γ_i) and x(γ_i) by inserting them into the naively discretized geodesic equations. For the quartic potential, the continuum geodesic equations for the temporal and spatial coordinate read d/dγ(g_00dt/dγ)=d/dγ( (1+2 κ x^4) dt/dγ) =0, d/dγ(dx/dγ) + 1/2∂ g_00/∂ x( dt/dγ)^2=d^2x/dγ^2 + 4κ x^3( dt/dγ)^2=0. Naively discretizing these equations by replacing derivatives with SBP operators leads to the following discrete geodesic equations D( (1+2κ x^4 )∘D t)=Δ G^t, DD x + (4κ x^3) ∘ (D t)∘ (D t)=Δ G^x. where again taking a power of the x_1,2 vector is to be understood in an element wise fashion. To evaluate how well the solution obtained from the critical point of E^ qrt_ IVP fulfills the naive discretized geodesic equations we have again introduced the quantities Δ G^t and Δ G^x above. As shown in <ref> also here in the highly non-linear scenario, we find that the values of both x (red circles) and t (blue crosses) follow the discretized geodesic equations excellently, except for the last two points. The most important question however remains whether in the non-linear discretized system, the continuum quantity Q_t from <ref> also remains conserved. Its naively discretized counterpart here reads Q_t=(D t)∘( 1 + 2κ x^4), and we define its deviation from the continuum result via the difference Δ E = Q_t -Q_t = (D t)∘( 1 + 2 κ x^4) - ṫ_i (1+2κ x_i^4), which we plot in the bottom panel of <ref> using the operator (red circles) and the operator (blue crosses). We find also in the case of a non-linear potential that Q_t is preserved exactly in the interior of the simulation time domain. Up to machine precision its values in the interior also take on the correct continuum value. Similar to what we saw in the linear case, the last point deviates from the continuum value. It is reassuring to see that the absolute deviation at γ_f reduces already by an order of magnitude when going from a to an operator. One may now ask whether the deviation of Δ E from its continuum value at γ_f is in some way related to the fact that we use N_γ=32 points to discretize the world-line parameter. The answer is negative, as demonstrated in <ref>. Three different datasets are shown in <ref>, where for fixed ṫ_i the grid spacing in γ is changed. The green triangles denote the results for Δ E when using N_γ=16, the red circles N_γ=32 and the blue crosses N_γ=64. We have confirmed explicitly that in all cases the values of Q_t are preserved up to machine precision in the interior of the simulated time domain. It is indeed only the last point that shows a deviation and we see that the absolute magnitude of the deviation reduces as the grid is refined. For the next test, we instead increase N_γ together with ṫ_i to let the simulation proceed to larger values of time t. In the top panel of <ref> we plot the deviation of Q_t from its continuum value for three choices ṫ_i=1, N_γ=16 (green triangles), ṫ_i=4, N_γ=32 (red circles) and ṫ_i=8,N_γ=64 (blue crosses). As seen before in the interior of the simulated time domain, the values of Q_t remain exactly preserved and only the last point deviates. We find that the magnitude of the deviation in the last point changes only marginally with the length of the simulated trajectory. For completeness the corresponding trajectories x(t) are plotted in the bottom panel of <ref>. 
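The residual and charge check generalizes directly to an arbitrary potential through g_00(x) = 1 + 2V(x). A possible helper, here applied to V(x) = κ x^4 with κ = 1/2, is sketched below; it again assumes the plain SBP pair (H, D) from the earlier sketch rather than the regularized operators, and the function names are hypothetical.

```python
import numpy as np

def geodesic_residuals(t, x, D, V, dV):
    """Discrete geodesic residuals and Noether charge for g00(x) = 1 + 2 V(x).

    t, x : discrete trajectories on the gamma grid
    D    : SBP first-derivative operator
    V, dV: potential and its derivative, applied element-wise
    """
    g00 = 1.0 + 2.0 * V(x)
    dt = D @ t
    dGt = D @ (g00 * dt)                  # d/dgamma [ g00 dt/dgamma ]
    dGx = D @ (D @ x) + dV(x) * dt * dt   # x'' + (1/2) g00'(x) (t')^2, with g00' = 2 V'
    Qt = g00 * dt                         # constant in the interior if conserved
    return dGt, dGx, Qt

# quartic case V(x) = kappa x^4 with kappa = 1/2, as in the text
kappa = 0.5
V = lambda x: kappa * x**4
dV = lambda x: 4.0 * kappa * x**3

# given discrete solutions t_sol, x_sol from the variational step one would call
#   dGt, dGx, Qt = geodesic_residuals(t_sol, x_sol, D, V, dV)
# and compare max|Qt[1:-1] - Qt[0]| (interior) with |Qt[-1] - Qt[0]| (final point).
```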
Again let us emphasize that, as we will show below, the presence of this single deviating point does not spoil the convergence to the correct solution under grid refinement. The exact conservation of the quantity Q_t in the interior is remarkable, as e.g. the trajectory in the bottom panel of <ref> for ṫ_i=8,N_γ=64 shows sizable discretization artifacts (which disappear under grid refinement). We believe that it is due to the manifest time-translation invariance of the underlying action functional that the combined dynamics of x(γ) and t(γ), including the automatically generated non-equidistant time mesh, achieve conservation of the continuum quantity. The fact that the solutions we obtain fulfill the naively discretized geodesic equations and provide exact conservation of the continuum conserved charge in the interior of the simulated domain (see <ref>) bodes well for establishing its stability. Since in the IVP setting t(γ_f) is not given but emerges dynamically we cannot directly apply <ref> as proof of stability. However, as long as we can assume that the simulated time range (given a certain ṫ(γ_i) is finite, the linear bound of <ref> on the norm H_ BVP holds in the discrete setting. In turn we deduce that the solution cannot exhibit stronger than linear rise of the derivatives of either t(γ) or x(γ), implying stability of the approach. Let us now quantify the convergence properties of our variational approach using the results from the lowest order operator and those coming from the operator in <ref>. As in the linear potential case, in the top panel of <ref>, we select the most disadvantageous points for our convergence study, i.e. we compare the deviation from the continuum geodesic equations ϵ(γ_f)_x=| x[N_γ]-x_ true(1)| and ϵ(γ_f)_t=| t[N_γ]-t_ true(1)| at γ_f, exactly where the deviation from the continuum result was maximal in the top panel of <ref>. Also in the non-linear scenario we find that under refinement of the γ grid, the discrete solution monotonously approaches the true continuum values. Taking the results, the best fit to ϵ(γ_f)_x reveals a scaling with Δγ^2.08, while for ϵ(γ_f)_t an virtually identical Δγ^2.06 is obtained. For , we find that the convergence is slightly worse than in the linear potential case. As seen in the green circles plotted in <ref>, the asymptotic convergence regime is reached for 32 <N_γ <64. Once we are in that regime, we find that ϵ(γ_f)_x exhibits a scaling of Δγ^2.84, close to the expected value of three. On the other hand ϵ(γ_f)_t shows a consistent performance with a scaling of Δγ^3.13 already at N_γ=32. Let us now investigate the global convergence in the bottom panel of <ref> using the L_2 norm ϵ(L_2)_x=√(( x- x_ true)^ T.H.( x- x_ true)) and correspondingly ϵ(L_2)_t=√(( t- t_ true)^ T.H.( t- t_ true)), where x_ true and t_ true are taken from the numerical solution of the geodesic equations, used for comparison in <ref>. Reassuringly we find that the global convergence properties of our approach are better than indicated by those of the most disadvantaged point in the top panel of <ref>. Indeed we find that for the operators, the global scaling regime is reached already at N_γ=32, similarly to the case. In addition, the global convergence rate Δγ^β for operators lies consistently above β≥3 for both the x and t degrees of freedom. Again, these convergence result are in good agreement with those of our previous study <cit.>, where the standard action functional was discretized with time as independent parameter. 
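The convergence exponents quoted above can be extracted, for example, with a least-squares fit in log-log space or from successive refinement ratios, as in the following small helper; the example grid spacings in the comment are illustrative.

```python
import numpy as np

def convergence_order(dgs, errs):
    """Least-squares estimate of beta in err ~ C * dgamma^beta."""
    p = np.polyfit(np.log(np.asarray(dgs)), np.log(np.asarray(errs)), 1)
    return p[0]

def pairwise_orders(dgs, errs):
    """Observed order between successive refinements: log(e1/e2) / log(dg1/dg2)."""
    dgs, errs = np.asarray(dgs), np.asarray(errs)
    return np.log(errs[:-1] / errs[1:]) / np.log(dgs[:-1] / dgs[1:])

# e.g. for N_gamma = 16, 32, 64 on gamma in [0, 1] one would pass
# dgs = [1/15, 1/31, 1/63] together with the measured errors to read off beta.
```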
§ SUMMARY AND OUTLOOK In this study we have put forward a novel geometric variational approach for solving a large class of initial value problems, associated with the dynamics of point particles evolving under a generic x dependent potential V(x). Taking inspiration from the general theory of relativity, we consider both time and spatial coordinates of the point particle as dependent variables of a world-line parameter γ. We select a continuum action functional, which in the non-relativistic limit reduces to the standard action of point mechanics and whose critical point encodes a set of geodesic equations for x(γ) and t(γ). After doubling the degrees of freedom t_1,2 and x_1,2 we can relate the critical point of the corresponding doubled d.o.f. action with the classical trajectory. Using the concept of Killing vectors we identify conserved quantities, e.g. related to the continuum time translation invariance of the action. Deploying the regularized SBP operators originally introduced in <cit.>, we discretize the continuum action and add Lagrange multipliers to enforce the initial and connecting conditions between the doubled t_1,2 and x_1,2. The main novelty of our approach is that the discretized action retains the continuum symmetries, in particular the invariance under time translations. Exactly mimicking integration by part through the use of SBP finite difference operators entails that the derivation of the conserved charges associated with the Killing vectors of the system is also exactly mimicked in the discrete setting. I.e. the continuum conserved quantities Q_K retain their role even after discretization. The numerical results we obtain for both a linear and highly non-linear potential show that a discretization of time t now indeed emerges dynamically, adapting to the behavior of the spatial coordinate x. This is a concrete realization of an automatically generated non-equidistant mesh for the time coordinate, guided by our action functional with manifest continuum translation symmetry, i.e. an automatic AMR procedure. We have shown that except for the last two points along the discrete γ, the solution we obtain follows the naively discretized geodesic equations excellently. Even more importantly, the naively discretized counterpart Q_t of the continuum conserved quantity Q_t remains exactly preserved in the interior of the simulated time domain, where it even retains its continuum value exactly within machine precision. A small deviation from the values in the interior for Q_t is observed at the last step γ_f. This deviation however decreases both under grid refinement, as well as when increasing the order of the SBP operator. Point-wise, as well as global scaling analyses under grid refinement show that even in the presence of two points deviating from the naively discretized geodesic equations at the last two γ steps, the solution monotonously improves and manages to approach the true solution. When deploying the operator, we achieve consistent scaling in Δγ^β with β≳ 2 for both the linear and non-linear potential. For in case of a linear potential the dependence on the grid spacing follows the expected power law Δγ^β with β≳3 for all values of N_γ we inspected. For the non-linear potential, the scaling regime for point-wise convergence at the last point γ_f is reached with for 32<N_γ<64 with a slightly worse scaling of 2.84 ≤β≤ 3.13. 
Global convergence on the other hand shows consistent scaling at all N_γ we considered, with exponents β≥3, in agreement with the findings in our previous paper <cit.>, where the standard action functional was discretized with time as independent variable. This study presents a proof of principle that initial value problems can be discretized, while retaining continuum symmetries. Three future directions will be explored: we may ask how we can capture systems of ordinary differential equations that e.g. contain a term that is proportional to a first derivative in x with respect to time? To this end we must exploit the versatility of the doubled d.o.f. approach more thoroughly. Furthermore we will explore how the reparametrization invariant formulation can be applied to partial differential equations in higher dimensions, taking insight from how the non-relativistic action emerges from our relativistic starting point in <ref>. In addition, to better understand the origin of the single deviating value in the otherwise exactly preserved Q_t, we will develop a genuinely weak formulation of our approach, devoid of Langrange multipliers for enforcing initial and connecting conditions. We believe that the quest for retention of defining continuum properties in discretized systems is both conceptually and practically valuable. Not only does the preservation of symmetries place powerful physical constraints on the solution but in addition offers a mechanism for the automatic generation of optimal discrete spacetime grids to ensure conservation of the Noether charges associated with these symmetries. We hope that this study provides the community with a novel impulse in this direction. § ACKNOWLEDGEMENTS A. R. thanks Will Horowitz for inspiring and insightful discussions and Alex Nielsen for valuable insight on the general theory of relativity. A. R.  gladly acknowledges support by the Research Council of Norway under the FRIPRO Young Research Talent grant 286883. J. N. was supported by the Swedish Research Council grant nr. 2021-05484. The study has benefited from computing resources provided by UNINETT Sigma2 - the National Infrastructure for High Performance Computing and Data Storage in Norway under project NN9578K-QCDrtX "Real-time dynamics of nuclear matter under extreme conditions" § REGULARIZED SBP OPERATORS IN AFFINE COORDINATES We here briefly review the idea and some technical aspects of constructing null-space consistent regularized SBP operators using affine coordinates, developed in our study <cit.>. The goal of regularizing conventional SBP operators D, such as those defined e.g. in <ref> and <ref>, lies in removing their unphysical zero modes. These may appear as highly oscillatory eigenfunctions to D^ T with zero eigenvalue. To this end we take inspiration from regularization techniques developed for partial differential equations. There the concept of null-space consistent SBP operators has been discussed in detail (see e.g. <cit.>). For a differential equation, the boundary conditions may be enforced in the weak sense by adding a simultaneous approximation penalty term (SAT) <cit.>, which can be partially absorbed into the finite difference operator, lifting its zero modes. Take for example a simple discretized first order differential equation D u = λ u + σ_0 H^-1E_0( u - g), where the SAT penalty term has been added to the right-hand side. 
It features the matrix E_0= diag[1,0,…,0] that makes reference only to the first entry in the discretized functions u and g, the latter of which contains the initial value in its first entry g=(u_0,0,…,0). The SAT term also contains H^-1, i.e. Δ t^-1, which increases the strength of the penalty as Δ t→0. The parameter σ_0 in the SBP-SAT approach is tuned to satisfy stability properties and its optimal value is found to be σ_0=-1 (see e.g. ref. <cit.>), a choice we adopt in the following. In the differential equation context one conventionally absorbs the term proportional to u into a new D̃=D - σ_0 H^-1 E_0. This new operator is devoid of zero modes <cit.> and may be inverted to obtain the solution u. In the context of an action functional, such as <ref>, we do not have an equal sign around which we can move the SAT term. Instead we must incorporate the whole of the penalty term directly in a modified SBP operator. Since the penalty term in our example <ref> contains both a contribution that is proportional to the function u and a constant shift g it amounts to an affine transformation on u, which can be captured efficiently using affine coordinates. To this end let us write A̅[ b] x̅ = A x+ b, where A̅[ b] refers to a matrix A extended by an additional row and column with the value 1 placed in the lower right corner. The new column available in A̅[ b] is populated with the entries of b. The vector x̅ is nothing but x extended by one more entry with value unity. We will use this construction principle to define a regularized D̅ from our conventional SBP operator D. Since we have both x and t as independent degrees of freedom each with independent initial conditions x_i and t_i, we must define different shifts b^x and b^t respectively and thus end up with two different regularized SBP operators D̅_t and D̅_x. The shift terms are nothing but the constant part of the corresponding SAT term, absorbed into the SBP operator b^x= σ_0 H^-1 E_0 g^x, b^t= σ_0 H^-1 E_0 g^t. Here g^x= diag[x_i,0,⋯,0] and g^t= diag[t_i,0,⋯,0] encode the initial values for x and t respectively. As mentioned before, we choose the parameter σ_0=-1, whenever a penalty term is incorporated in D̅, motivated by the fact that in the conventional treatment of IVPs using the SBP-SAT approach, this value leads to a minimal discretization error (see e.g. ref. <cit.>). The resulting regularized SBP operators to be deployed on t_1,2 or x_1,2, are given explicitly in <ref> and <ref> respectively. Consistent with the affine coordinates used in the newly defined D̅_t and D̅_x, we also amend the discretized trajectories t_1,2 and x_1,2 by one more entry that is given the value one. In order to compute inner products in the space of discretized functions, we also have to modify the quadrature matrix H→H̅ by amending it by one row and column filled with zeros. We do not include the value one in the lower right corner in order to correctly account for the fact that the vectors appearing as arguments to the inner product contain an auxiliary final entry, which does not contribute to the value of the inner product and only facilitates the efficient implementation of shift operations. For more details on the affine coordinate regularization technique see <cit.>. § COMPETING INTERESTS The authors declare that they have no competing interests. § AUTHOR'S CONTRIBUTIONS * A. Rothkopf: formulation of the geometric variational approach, literature review, numerical experiments, writing, editing * J. 
Nordström: guidance on the formulation and implementation of SBP-based discretization schemes, literature review, editing
http://arxiv.org/abs/2307.04352v1
20230710052537
Phase Diagram and Crossover Phases of Topologically Ordered Graphene Zigzag Nanoribbons: Role of Localization Effects
[ "Hoang Anh Le", "In Hwan Lee", "Young Heon Kim", "S. -R. Eric Yang" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.stat-mech", "quant-ph" ]
Is the formation of a fractional charge <cit.> a necessary and sufficient condition<cit.> for topologically ordered insulators <cit.> such as fractional quantum Hall systems <cit.> and interacting disordered zigzag graphene nanoribbons <cit.> (ZGNRs) with anyonic fractional charges? This issue is related to whether the electron localization effects of doped systems destroy or enhance the topological order <cit.> and quantization of fractional charges. In a Laughlin state on a disordered sphere (no edges are present), the added electrons fractionalize and form a quasi-degenerate peak in the gap of the tunneling density of states (DOS). Electron localization <cit.> is expected to suppress the quantum fluctuations of these fractional charges of the quasi-degenerate gap states because these localized quasi-degenerate energy states are spatially separated from each other, as explained in Ref. <cit.> (if fractional charges are delocalized they overlap and become ill-defined). However, excessive disorder is considered detrimental to topological order. In this study, we investigate similar issues with ZGNRs <cit.>. A recent study showed that weak randomness (disorder) in ZGNRs can generate e^-/2 fractional charges <cit.>, which is a disorder effect closely related to the change in the disorder-free symmetry-protected topological insulator of ZGNRs to a topologically ordered <cit.> Mott-Anderson insulator <cit.>. These systems have a universal value for topological entanglement entropy (TEE) <cit.> in the weak-disorder regime <cit.>. The shape of entanglement spectrum is also found <cit.> to be similar to the DOS of the edge states, as expected of topologically ordered systems <cit.>. In interacting disordered ZGNRs, the gap is filled further by edge states <cit.> with an increasing strength of the disorder potential. (We call these states gap-edge states.) The ground states have the opposite edge site spins in the absence of disorder <cit.>. In the presence of disorder a spin reconstruction of the zigzag edges can take place <cit.>. Nonetheless, a topologically ordered ZGNR has two degenerate ground states, see Fig. <ref>(a). Mixed chiral edge states play an important role in this effect. A short-range disorder potential couples two nearly chiral gap-edge states residing on opposite zigzag edges <cit.>, and mixed chiral gap-edge states with split probability densities may form to display e^-/2 fractional semion charges <cit.> (see Fig.<ref>(b))(these states with midgap energies are solitonic with the half of the spectral weight originating from the conduction band and the other half from the valence band <cit.>). Note that a mixed chiral gap-edge state has a nonzero fractional probability at the A- and B-carbon sites. In other words, it is split into two nonlocal parts, each residing on the edges of the A or B sublattice.
The formation of mixed chiral gap-edge states is a nonperturbative instanton effect <cit.>. (They are similar to the bonding and antibonding states of a double quantum well.) It should be noted that well-defined e^-/2 fractional charges in the weak-disorder regime are emergent particles, i.e., they have new qualitative features and appear only in sufficiently long ribbons. In a weak-disorder regime, the number of fractional charges is proportional to the length of zigzag edges. Although weak disorder leads to formation of fractional charges strong disorder may destroy them. Similar to fractional quantum Hall systems, the topological order of a ZGNR is not immediately destroyed upon doping because electron localization partially suppresses quantum fluctuations between quasi-degenerate mid-gap states. The system may still be an insulator with a fractional charge. However, in the presence of strong disorder or doping, zigzag edge antiferromagnetism is expected to diminish, and thereby, the topological order. (Away from the low doping region, a disordered anyon phase with a distorted edge spin density wave was found  <cit.>.) These results suggest that there may be several topological phase transitions in the zigzag ribbons. What is the nature of these topological phase transitions and the physical properties of the ground states? Does the presence of a fractional charge imply a universal value of the TEE? Does the TEE become nonuniversal and vary <cit.> with an increase in the disorder strength or doping level? We explored the phase diagram of ZGNRs in the parameter space comprising on-site repulsion (U), disorder strength (Γ), and doping concentration (δ N/N_s) (δ N and N_s are, respectively, the number of doped electrons and the total number of sites in the ribbon). The competition between localization and electron interactions can have detrimental effects on the topological order and lead to several different phases, which includes crossover phases. We found a number of different phases with a topological order, quasi-topological order, and no order. Each of these phases is defined by the value of TEE β and its variance. These properties of β are related to the presence or absence of charge fractionalization and charge transfer correlations between zigzag edges. When both of these properties are present, in addition to correlations leading to spin-charge separation, β is universal, with small variances. In low-doped ZGNRs the interplay between electron localization and on-site repulsion contributes to the spatial separation of quasi-degenerate gap-edge states and protects the charge fractionalization against quantum fluctuations. There are two other types of phases with a quasi-topological order. We refer to these phases as crossover phases, in which the variance of β is significant. In one of these phases, both e^-/2 fractional charges and spin-charge separation are absent; however, the charge transfer (± e^-/2) correlations exist between the zigzag edges. Another phase may contain stable e^-/2 fractional charges but no charge transfer correlations between the zigzag edges. The ground state and zigzag edge properties of the various crossover and nontopological phases are explored. § MODEL HAMILTONIAN The following mechanisms can all lead to fractional charges: the coupling between the valleys mediated by short-range scatterers  <cit.> and the sublattice mixing facilitated by alternation of the nearest neighbor hopping parameters <cit.>. 
Here we will consider only the effect of short-range scatterers. The self-consistent Hartree-Fock (HF) approximation works well for graphene systems <cit.>. The HF Hamiltonian of a ZGNR with length L and width W is H_MF=-t∑_n.n.,σ c^†_i,σc_j,σ +∑_i,σ V_ic_i,σ^†c_i,σ +U∑_i[ n_i,↑⟨ n_i,↓⟩ +n_i,↓⟨ n_i,↑⟩-⟨ n_i,↓⟩⟨ n_i,↑⟩] + ∑_i [s_ix⟨ h_ix⟩+s_iy⟨ h_iy⟩], where the site index is given by i=(k,l) (k labels sites along the ribbon direction and l along the width), c^†_i, σ and n_i_,σ represent creation and occupation operators at site i with spin σ = {↑, ↓}, respectively (periodic boundary conditions are used along the ribbon direction). The site spin operators are given by s_i x (y) = 1/2( c^†_i, ↑, c^†_i, ↓) σ^x (y) ( c_i, ↑, c_i, ↓)^T, where σ^x (y) is the conventional Pauli matrix. The first term represents the kinetic energy with hopping parameter t, n.n implies the summation over the nearest-neighbor sites. The second term represents the short-range impurities parameterized by V_i, which is randomly chosen from the energy interval [ -Γ, Γ]. Throughout this study, the density of the impure sites is fixed at 10 %. U denotes the on-site repulsive strength. The last term in Eq. (<ref>) represents self-consistent “magnetic fields", where ⟨ h_i x⟩ = -2 U ⟨ s_i x⟩ and ⟨ h_i y⟩ = -2 U ⟨ s_i y⟩. (These fields are present only in doped ZGNRs. In the initial stage of the HF iteration, the values of ⟨ h_i x⟩ and ⟨ h_i y⟩ can be selected from small random numbers). In the presence of these fields, the HF eigenstates are mixed spin states. The HF single-particle states |k⟩ (k=1,2,…,2N_s) can be written as a linear combination of site states |i,σ⟩. In the language of second quantization this is equivalent to a_k=∑_i,σ A_k,i,σc_i,σ. These magnetic fields are rather small for the disorder strength and doping level considered in this study. There may be several nearly degenerate HF ground states. We select the HF initial ground state such that ⟨ n_i,σ⟩ represents a paramagnetic state with a small spin splitting. In addition, we choose small random numbers of ⟨ h_i x⟩ and ⟨ h_i y⟩ (they do not significantly affect the final results). The HF matrix dimension scales with the number of carbon atoms, which is typically <50000. The HF eigenstates and eigenenergies are self-consistently computed (this requires approximately 20 iterations). The TEE is computed using the disorder-averaging results of numerous disorder realizations. Here, we used gpu to speed up the solution of the HF matrix. The gpu calculations were intensive and performed on a supercomputer. In the presence of disorder and in the low-doping region, the obtained HF ground-state properties with solitons are in qualitative agreement with those of the density matrix renormalization group (DMRG) in the matrix product representation <cit.>. (In this work we do not investigate the high doping region. The DMRG result is difficult to obtain in this region because the computation is rather time consuming, and therefore, it is not possible to determine which nearly degenerate HF ground state is the true ground state.) Note that the Mott gap Δ is well-developed only when L≫ W (the excitation spectrum of a ribbon with L∼ W is similar to that of a gapless two-dimensional graphene sheet <cit.>). The localization properties of ZGNRs are unusual because both localized and delocalized states can exist <cit.>. 
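As an illustration of the self-consistency loop described above, the sketch below assembles the mean-field Hamiltonian and updates the occupations for the simplest collinear, undoped case. The in-plane fields ⟨h_ix⟩, ⟨h_iy⟩ and the constant double-counting term are omitted, the zigzag-ribbon geometry and periodic boundary conditions are assumed to be encoded in a precomputed neighbour list, and all names and parameters are illustrative rather than the authors' code.

```python
import numpy as np

def hf_step(neighbors, V, U, n_up, n_dn, n_elec_up, n_elec_dn, t=1.0):
    """One self-consistent Hartree-Fock update for the collinear, undoped case.

    neighbors : list of (i, j) nearest-neighbour bonds of the zigzag ribbon,
                assumed to already encode the periodic boundary conditions
    V         : on-site disorder, nonzero on ~10% of sites, drawn from [-Gamma, Gamma]
    n_up, n_dn: current mean occupations <n_{i,sigma}>
    The constant term -U <n_up><n_dn> only shifts the total energy and is omitted.
    """
    N = len(V)
    H0 = np.zeros((N, N))
    for i, j in neighbors:                       # kinetic (hopping) term
        H0[i, j] = H0[j, i] = -t
    H0 += np.diag(V)                             # short-range impurities

    e_up, psi_up = np.linalg.eigh(H0 + np.diag(U * n_dn))   # spin-up feels <n_dn>
    e_dn, psi_dn = np.linalg.eigh(H0 + np.diag(U * n_up))

    new_up = np.sum(np.abs(psi_up[:, :n_elec_up])**2, axis=1)
    new_dn = np.sum(np.abs(psi_dn[:, :n_elec_dn])**2, axis=1)
    return new_up, new_dn, (e_up, e_dn), (psi_up, psi_dn)

# the update is iterated to self-consistency (about 20 iterations in the text),
# optionally with linear mixing, e.g. n_up = 0.5 * (n_up + new_up).
```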
Gap-edge states with energy |E|∼Δ/2 (Δ represents the Mott gap in the absence of disorder) can have localization lengths ∼ W and overlap significantly with each other. § EFFECTS OF ANDERSON LOCALIZATION Anderson localization plays a crucial role in the quantization of fractional charges <cit.>. The effects of Anderson localization can be described using self-consistent Hartree–Fock approximation (HFA)  <cit.>. The first important effect is as follows: Anderson localization reduces the correlation length. Therefore, in comparison to that of the nondisordered case, we can use a smaller Wilson loop <cit.> to calculate the TEE of disordered interacting ZGNRs . The correlation length may be computed from the entanglement entropy of an area A. It is computed from the correlation function, which is also known as the reduced density matrix of region A. The HF correlation function <cit.> between i∈ A and j∈ A decays exponentially as a function of distance x between i and j C(x) = C_i↑, j↑ = ⟨Ψ| c^†_i↑ c_j↑|Ψ⟩∼exp(- | x|/ξ), where Ψ represents the HF ground state of the ZGNR and ξ represents the correlation length. By inverting the relation given in Eq.(<ref>), we can write c_iσ as a linear combination of a_k. This makes it straightforward to compute C(x). To compute it accurately, the area must be larger than the correlation length <cit.>. We compute the correlation function and determine the correlation length ξ, as shown in Fig. <ref>. For the disordered case Γ≠ 0, the correlation length is obtained by averaging over several disorder realizations. The Anderson localization reduces the correlation length compared to that of disorder-free ribbons, as shown in Fig. <ref>(b). (Disorder-free ZGNRs have a large correlation length for small U.) In contrast, doping increases the correlation length, as shown in Fig. <ref>(c). Another important effect of Anderson localization in the presence of on-site repulsion is that quasi-degenerate localized states are spatially separated <cit.>, leading to well-defined fractional charges <cit.>. (In low-doped ZGNRs added electrons fractionalize and form a narrow peak in the DOS near E=0 consisting of quasi-degenerate localized states  <cit.>.) The probability densities of such two mid-gap states carrying fractional charges are shown in Fig. <ref>(d). These gap-edge states are mixed chiral states <cit.>, whose probability densities peak at the two edges and rapidly decays inside the ribbon. Note that these states do not overlap with each other. Non-interacting electrons of disordered ZGNRs also display mixed chiral states near E=0. However, although the overlap between nearly degenerate states in the weak-disorder regime is small, it is not negligible. Thus well-defined fractional charges do not readily form in non-interacting disordered ZGNRs. § PHASE DIAGRAM Topological order can be detected by investigating the TEE (β) <cit.> within the HFA <cit.>. We first select a set of values for (L, W, w, l_zig,l_arm), as defined in Fig. <ref>(a), to compute β. Next, these quantities are increased by the same ratio and a new β is computed. This process is repeated several times (see Ref.  <cit.> for details). We apply finite-size scaling analysis to extract the value of the TEE in the limit L→∞ (see Fig. <ref>(b)). We divide the parameter space (Γ,U,δ N) into three-dimensional grid points, and at each grid point, we compute β (see Fig. <ref>(c)). The three-dimensional phase diagram obtained is shown in Fig. <ref>(d). 
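The correlation-function based quantities described above (the reduced correlation matrix of a region, its entanglement entropy for a Slater-determinant state, and the correlation length ξ) can be evaluated along the following lines; the region choices, site orderings and the simple exponential fit are illustrative assumptions.

```python
import numpy as np

def correlation_matrix(psi_occ):
    """HF correlation matrix C_ij = <c_i^dagger c_j> built from the occupied
    single-particle orbitals (columns of psi_occ)."""
    return psi_occ.conj() @ psi_occ.T

def entanglement_entropy(C, region):
    """Von Neumann entropy of a region for a Slater-determinant state,
    obtained from the eigenvalues of the restricted correlation matrix."""
    CA = C[np.ix_(region, region)]
    nu = np.clip(np.linalg.eigvalsh(CA), 1e-12, 1.0 - 1e-12)
    return float(-np.sum(nu * np.log(nu) + (1.0 - nu) * np.log(1.0 - nu)))

def correlation_length(C, row_sites, a=1.0):
    """Estimate xi from the exponential decay of |C(x)| along one carbon line.

    row_sites : site indices of the line, ordered by position along the ribbon
    a         : lattice spacing (illustrative)
    """
    x = a * np.arange(len(row_sites))
    c = np.abs(C[row_sites[0], row_sites])
    mask = c > 1e-12
    slope, _ = np.polyfit(x[mask], np.log(c[mask]), 1)
    return -1.0 / slope
```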
We find that β can have three types of values: (i) A universal value in the topologically ordered phase, (ii) nonuniversal values of β with large variances in the crossover phases, and (iii) a zero value of β in the normal-disordered phase. Projections of the phase diagram, namely U-Γ, Γ-δ N, and U–δ N planes, are shown in Figs. <ref>(e)-(g). In undoped ZGNRs a TO phase is found in regions Γ/U≲ 1 and U≲ t, see Fig. <ref>(c). The topological phase transition into the symmetric protected phase at Γ=0 is abrupt, consistent with the result of Ref. <cit.> (The TEE of the symmetric phase is zero). There are also other topological phase transitions but they are smooth transitions with crossover regions <cit.>. Figs. <ref>(e)-(g) display the presence of crossover regimes lying beyond the TO phase with an increase in the disorder, doping, and interaction strength. The phase boundaries between topologically ordered and normal phases are “blurred”, which indicate the presence of crossover phases (there are two types of crossover (CO) phases, labeled COI and COII). The numerical results of the TEE is shown in Fig. <ref>(c). The error bars in this figure include, besides random fluctuations caused by disorder, the uncertainties that occur in the extrapolation process of the finite scaling analysis. As Γ/U increases, β decreases (see the red line in Fig. <ref>(c)). The value of the TEE thus changes across a crossover phase. In such a phase, β has a large variance, but the average values are not zero, which implies that the topological order is not completely destroyed. In this regime, the TEE becomes nonuniversal and decays. In crossover phases charge transfer correlations between the opposite zigzag edges are present but fractional charges are not well defined, or vice versa. One can use a different but equivalent procedure to determine the phase diagram. We verified that the same phase diagram can be obtained by analyzing the presence of fractional charges and nonlocal correlations between the opposite zigzag edges. At each grid point (Γ,U,δ N) in the parameter space, we find the ground state and investigate whether the gap-edge states display fractional charges and whether nonlocal correlations exist between the opposite zigzag edges. By ultilizing this method, we have successfully recovered the phase diagram shown in Fig. <ref>(d). § TOPOLOGICALLY ORDERED PHASE The universal region was investigated in Ref.  <cit.>, and therefore, we do not describe this phase in detail here. However, we would like to mention some new results. We elucidate the nature of correlations in topologically ordered ZGNRs. A ZGNR is shown in Fig. <ref>(a). It consists of 8 carbon lines labeled l = 1, 2, 3, 4, 5, 6, 7, 8. In each pair of carbons lines (1, 8), (2, 7), (3, 6), and (4, 5), an increase/decrease in the occupation number of one line is correlated with a decrease/increase in that of the other line (see, for example, lines 1 and 8 in Fig. <ref>(c)). It is not only the zigzag edges that are correlated in this way, but also other carbon lines inside the ribbon that are away from the edges. The corresponding site spins of the ribbon are shown in Fig.<ref> (d). Mixed chiral gap states contribute to this effect (these gap-edge states can decay slowly from the zigzag edges, unlike the fractional edge states. A schematic picture of a mixed chiral state is shown in Fig. <ref>(b)). 
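One simple way to quantify the anticorrelation between mirror carbon lines described above is to correlate the disorder-induced occupation changes of paired lines, e.g. (1, 8); the diagnostic below is only an illustration of that qualitative statement, not necessarily the measure used in the paper.

```python
import numpy as np

def transfer_correlation(dn_a, dn_b):
    """Pearson correlation between disorder-induced occupation changes on two
    mirror carbon lines; values near -1 indicate that charge lost on one line
    is gained on the other."""
    da = dn_a - dn_a.mean()
    db = dn_b - dn_b.mean()
    return float(np.sum(da * db) / np.sqrt(np.sum(da**2) * np.sum(db**2)))

# with dn_l(k) = n_l(k)[disordered] - n_l(k)[clean] along each carbon line l,
# one would evaluate transfer_correlation(dn_1, dn_8), transfer_correlation(dn_2, dn_7), ...
```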
Changes in the occupation numbers δ n_i,↑ and δ n_i,↓ of an edge often coincide at nearly the same values as k, which labels the site position along the ribbon direction. This effect can lead to n_i,↑≈ n_i,↓ of the occupation numbers in the presence of disorder, resulting in s_i≈ 0, i.e., the appearance of spin-charge separation around a site on one of the edges <cit.>. The following points should be also noted. The results in Fig.<ref>(c) show that the variance of β decreases in the singular limit Γ/U→ 0. (Additional numerical results confirm this conclusion.) This result is consistent with the previous finding that fractional charge of a midgap state becomes accurate in the weak disorder regime and in the thermodynamic limit (see Ref.<cit.>). In the opposite limit Γ/U ≫ 1, the value of the TEE is non-universal and decreases with increasing U (see Fig. <ref>(c)). In addition, the functional dependence of DOS on E in the universal region is given by an exponentially suppressed function, a linear function <cit.>, or something in between. The actual shape of the DOS is determined by the competition between the strength of disorder and the on-site repulsion <cit.>; for example, the DOS is linear for (U,Γ, δ N) = (2t, 0.5t, 0), but it is exponentially suppressed in the weak disorder limit. § CROSSOVER PHASE I We describe in detail the properties of undoped ZGNRs in the COI phase, where U ≳ t and U≳Γ (the on-site repulsion U is the dominant energy in this phase). The TO phase gradually changes into the COI phase as U increases, as illustrated in Fig. <ref>(e). In this phase, β is nonzero, but its variance is significant, as shown in Fig <ref>(a). This phase has the following properties: (i) The disorder-induced change in the edge occupation numbers δ n_i,↑= 1/2 for one type of spin σ is entirely transferred to the opposite edge, i.e., the zigzag edges are correlated in a nontrivial manner (see Fig. <ref>(c) for Γ/U = 0.17). However, the site positions k on the opposite zigzag edges, where changes in δ n_i,↑ and δ n_i,↓ occur , do not coincide. (In contrast, these positions are correlated at the nearly same values of k in the TO phase, as we mentioned before.) We believe that these edge transfer correlations between zigzag edges change the ground state entanglement pattern and yield a nonzero fluctuating TEE. The edge charge transfer correlations become weaker when the disorder is stronger (see Fig. <ref>(d) for Γ/t=4), leading to a smaller value of β (see Fig. <ref>(c)). (ii) Although the zigzag edge changes are fractional, δ n_i,↑= 1/2 (Figs. <ref>(b) and (c)), the A- and B-probability densities of the mixed chiral states responsible for this feature overlap, see Fig. <ref>(c). Thus, fractional charges are ill-defined. (iii) For spin-charge separation to be present, charge transfers for both spins must occur at the same values k. These effects are not observed in the COI phase. Note that, the condition S_z=1/2(n_i,↑-n_i,↓)=0 at site i is not sufficient for spin-charge separation. To fulfill the conditions, well-defined fractional charges must exist. § CROSSOVER PHASE II For undoped ZGNRs, there is another CO phase for Γ≫ U but U/t≲ 1. We call this phase COII where the disorder strength Γ is the dominant energy. As Γ increases, the TO phase undergoes a gradual transition into the COII phase, as demonstrated in Figure <ref>(e). Concurrently, the gap is progressively filled with states, as depicted in the upper graph of Figure <ref>(a). Similar to the COI phase (Fig. 
<ref>(a)) β is finite with a significant variance. But there are no charge-transfer correlations between the zigzag edges (see the lower graph of Figure <ref>(a)). However, some fractional charges may exist, see Fig. <ref>(b). This is consistent with the following obtained results: (i) Some changes in the edge occupation number are δ n_i,σ≈±1/2. (ii) There are gap-edge states with q_A ≈ 1/2. (Here q_A=∑_i∈ A|ψ_iσ(E)|^2, where ψ_iσ(E) is the HF eigenstate with energy E, see Ref.<cit.>. The probability densities are summed over all sites of the A sublattice.) However, the variance of q_A in the energy interval [E-δ E,E+δ E] is large because q_A varies substantially in this interval, as shown in Fig. <ref>(a). (But the disorder averaged mean charge value of the states in this energy interval is e^-/2.) Despite this, a fractional charge of a state in the interval [E-δ E,E+δ E] near E=0 does not overlap significantly with the probability densities of other fractional and non-fractional states in the same energy interval, provided that δ E is small (for U∼ t and Γ∼ t this happens when δ E∼ 0.01t). We believe that the interplay between localization and on-site repulsion is responsible for this effect. However, since a gap is absent, the fractional charges are less stable compared to the TO phase. We checked that several states in the same value of δ E overlap in the absence of on-site repulsion. Thus far, we investigated the undoped case. Upon doping, the disorder-free ZGNRs exhibit edge spin density waves instead of edge ferromagnetism of undoped ribbons. If disorder is added to a doped ZGNR, the spin waves become distorted <cit.>: There is a topological phase transition from modulated ferromagnetic edges at zero doping to distorted spin-wave edges at finite doping. Our results indicated that, when doping is substantial, this phase is also a COII phase. The dependence of the mean value of q_A of the states in the mid-gap peak on the number of doped electrons is shown in Fig. <ref>(d) (the DOS shows a sharp peak at the mid-gap energy, see Fig. <ref>(c)). At a low doping concentration, the disorder-averaged value of q_A is close to 0.5. The states in the mid-gap peak display well-defined fractional charges, as we discussed below Fig.<ref> (note that the width of the midgap peak is δ E∼ 0.005t). As the doping concentration increases further, q_A significantly deviates from 0.5, and simultaneously, the DOS mid-gap peak starts to decrease <cit.>. These findings imply that even though fractional charges can be found, their number decreases with an increase in doping. The gradual change in q_A as a function of δ N / N_sindicates that the transition from the phase of the distorted ferromagnetic edge to the phase of the distorted edge spin-wave is not sharp. Figure <ref>(e) shows how β decreased with an increase in δ N / N_s. For a large δ N / N_s, it is computationally demanding to calculate β because the correlation length is expected to be longer (see Fig. <ref>(c)) in comparison to that of undoped ZGNRs. § STRONGLY DISORDERED AND STRONGLY REPULSIVE PHASES We discuss the strongly disordered phase in region Γ/U≫ 1 (see Fig. <ref>(e)). The topological order is destroyed once the disorder strength reaches a sufficiently large value (e.g., β = 0 at (U,Γ, δ N) = (t,15t, 0)). In this region the edge charge-transfer correlations and charge fractionalization are not well-defined, which implies that TEE is zero. In Fig. 
<ref>(a), site occupation numbers in weak (Γ = 0.03 t) and strong (Γ = 15 t) disorder regimes are shown side by side to highlight the difference, where the ones in strong disorder regime highly fluctuate from site to site. Edge magnetization is zero almost everywhere (see Fig. <ref>(b)). The occupation numbers display sharp values of n_i,σ=1 at some sites (see Fig. <ref>(b)) (they were also present in the DMRG calculations of disordered ribbons, see Ref.<cit.>). Another phase, that is, the strongly repulsive phase (U≫ t) with no fractional charges and zigzag edge correlations, is shown in Fig. <ref>(e). In this case, β≈ 0. The q_A–E diagram in Fig. <ref>(c) displays the nonperturbative nature of disorder in this regime: the values q_A are scattered between 0 and 1 in the limit Γ→0, whereas they are restricted to the four solid lines at Γ = 0. Also, the (E, q_A) distribution indicates that in a strongly repulsive regime, even with the presence of disorder, there is still a large energy gap. There are no states with q_A≈ 1/2 near the mid-gap energy, as shown in Fig. <ref>(c). The A and B components of the wave function of the states with q_A≈ 1/2 near the gap edges ±Δ/2 overlap (see Fig. <ref>(d)). For a stronger disorder (larger values of Γ), the gap is filled with states such that the DOS is finite at E = 0, as shown in Fig. <ref>(e). The main physics of this phase is illustrated by investigating the zigzag edge structure: the occupation numbers are n_i,σ=1 or 0 so charge transfers are one (δ n_i,σ=± 1) in the strongly repulsive phase (the total site occupation number of each site is n_i≈ 2, 1, or 0 despite a strong U). No transfer of fractional charges was observed between zigzag edges. This is because mixed chiral gap edge states are not present. Note that the edge magnetization displays sharp domain walls, as indicated in Figs. <ref>(f)-(g). § SUMMARY AND DISCUSSION We computed the phase diagram of zigzag graphene nanoribbons as a function of the on-site repulsion U, doping δ N, and disorder strength Γ. We identified the universal, crossover, strongly disordered, and strongly repulsive phases. Each phase of the phase diagram was defined by the TEE value and its variance. We also investigated how the values of the TEE are related to the following physical properties: the presence of charge fractionalization and the edge charge transfer correlations between the opposite zigzag edges. When both properties are present, in addition to correlations leading to spin-charge separation, β was universal. If only one of these properties is present, β was nonuniversal and its variance was significant. However, when both were absent, β was approximately zero. In addition, we found a strongly repulsive phase with zero TEE in large on-site repulsion and weak disorder limits. Its ground state contains abrupt kinks in zigzag edge magnetizations without charge fractionalization, which is a consequence of the singular perturbative nature of the disorder potential. There is another phase with zero TEE, i.e., the strongly disordered phase in regime Γ≫ U. In this phase, the edge site occupation numbers fluctuate highly from site to site, and antiferromagnetic coupling between the two edges are nearly destroyed. Each phase of the phase diagram has a different zigzag-edge structure. We also investigated the effect of the interplay between localization and on-site repulsion on the charge quantization. 
In low-doped and/or weakly disordered ZGNRs this interplay contributes to the spatial separation of quasi-degenerate gap-edge states and protects the charge fractionalization against quantum fluctuations. Even in the presence of moderately strong disorder charge fractionalization is not completely destroyed. We briefly discuss some experimental implications. It would be interesting to observe the presence of nonlocal charge transfers between the zigzag edges of the COI phase. This can be investigated by measuring correlations between the edge site occupation numbers using a scanning tunneling microscope <cit.>. In the COII phase, fractional e^-/2 edge charges are present; however, unusual transport and magnetic susceptibility properties are not expected because spin-charge separation is not present. (In contrast, the TO phase is expected to display unusual transport and magnetic susceptibility because of spin-charge separation <cit.>.) Similarly, scanning tunneling microscopy can be used to verify the predicted edge occupation numbers of the strongly repulsive and strongly disordered phases. In addition, investigation of tunneling between zigzag edges, as in fractional quantum Hall bar systems <cit.>, may be fruitful. 10 rm url<#>1urlprefixURL doiprefixDOI: Lei13 authorLeinaas, J. M. & authorMyrheim, J. journaltitleOn the theory of identical particles. Nuovo Cimento B volume37, pages1–23, <10.1007/BF02727953> (year1977). Wilczek03 authorWilczek, F. journaltitleQuantum mechanics of fractional-spin particles. Phys. Rev. Lett. volume49, pages957–959, <10.1103/PhysRevLett.49.957> (year1982). Arovas authorArovas, D., authorSchrieffer, J. R. & authorWilczek, F. journaltitleFractional statistics and the quantum hall effect. Phys. Rev. Lett. volume53, pages722–723, <10.1103/PhysRevLett.53.722> (year1984). Nakamura01 authorNakamura, J., authorLiang, S., authorGardner, G. C. & authorManfra, M. J. journaltitleDirect observation of anyonic braiding statistics. Nature Physics volume16, pages931–936, <https://doi.org/10.1038/s41567-020-1019-1> (year2020). Barto1 authorBartolomei, H. et al. journaltitleFractional statistics in anyon collisions. Science volume368, pages173–177, <10.1126/science.aaz5601> (year2020). GV2019 authorGirvin, S. M. & authorYang, K. titleModern condensed matter physics (publisherCambridge University Press, addressCambridge, year2019). Pach authorPachos, J. K. titleIntroduction to Topological Quantum Computation (publisherCambridge University Press, addressCambridge, year2012). Wen11 authorWen, X.-G. journaltitleColloquium: Zoo of quantum-topological phases of matter. Rev. Mod. Phys. volume89, pages041004, <10.1103/RevModPhys.89.041004> (year2017). NPlaugh authorLaughlin, R. B. journaltitleAnomalous quantum hall effect: An incompressible quantum fluid with fractionally charged excitations. Phys. Rev. Lett. volume50, pages1395–1398, <10.1103/PhysRevLett.50.1395> (year1983). Yang authorS.-R. Eric Yang. titleTopologically Ordered Zigzag Nanoribbon (publisherWorld Scientific, Singapore, year2023). Kitaev11 authorKitaev, A. & authorPreskill, J. journaltitleTopological entanglement entropy. Phys. Rev. Lett. volume96, pages110404, <10.1103/PhysRevLett.96.110404> (year2006). Levin11 authorLevin, M. & authorWen, X.-G. journaltitleDetecting topological order in a ground state wave function. Phys. Rev. Lett. volume96, pages110405, <10.1103/PhysRevLett.96.110405> (year2006). Haldane191 authorLi, H. & authorHaldane, F. D. M. 
journaltitleEntanglement spectrum as a generalization of entanglement entropy: Identification of topological order in non-abelian fractional quantum hall effect states. Phys. Rev. Lett. volume101, pages010504, <10.1103/PhysRevLett.101.010504> (year2008). Altshuler authorAltshuler, B. titleInductory anderson localization. In booktitleAdvanced Workshop on Anderson Localization, Nonlinearity and Turbulence: a Cross-Fertilization (organizationInternational Centre for Theoretical Physics, year2010). GV2000 authorGirvin, S. M. titleThe quantum hall effect: novel excitations and broken symmetries. In booktitleAspects topologiques de la physique en basse dimension. Topological aspects of low dimensional systems, pages53–175 (publisherSpringer, year1999). Fujita authorFujita, M., authorWakabayashi, K., authorNakada, K. & authorKusakabe, K. journaltitlePeculiar localized state at zigzag graphite edge. J. Phys. Soc. Jpn. volume65, pages1920–1923, <10.1143/JPSJ.65.1920> (year1996). Brey2006 authorBrey, L. & authorFertig, H. A. journaltitleElectronic states of graphene nanoribbons studied with the dirac equation. Phys. Rev. B volume73, pages235411, <10.1103/PhysRevB.73.235411> (year2006). Lyang authorYang, L., authorPark, C.-H., authorSon, Y.-W., authorCohen, M. L. & authorLouie, S. G. journaltitleQuasiparticle energies and band gaps in graphene nanoribbons. Phys. Rev. Lett. volume99, pages186801, <10.1103/PhysRevLett.99.186801> (year2007). Pisa1 authorPisani, L., authorChan, J. A., authorMontanari, B. & authorHarrison, N. M. journaltitleElectronic structure and magnetic properties of graphitic ribbons. Phys. Rev. B volume75, pages064418, <10.1103/PhysRevB.75.064418> (year2007). Cai2 authorRuffieux, P. et al. journaltitleOn-surface synthesis of graphene nanoribbons with zigzag edge topology. Nature volume531, pages489–492, <10.1038/nature17151> (year2016). Kolmer authorKolmer, M. et al. journaltitleRational synthesis of atomically precise graphene nanoribbons directly on metal oxide surfaces. Science volume369, pages571–575, <10.1126/science.abb8880> (year2020). Brey editorBrey, L., editorSeneor, P. & editorTejeda, A. (eds.) titleGraphene Nanoribbons. 2053-2563 (publisherIOP Publishing, year2019). Yang2019 authorJeong, Y. H., authorS.-R. Eric Yang & authorCha, M.-C. journaltitleSoliton fractional charge of disordered graphene nanoribbon. Journal of Physics: Condensed Matter volume31, pages265601, <10.1088/1361-648X/ab146b> (year2019). yang1 authorS.-R. Eric Yang. journaltitleSoliton fractional charges in graphene nanoribbon and polyacetylene: similarities and differences. Nanomaterials volume9, pages885, <10.3390/nano9060885> (year2019). Yang2020 authorS.-R. Eric Yang, Cha, M.-C., authorLee, H. J. & authorKim, Y. H. journaltitleTopologically ordered zigzag nanoribbon: e/2 fractional edge charge, spin-charge separation, and ground-state degeneracy. Phys. Rev. Research volume2, pages033109, <10.1103/PhysRevResearch.2.033109> (year2020). Dob authorDobrosavljevic, V., authorTrivedi, N. & authorValles, J. M., Jr. titleConductor-Insulator Quantum Phase Transitions (publisherOxford University Press, year2012). Belitz authorBelitz, D. & authorKirkpatrick, T. R. journaltitleThe anderson-mott transition. Rev. Mod. Phys. volume66, pages261–380, <10.1103/RevModPhys.66.261> (year1994). Byczuk authorByczuk, K., authorHofstetter, W. & authorVollhardt, D. journaltitleCompetition between anderson localization and antiferromagnetism in correlated lattice fermion systems with disorder. Phys. Rev. Lett. 
volume102, pages146403, <10.1103/PhysRevLett.102.146403> (year2009). Yang2021 authorKim, Y. H., authorLee, H. J. & authorS.-R. Eric Yang. journaltitleTopological entanglement entropy of interacting disordered zigzag graphene ribbons. Phys. Rev. B volume103, pages115151, <10.1103/PhysRevB.103.115151> (year2021). Yang2022 authorKim, Y. H., authorLee, H. J., authorLee, H.-Y. & authorS.-R. Eric Yang. journaltitleNew disordered anyon phase of doped graphene zigzag nanoribbon. Scientific Reports volume12, pages14551, <10.1038/s41598-022-18731-6> (year2022). Efros authorÉfros, A. L. & authorShklovskii, B. I. journaltitleCoulomb gap and low temperature conductivity of disordered systems. Journal of Physics C: Solid State Physics volume8, pagesL49–L51, <10.1088/0022-3719/8/4/003> (year1975). Lima2012 authorLima, L. R. F., authorPinheiro, F. A., authorCapaz, R. B., authorLewenkopf, C. H. & authorMucciolo, E. R. journaltitleEffects of disorder range and electronic energy on the perfect transmission in graphene nanoribbons. Phys. Rev. B volume86, pages205111, <10.1103/PhysRevB.86.205111> (year2012). Canri authorCanright, G. S., authorGirvin, S. M. & authorBrass, A. journaltitleSuperconductive pairing of fermions and semions in two dimensions. Phys. Rev. Lett. volume63, pages2295–2298, <10.1103/PhysRevLett.63.2295> (year1989). Heeger authorHeeger, A. J., authorKivelson, S., authorSchrieffer, J. R. & authorSu, W. P. journaltitleSolitons in conducting polymers. Rev. Mod. Phys. volume60, pages781–850, <10.1103/RevModPhys.60.781> (year1988). Zeng authorZeng, B., authorChen, X., authorZhou, D.-L. & authorWen, X.-G. titleQuantum Information Meets Quantum Matter (publisherSpringer New York, addressSpringer, year2019). Stau authorStauber, T. et al. journaltitleInteracting electrons in graphene: Fermi velocity renormalization and optical response. Phys. Rev. Lett. volume118, pages266801, <10.1103/PhysRevLett.118.266801> (year2017). Neto authorCastro Neto, A. H., authorGuinea, F., authorPeres, N. M. R., authorNovoselov, K. S. & authorGeim, A. K. journaltitleThe electronic properties of graphene. Rev. Mod. Phys. volume81, pages109–162, <10.1103/RevModPhys.81.109> (year2009). Yang1995 authorS.-R. Eric Yang, authorMacDonald, A. H. & authorHuckestein, B. journaltitleInteractions, localization, and the integer quantum hall effect. Phys. Rev. Lett. volume74, pages3229–3232, <10.1103/PhysRevLett.74.3229> (year1995). Mac1 authorS.-R. Eric Yang & authorMacDonald, A. H. journaltitleCoulomb gaps in a strong magnetic field. Phys. Rev. Lett. volume70, pages4110–4113, <10.1103/PhysRevLett.70.4110> (year1993). Peschel119 authorPeschel, I. journaltitleCalculation of reduced density matrices from correlation functions. Journal of Physics A: Mathematical and General volume36, pagesL205, <10.1088/0305-4470/36/14/101> (year2003). Bal authorJiang, H.-C., authorWang, Z. & authorBalents, L. journaltitleIdentifying topological order by entanglement entropy. Nature Phys volume8, pages902–905, <10.1038/nphys2465> (year2012). Andrei authorAndrei, E. Y., authorLi, G. & authorDu, X. journaltitleElectronic properties of graphene: a perspective from scanning tunneling microscopy and magnetotransport. Reports on Progress in Physics volume75, pages056501, <10.1088/0034-4885/75/5/056501> (year2012). Chung authorChung, T.-C., authorMoraes, F., authorFlood, J. D. & authorHeeger, A. J. journaltitleSolitons at high density in trans-(CH)_x: Collective transport by mobile, spinless charged solitons. Phys. Rev. 
§ ACKNOWLEDGEMENTS S.R.E.Y. was supported by the Basic Science Research Program of the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (MSIT) NRF-2021R1F1A1047759. Two other grants are also acknowledged: BK21 FOUR (Fostering Outstanding Universities for Research) and KISTI Supercomputing Center with supercomputing resources including technical support KSC-2022-CRE-0345. § DATA AVAILABILITY On reasonable request, the corresponding author will provide all relevant data in this paper. § CODE AVAILABILITY On reasonable request, the corresponding author will provide all numerical codes in this paper. § AUTHOR CONTRIBUTIONS L.H.A., Y.H.K. and I.H.L. performed the HF calculations. S.R.E.Y. conceived the project and supervised the study. All authors contributed to the writing of the manuscript. § COMPETING INTERESTS The authors declare no competing interests.
http://arxiv.org/abs/2307.05248v1
20230711132918
Destructive effect of fluctuations on the performance of a Brownian gyrator
[ "Pascal Viot", "Aykut Argun", "Giovanni Volpe", "Alberto Imparato", "Lamberto Rondoni", "Gleb Oshanin" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech" ]
Sorbonne Université, CNRS, Laboratoire de Physique Théorique de la Matière Condensée (UMR CNRS 7600), 4 Place Jussieu, 75252 Paris Cedex 05, France Physics Department, University of Gothenburg, 412 96 Gothenburg Sweden Physics Department, University of Gothenburg, 412 96 Gothenburg Sweden Department of Physics and Astronomy, University of Aarhus, Ny Munkegade, Building 1520, DK–8000 Aarhus C, Denmark Dipartimento di Scienze Matematiche, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy INFN, Sezione di Torino, Torino, Italy Sorbonne Université, CNRS, Laboratoire de Physique Théorique de la Matière Condensée (UMR CNRS 7600), 4 Place Jussieu, 75252 Paris Cedex 05, France The Brownian gyrator (BG) is a minimal model of a nano-engine performing a rotational motion, judging solely upon the fact that in non-equilibrium conditions its torque, angular momentum L and angular velocity W have non-zero mean values. For a time-discretized model, we calculate the previously unknown probability density functions (PDFs) of L and W. We find that when the time-step δ t → 0, both PDFs converge to uniform distributions with diverging variances. For finite δ t, the PDF of L has exponential tails and all moments, but its noise-to-signal ratio is generically much bigger than 1. The PDF of W exhibits heavy power-law tails and its mean W is the only existing moment. The BG is therefore not an engine in common sense: it does not exhibit regular rotations on each run and its fluctuations are not only a minor nuisance. Our theoretical predictions are confirmed by numerical simulations and experimental data. We discuss some improvements of the model which may result in a more systematic behavior. Destructive effect of fluctuations on the performance of a Brownian gyrator Gleb Oshanin August 12, 2023 =========================================================================== Introduction. - The fundamental working principles of macroscopic thermal engines, producing deterministic translational or rotational motion of large objects, are well-established in classical thermodynamics. At the same time, the latter does not explain workings of different kinds of microscopic motors, encountered in molecular or cellular biology, as well as in other rather diverse biophysical systems. Such tiny machines operate while effectively coupled to a single heat bath (or even several heat baths at the same time), making the concepts of macroscopic classical thermodynamics inapplicable. Nonetheless they work and are sufficiently efficient, e.g., to control transport of organelles throughout the cell and to produce rotary motion of bacterial flagella. Considerable progress in understanding general concepts and various aspects of the performance of microscopic motors, as well as necessary and sufficient conditions for their work has been achieved during the last decades <cit.>. In parallel, various case-by-case analyses elucidated precise mechanisms to rectify fluctuations produced by thermal baths and to convert their energy into useful work. However, most of the available analyses concern the average characteristics and performance of molecular motors, such as mean velocities or mean rotational frequencies. Still little is known about the magnitude of fluctuations around these values and their impact on the performance of microscopic motors. 
The Brownian gyrator (BG) is a minimal stochastic model of a microscopic heat engine that performs, on average, a rotational motion along closed orbits in non-equilibrium conditions. It consists, as realized experimentally in <cit.>, of an optically-trapped (see <cit.>) Brownian colloidal particle that is simultaneously coupled to two heat baths kept at different temperatures T_x and T_y, acting along perpendicular directions and maintaining a non-equilibrium steady-state. An alternative experimental realization of the BG using electric circuits is in <cit.>. First theoretically introduced for the analysis of effective temperatures in non-equilibrium systems <cit.>, this model was subsequently revisited in <cit.> arguing that a systematic torque on the particle is generated once T_x ≠ T_y. To support this claim, it was shown that indeed the mean torque is non-zero in such non-equilibrium conditions. Further analyses evaluated the mean angular velocity <cit.>, and examined various aspects of the dynamical and steady-state behaviors for delta-correlated in time noises (see, e.g., Refs. <cit.>), for the BG in the quantum regime <cit.>, and also for the noises with long-ranged temporal correlations <cit.>. The BG operates in heat baths and evidently the torque, angular momentum L and angular velocity W exhibit sample-to-sample fluctuations and also fluctuate during each given realization. The spread of fluctuations and their typical values are unknown, hence their actual impact on the BG performance remains elusive. In this regard, it is unclear to which extent, e.g., the torque can be called systematic <cit.>. In the quest for the answer, we study here the probability density functions (PDFs) of L and W for a time-discretized BG model. We proceed to show that both PDFs are effectively broad such that the spread of fluctuations around mean values of L and W is much bigger than these mean values themselves. This implies that non-zero values of first moments only indicate some trend in an ensemble of BGs, but no systematic behavior can be observed for individual realizations. Our findings signify that the BG cannot be called a motor in the common sense, and we suggest some improvements which may result in a more regular behavior. Our theoretical predictions are confirmed by numerical simulations and also appear consistent with experimental data in <cit.>. The Model. - The model consists of two Langevin equations for two linearly coupled Ornstein-Uhlenbeck processes, each living at its own temperature, T_x or T_y (both measured in units of the Boltzmann constant). In its simplest settings <cit.>, the BG model is defined by Ẋ_t = - X_t + u Y_t + ξ_x(t) , Ẏ_t = - Y_t + u X_t + ξ_y(t) , with |u| < 1, and ξ_x(t) and ξ_y(t) being independent Gaussian noises with zero mean and covariance function ξ_i(t) ξ_j(t') = 2 T_j δ_i,jδ(t-t') , i,j = x,y . In eqs. (<ref>), the overbar denotes averaging over realizations of noises, while δ_i,j and δ(t) are the Kronecker symbol and the delta-function, respectively. For u=0, eqs. (<ref>) describe two independent Ornstein-Uhlenbeck processes. The solution of eqs. (<ref>) and the position PDF are presented in the Supplemental Materials (SM). The solution for arbitrary coefficients is discussed in <cit.>. We focus here on the following two random variables: – the first is the magnitude L of the angular momentum, L = X_t Ẏ_t - Y_t Ẋ_t . 
– the second is the magnitude W of the angular velocity, W = (X_t Ẏ_t - Y_t Ẋ_t)/(X^2_t+Y^2_t) , where the term in the denominator is the moment of inertia I = X^2_t+Y^2_t. In both eqs. (<ref>) and (<ref>) we assumed that the mass m=1. For the standard discrete-time (with time-step δ t) analogue of eqs. (<ref>) (see eqs. (<ref>) in the SM), we present below exact expressions for the PDFs of the random variables L and W. Probability density function of the angular momentum. - As shown in the SM, the PDF of the magnitude L of the angular momentum is given by P( L) = δ( L - (X_t Ẏ_t - Y_t Ẋ_t)) = 1/(4 π d)∫^2 π_0 dθ/Λ(θ) exp(-Ξ(θ) | L| ) , where d = √(4 a b - c^2) with a, b and c defined in the SM, Ξ(θ) = δ t (Λ(θ) - sign( L) u cos(2 θ))/(2 (T_y cos^2(θ) + T_x sin^2(θ))) , and Λ(θ) = (u^2 cos^2(2 θ) + 4 (T_y cos^2(θ) + T_x sin^2(θ)) (b cos^2(θ) + a sin^2(θ) + c cos(θ) sin(θ))/(d^2 δ t) )^1/2 . Except for the trivial case u=0, when P( L) obeys P_u=0( L) = √(δ t/(8 T_x T_y)) exp(- √(δ t/(2 T_x T_y)) | L| ) , the integral in eq. (<ref>) cannot be performed exactly and we evaluate it numerically. In Fig. <ref> we plot P( L) in eq. (<ref>) as a function of L for unequal temperatures T_x and T_y, four values of the coupling parameter u at fixed small δ t (panel (a)), and three values of δ t at fixed u (panel (b)). This plot, together with an asymptotic analysis, permits us to make the following general statements: – P( L) is peaked at L = 0, i.e., for most trajectories one should observe L=0. Hence, a non-zero value of the first moment, L = u (T_x - T_y) , is supported by atypical trajectories, and stems from the asymmetry (see below) of the PDF. The first moment exists in the limit δ t → 0, which agrees with previous results. The variance of L obeys Var( L) = 2 u^2 (T_x - T_y)^2 + (1 + u^2 δ t) (4 T_x T_y + u^2 (T_x - T_y)^2)/((1 - u^2) δ t) . The variance diverges in the continuous-time limit δ t → 0, meaning that the angular momentum is not self-averaging. Moreover, at finite δ t, even if the second term is dominant, the first term by itself is twice as large as L^2, such that the noise-to-signal ratio is generically bigger than 2, i.e., the scatter of values of L calculated from different realizations is very large. – For finite δ t, the large-L tails of the PDF follow P( L) ≃ exp(- | L|/L_±) , where the symbol ≃ denotes the leading behavior in the large-L limit, L_+ corresponds to the right (L > 0) tail and L_- to the left (L < 0) tail of the PDF. Mathematically, L_± are solutions of the sixth-order algebraic equation that defines the maximal values of the function 1/Ξ(θ) and therefore are rather complicated functions of the system's parameters. In general, L_± are increasing functions of the coupling u (see Fig. <ref>(a)). – When δ t → 0, both L_± diverge, because L_± ≃ 1/√(δ t). Hence, the PDF becomes an L-independent constant of order O(√(δ t)) on the interval L∈ [-1/√(δ t),1/√(δ t)], P( L) ≃ (√(2 (1 - u^2) δ t)/(4 π (T_x+T_y))) f(u,Δ) , Δ = (T_x - T_y)/(T_x + T_y) , where f(u,Δ) is defined in the SM. That is, P( L) tends to a uniform distribution. Such a behavior is apparent in Fig. <ref>(b). Probability density function of the angular velocity. - The derivation of P( W) is presented in the SM. It is defined as the following integral P( W) = δ( W - (X_t Ẏ_t - Y_t Ẋ_t)/(X^2_t+Y^2_t)) = 1/(2 π d δ t)∫^2 π_0 (T_y cos^2(θ) + T_x sin^2(θ)) dθ/( W^2 - 2 u cos(2 θ) W + Λ^2(θ))^3/2 , with Λ(θ) being defined in eq. (<ref>). For u = 0 the integral in eq.
(<ref>) can be performed exactly (see the SM), while in the general case it is amenable only to a numerical and asymptotic analyses. The PDF in eq. (<ref>) is depicted in Fig. <ref> as function of W for unequal temperatures, several values of the coupling parameter and several values of δ t. On the basis of the numerical plots and asymptotic analysis, the following conclusions can be drawn : – The maximum of P( W) displaces from zero for u ≠ 0 and is rounded, in contrast to the cusp-like behavior of P( L) at L=0. Hence, P( W) is an analytic function in the vicinity of the most probable value. – The first moment of the PDF P( W) obeys (see the SM) W = u √(1-u^2) (T_x - T_y)/√(4 T_xT_y + u^2 (T_x - T_y)^2) , which is a well-known result for the BG evolving in continuous time (δ t → 0). We note that the difference between the most probable W and W is much less pronounced than the one observed for L. In particular, for T_x =1, T_y = 5, u=1/2 and δ t = 1, we have that the most probable W≈ - 0.38, while W≈ - 0.35. – For finite δ t, the large-W tails of the PDF obey P( W) ≃√(1 - u^2) (T_x + T_y)/√(4 T_x T_y + u^2 (T_x-T_y)^2)δ t1/| W|^3 , where the symbol ≃ signifies that we deal with the leading in this limit behavior. The salient feature is that, for finite δ t, the PDF is characterized by heavy power-law tails such that W in eq. (<ref>) is the only existing moment and its variance is infinitely large. Therefore, W does not have much of a physical significance but merely indicates some non-zero trend in a statistical ensemble of Brownian gyrators, when T_x ≠ T_y. For a single BG it is therefore unlikely to observe a regular rotation, i.e., a motor-like behavior. The asymptotic form in eq. (<ref>) for u=3/4 is depicted in Fig. <ref>(a) (solid black line) and we observe that it converges to P( W) in eq. (<ref>) for W≈ 100. Subdominant correction term to the asymptotic form in eq. (<ref>) is displayed in the SM. – When δ t → 0, P( W) converges to a uniform distribution on a growing interval [-1/√(δ),1/√(δ t)], P( W) ≃(4 T_x T_y + u^2 (T_x - T_y)^2)/8 π (T_x + T_y)^2√(2 (1 - u^2) δ t) g(u,Δ) , where g(u,Δ) in presented in the SM. In Fig. <ref>(b) we depict P( W) for progressively smaller values of δ t, highlighting a transition to a uniform distribution. Comparison with experimental results. – To perform a comparison with experiments, we employed the data acquired in <cit.>, which experimentally realized the BG model by placing a Brownian particle in an elliptical optical potential and simultaneously maintaining it in contact with two heat baths kept at different temperatures. This was achieved by using a single optically-trapped <cit.> colloidal particle (polystyrene particle of diameter 1.98 μ m) suspended in an aqueous solution at room temperature (292 K). The ellipticity of the potential was produced by altering the laser beam's intensity profile via a spatial light modulator. The isotropic thermal environment was made anisotropic via a fluctuating electric field with a near-white frequency spectrum applied along the x-direction. This electric fluctuating field was generated by two thin wires with electric white noise placed on either side of the optical trap and raised the bath noise temperature along the x-direction to 1750 K, while the bath noise temperature along the y-direction remained at 292 K (room temperature). In this way, unequal temperatures along the x- and y-directions were produced to establish a non-equilibrium steady-state. In Fig. 
<ref> we depict P( L) and P( W) evaluated from a statistical ensemble of discrete-time BG trajectories recorded in the above experimental set-up. We observe that for P( L) the predicted form in eq. (<ref>) is fully consistent with the experimental data; the tails are exponential and P( L) shows a cusp-like behavior at the most probable value L=0. Also our predictions that P( W) is centered around a non-zero most probable value of W and that the large-W decay is a power-law appear to be valid features of the BG model. However, we note a slight discrepancy between the predicted value of the exponent (= 3), and the values α_± deduced from the experimental data, which appear to be somewhat smaller and hence, the tails are somewhat heavier. A probable origin of such a discrepancy is discussed in the SM. Discussion. – To recapitulate, we revisited a well-studied minimalistic model of a molecular motor – the so-called Brownian gyrator – which is believed to perform a rotational motion in non-equilibrium conditions. While previous works concentrated exclusively on the analysis of the mean values of the characteristics of the gyration process (torque, angular momentum L and angular velocity W), here we looked on this model from a broader perspective and calculated exact probability density functions of these random variables for a discrete-time BG model with an increment δ t. This permitted us to obtain a fully comprehensive picture of the BG performance. We showed that δ t is a crucial parameter. For δ t → 0, i.e., in the continuous-time limit, both PDFs tend to a uniform distribution with a vanishing amplitude and diverging variance. For fixed δ t, the PDF of L has exponential tails which ensures the existence of all moments. However, the variance of L appears to be much larger than the squared first moment, meaning that here the noise is always bigger than the signal. In other words, the value of the angular momentum varies significantly from one realization of Gaussian noises to another. More strikingly, the PDF of the angular velocity has heavy power-law tails and, in fact, the mean angular velocity is the only existing moment. In conclusion, fluctuations play a very destructive role for the performance of the BG, such that it cannot be considered as a common sense motor for which only some small amount of noise is permitted. On the contrary, for the BG the noise prevails producing a very erratic behavior. While the existence of first moments indicates some trend in an ensemble of such Brownian gyrators, no systematic rotation will take place on the level of a single realization. Indeed, only a non-vanishing fraction of realizations needs to have this property for the whole statistic to be affected. Our theoretical predictions are confirmed by numerical simulations and are also consistent with the data drawn from the experimental realization of the BG model in <cit.>. We note parenthetically that although heavy-tailed PDFs are common in probability theory, the list of their occurrences in realistic physical systems is rather short. Our prediction for the PDF of the angular velocity in conjunction with the experimental data in <cit.> adds a valuable example to this list. Finally, we note that minimal models often play an important role in the analysis of the behavior of complex systems. 
While they take into account only some basic features discarding a majority of side-processes, they are nonetheless very instructive because they provide conceptual insights helping the understanding of the dominant mechanisms. This is the reason why the BG model has attracted such an attention in the past. In view of our results, the model as it stands does not appear to be really performant. Therefore, a legitimate question is how the model can be improved, still remaining at a minimalistic level, to show a better performance. One of the possible issues is the definition of noise. We observed that in the δ t → 0 limit, the PDFs become progressively more defocused which is clearly detrimental for the performance. Consequently, replacing a continuous-time white-noise by a discrete noise with a bounded amplitude may be beneficial. On the other hand, in the limit δ t →∞ the model becomes deterministic and rotational motion disappears, which suggests that there may be some optimal discretization of noise. Second, we remark that the power-law tails of the probability density function of the angular velocity stem from undesirable realizations of Gaussian noises that produce anomalously high positive or negative values of the angular velocity and have a big statistical weight. Inspecting the definition of the angular velocity, eq. (<ref>), one may conclude that such realizations correspond to trajectories with non-zero (positive or negative) values of the angular momentum and very small values of the moment of inertia I. In other words, high values of the angular velocity occur when the BG circulates along closed orbits that concentrate around the origin. The probability density function of the moment of inertia is presented in the SM and we observe that actually I=0 is the most probable value. Therefore, a physically plausible suggestion to devise a motor with a more regular behavior is to forbid the BG to enter a finite-radius ball centered at the origin. This can be realized in practice by designing a potential with a strong repulsion from the origin at short scales, and an attraction at larger scales, such that the closed orbits along which the BG circulates will have a finite length. Here, again some optimization with respect to the range of attractive/repulsive interactions may take place. While having a too short range of repulsive interactions is detrimental, as we have shown here, a too long range will also be detrimental for the performance because the closed orbits will be too long and hence, an angular velocity too small. Lastly, the model may apparently be improved to some extent along the lines suggested in <cit.>, i.e., by introducing inertial terms into eqs. (<ref>) which will bound the acceleration. Acknowledgements. – The authors wish to thank Olivier Bénichou and Luca Peliti for helpful discussions. This research was performed under the auspices of Italian National Group of Mathematical Physics (GNFM) of INdAM. 99 reimann P. Reimann, Phys. Rep. 361, 57 (2002) berg H. C. Berg, Annu. Rev. Biochem. 72, 19 (2003). haenggi P. Hänggi and F. Marchesoni, Rev. Mod. Phys. 81, 387 (2009). sekimoto K. Sekimoto, Stochastic Energetics, (Springer, Berlin, 2010). seifert U. Seifert, Rep. Prog. Phys. 75, 126001 (2012). volpe C. Bechinger, R. Di Leonardo, H. Löwen, C. Reichhardt, G. Volpe, and G. Volpe, Rev. Mod. Phys. 88, 045006 (2016). sergio S. Ciliberto, Phys. Rev. X 7, 021051 (2017). argun A. Argun, J. Soni, L. Dabelow, S. Bo, G. Pesce, R. Eichhorn, and G. Volpe, Phys. Rev. 
E 96, 052106 (2017). optical P. H. Jones, O. M. Maragó, and G. Volpe, Optical Tweezers: Principles and Applications, (Cambridge University Press, Cambridge, 2015). alberto S. Ciliberto, A. Imparato, A. Naert, and M. Tanase, Phys. Rev. Lett. 110, 180601 (2013); J. Stat. Mech. 2013, 12014 (2013). peliti R. Exartier and L. Peliti, Phys. Lett. A 261, 94 (1999). rei R. Filliger and P. Reimann, Phys. Rev. Lett. 99, 230602 (2007). dotsenko V. Dotsenko, A. Maciolek, O. Vasilyev, and G. Oshanin, Phys. Rev. E 87, 062130 (2013). pascal V. Mancois, B. Marcos, P. Viot, and D. Wilkowski, Phys. Rev. E 97, 052121 (2018). bae Y. Bae, S. Lee, J. Kim, and H. Jeong, Phys. Rev. E 103, 032148 (2021). crisanti A. Crisanti, A. Puglisi, and D. Villamaina, Phys. Rev. E 85, 061127 (2012). lahiri S. Lahiri, P. Nghe, S.J. Tans, M.L. Rosinberg, and D. Lacoste, PLoS One 12, 1 (2017). lamberto S. Cerasoli, V. Dotsenko, G. Oshanin, and L. Rondoni, Phys. Rev. E 98, 042149 (2018); J. Phys. A: Math. Theor. 54, 105002 (2021). tyagi N. Tyagi and B.J. Cherayil, J. Stat. Mech. 2020, 113204 (2020). Chang2021 H. Chang, C.-L. Lee, P.-Y. Lai, and Y.-F. Chen, Phys. Rev. E 103, 022128 (2021). NEXUS J.V. Siches, O.M. Miangolarra, A. Taghvaei, and T.T. Georgiou, PNAS Nexus 1, 1 (2022). viale D. Lucente, A. Baldassarri, A. Puglisi, A. Vulpiani, and M. Viale, Phys. Rev. Res. 4, 043103 (2022). Dotsenko2022 V. S. Dotsenko, P. Viot, A. Imparato, and G. Oshanin, J. Stat. Mech. 2022, 123211 (2022); Out-of-equilibrium dynamics of two interacting optically-trapped particles, to appear in Scipost: preprint arXiv : 2302.07716 Fogedby2018 H. C. Fogedby and A. Imparato, EPL 122, 10006 (2018). Nascimento2021 E.d.S. Nascimento and W.A.M. Morgado, J. Stat. Mech. 2021, 013301 (2021). Squarcini2022a A. Squarcini, A. Solon, P. Viot, and G. Oshanin, J. Phys. A: Math. Theor. 55, 485001 (2022). sara S. Cerasoli, S. Ciliberto, E. Marinari, G. Oshanin, L. Peliti, and L. Rondoni, Phys. Rev. E 106, 014137 (2022). zia J. B. Weiss, B. Fox-Kemper, D. Mandal, A. D. Nelson, and R. K. Zia, J. Stat. Phys. 179, 1010 (2020). gleb C. Mejía-Monasterio, G. Oshanin, and G. Schehr, J. Stat. Mech. 2011, P06022 (2011). arc https://en.wikipedia.org/wiki/Atan2 Destructive effect of fluctuations on the performance of a Brownian gyrator Pascal Viot, Aykut Argun, Giovanni Volpe, Alberto Imparato, Lamberto Rondoni and Gleb Oshanin § SUPPLEMENTARY MATERIAL §.§ A. Position probability density function The solutions of eqs. (<ref>) with the initial conditions X_0 = Y_0 = 0 and Ẋ_0 = Ẏ_0 = 0 are explicitly given by X_t = e^-t∫^t_0 dτ e^τ cosh(u (t- τ)) ξ_x(τ) + e^-t∫^t_0 dτ e^τ sinh(u (t- τ)) ξ_y(τ) , Y_t = e^-t∫^t_0 dτ e^τ sinh(u (t- τ)) ξ_x(τ) + e^-t∫^t_0 dτ e^τ cosh(u (t- τ)) ξ_y(τ) . We focus on the characteristic function of the form ϕ(ν_1,ν_2) = exp(i ν_1 X_t + i ν_2 Y_t) . The averaging in the latter expression can be performed straightforwardly to give ϕ(ν_1,ν_2) = exp(- T_x ∫^t_0 dτ Q_x^2(t,τ) - T_y ∫^t_0 dτ Q_y^2(t,τ) ) , where Q_x(t,τ) = e^-(t - τ)(ν_1 cosh(u (t -τ)) + ν_2 sinh(u (t -τ)) ) , Q_y(t,τ) = e^-(t - τ)(ν_1 sinh(u (t -τ)) + ν_2 cosh(u (t -τ)) ) . Performing the integrals in eqs. 
(<ref>), we find T_x ∫^t_0 dτ Q_x^2(t,τ) + T_y ∫^t_0 dτ Q_y^2(t,τ) = a ν_1^2 + b ν_2^2 - c ν_1 ν_2 , where we have used the shortenings a = T_x p + T_y q/4 (1-u^2) , b= T_x q + T_y p/4 (1-u^2) , c = (T_x+T_y) l/2 (1-u^2) , with p = 2 - u^2 - e^-2 t( cosh(2 u t) + u sinh(2 u t)) - (1-u^2) e^- 2 t , q = u^2 - e^-2 t( cosh(2 u t) + u sinh(2 u t)) + (1-u^2) e^- 2 t , l = u - e^-2 t(u cosh(2 u t) + sinh(2 u t)) . Having defined the characteristic function, we find eventually the position probability density function by merely inverting the Fourier transform: P(X_t,Y_t) = 1/(2 π)^2∫^∞_-∞∫^∞_-∞ dν_1 dν_2 e^- i ν_1 X_t - i ν_2 Y_t ϕ(ν_1,ν_2) = 1/2 π dexp(- b X_t^2 + a Y_t^2 + c X_t Y_t/d^2) , where d = √(4 a b - c^2) and the coefficients a, b and c are defined in eqs. (<ref>) and (<ref>). §.§ B. Probability density function of the angular momentum §.§.§ 1. Characteristic function Φ_ L(ν). In virtue of eqs. (<ref>), the magnitude of angular momentum can be cast into the form L = X_t Ẏ_t - Ẋ_t Y_t ≡ u (X_t^2 - Y_t^2) + X_t ξ_y(t) - Y_t ξ_x(t) , such that its characteristic function is formally defined by Φ_ L(ν) = exp(i ν L) = exp(i ν u (X_t^2 - Y_t^2) + i ν X_t ξ_y(t) - i ν Y_t ξ_x(t) ) . We note now that the values of the noise variables ξ_x(t) and ξ_y(t) are statistically decoupled from the values of the positions Y_t and X_t at this very time moment, such that, in principle, we can average over the instantaneous values of the noises. The problem is that the values of the noises are ill-defined. Taking this into account, and also having in mind a comparison with numerical and experimental data, which actually correspond to trajectories recorded at discrete time moments, we turn to time-discretized version of the continuous-time Langevin equations (<ref>). In discrete-time with time-step δ t, a conventional representation of eqs. (<ref>) is X_t+δ t - X_t/δ t = - X_t + u Y_t + √(2 T_x/δ t) η_x(t) , Y_t+δ t - Y_t/δ t = - Y_t + u X_t + √(2 T_y/δ t) η_y(t) , where now the noises η_x,y(t) are dimensionless, independent, normally-distributed random variables with zero mean and unit variance, drawn independently at each discrete time step. In terms of eqs. (<ref>), the magnitude of the angular momentum takes the form L = u (X_t^2 - Y_t^2) + √(2 T_y/δ t) X_t η_y(t) - √(2 T_x/δ t) Y_t η_x(t) , and therefore the characteristic function of the angular momentum reads Φ_ L(ν) = exp(i ν u (X_t^2 - Y_t^2) + i ν√(2 T_y/δ t) X_t η_y(t) - i ν√(2 T_x/δ t) Y_t η_x(t) ) . Performing the averaging over the variables η_x(t) and η_y(t), we readily find that Φ_ L(ν) obeys Φ_ L(ν) = exp(- (T_y/δ t ν^2 - i ν u ) X_t^2 - (T_x/δ t ν^2 + i ν u ) Y_t^2 ) , which yields, upon averaging with the PDF in eq. (<ref>), the following explicit result: Φ_ L(ν) = (1 - 4 i (a - b) u ν + 4 (b T_x + a T_y/δ t + u^2 d^2) ν^2 - 4 i u d^2 (T_x - T_y)/δ t ν^3 + 4 d^2 T_x T_y/δ t^2 ν^4 )^-1/2 , where the coefficients a, b and c are defined in eqs. (<ref>) and (<ref>), and d = √(4 a b - c^2). Therefore, the characteristic function Φ_ L(ν) is simply the inverse of a square-root of a quartic polynomial of ν and becomes ill-defined, as mentioned above, in the limit δ t → 0. In the limit t →∞, the expression (<ref>) attains the form Φ_ L(ν) = (1 - 2 i u (T_x- T_y) ν + (1+u^2 δ t)(4 T_x T_y + u^2 (T_x-T_y)^2)/(1 - u^2) δ t ν^2 - i u (T_x- T_y) (4 T_x T_y + u^2 (T_x-T_y)^2)/(1 - u^2) δ t ν^3 + T_x T_y (4 T_x T_y + u^2 (T_x-T_y)^2)/(1 - u^2) δ t^2 ν^4)^-1/2 . 
The above expression permits us to access the usual properties characterizing the probability density function, i.e., the moments and the cumulants. In particular, differentiating eq. (<ref>) once and twice, and setting ν=0 afterwards, we find explicit expressions for the first moment and the variance of L in eqs. (<ref>) and (<ref>). §.§.§ 2. Probability density function P( L). The probability density function of the angular momentum is given by P( L) = 1/2 π∫^∞_-∞∫^∞_-∞ dX_t dY_t P(X_t,Y_t) ∫^∞_-∞ dν exp(- i ν L - (T_y/δ t ν^2 - i ν u ) X_t^2 - (T_x/δ t ν^2 + i ν u ) Y_t^2 ) = √(δ t/4 π)∫^∞_-∞∫^∞_-∞dX_t dY_t /√(T_y X_t^2 + T_x Y_t^2) P(X_t,Y_t) exp(- δ t ( L - u (X_t^2 - Y_t^2) )^2/4 (T_y X_t^2 + T_x Y_t^2)) . Turning to polar coordinates through the transformation X_t = ρcos(θ) and Y_t = ρsin(θ), we formally rewrite the expression in the second line in eq. (<ref>) as P( L) = 1/2 d√(δ t/π)∫^2 π_0dθ/√(T_y cos^2(θ) + T_x sin^2(θ))∫^∞_0 dρ exp( - δ t ( L - u ρ^2 cos(2 θ) )^2/4 ρ^2 (T_y cos^2(θ) + T_x sin^2(θ))) ×exp(- (b cos^2(θ) + a sin^2(θ) + c cos(θ) sin(θ)/d^2) ρ^2 ). Performing the integral over ρ, we find the expression (<ref>). In Fig. <ref> (left panel) we present a comparison between our theoretical prediction for P( L) in eq. (<ref>) and the results of numerical simulations. One observes a perfect agreement. §.§.§ 3. Finite δ t. Large-L asymptotic behavior of P( L). The function Ξ(θ), eq. (<ref>), is an oscillatory function of the polar angle θ, which has two minima (and two maxima) of equal depth (height) on the interval [0,2 π] (see Fig. S2). In general, the minima corresponding to L > 0 and L < 0 are attained at somewhat different values of θ and have somewhat different depths, which signifies that P( L) is asymmetric around L=0. In the limit L→∞, the integral in eq. (<ref>) is entirely dominated by the behavior of Ξ in the close vicinity of the minima, which yields the exponential asymptotic form in eq. (<ref>). Differentiating Ξ(θ) with respect to θ and equating the resulting expression to zero, we find an equation that determines the positions of the extrema of Ξ. This is a sixth-order algebraic equation, which is unsolvable for arbitrary values of parameters. We therefore consider only the limiting case when the coupling parameter u is close (but not equal) to zero and δ t is also small, in which limit it is possible to derive explicit forms of L_±. A rather cumbersome but straightforward analysis shows that here 1/L_± = √(δ t/2 T_x T_y)(1 - (T_y + T_x)/2 √(T_x T_y) |u| + O(u^2)) . When u=0, we recover from the latter expression the result in eq. (<ref>). Another situation, in which a systematic analysis of the large-L asymptotic behavior is possible, is that of thermodynamic equilibrium, i.e., T_x=T_y = T. Here, we pursue a slightly different approach focusing on the characteristic function in eq. (<ref>), which becomes (for u^2 δ t ≪ 1) Φ_ L(ν) = (1 + 4 T^2/(1 - u^2) δ t ν^2 + 4 T^4/(1 - u^2) δ t^2 ν^4)^-1/2 . The latter expression can be formally rewritten as Φ_ L(ν) = √((1 - u^2))/√((1 + 2 T^2 ν^2δ t)^2 - u^2) = √((1 - u^2))∫^∞_0 dx I_0(|u| x) exp(- (1 + 2 T^2 ν^2/δ t) x) , and hence, the probability density function attains the form P( L) = √((1 - u^2) δ t)/2 √(2 π) T∫^∞_0 dx/√(x) I_0(|u| x) exp(- x - δ t L^2/8 T^2 x) , where I_0 is the modified Bessel function of the first kind. Setting u=0 in eq. (<ref>), we recover our eq. (<ref>) with T_x=T_y=T. 
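For concreteness, the θ-integral representation of P( L) quoted in the main text can be evaluated with a few lines of quadrature. The sketch below is not the authors' code: the function and parameter names are ours, the coefficients a, b, c are taken in their long-time (t → ∞) forms (p → 2 - u^2, q → u^2, l → u), and δ t is read as an overall factor of the exponent Ξ(θ), which is the reading that reproduces the u = 0 closed form P_u=0( L); the numerical values are illustrative only.

```python
import numpy as np

def pdf_L(L, Tx, Ty, u, dt, ntheta=4000):
    # long-time (t -> infinity) coefficients: p -> 2 - u^2, q -> u^2, l -> u
    a = (Tx * (2 - u**2) + Ty * u**2) / (4 * (1 - u**2))
    b = (Tx * u**2 + Ty * (2 - u**2)) / (4 * (1 - u**2))
    c = (Tx + Ty) * u / (2 * (1 - u**2))
    d = np.sqrt(4 * a * b - c**2)
    th = np.linspace(0.0, 2 * np.pi, ntheta, endpoint=False)
    S = Ty * np.cos(th)**2 + Tx * np.sin(th)**2
    K = b * np.cos(th)**2 + a * np.sin(th)**2 + c * np.cos(th) * np.sin(th)
    Lam = np.sqrt(u**2 * np.cos(2 * th)**2 + 4 * S * K / (d**2 * dt))
    Xi = dt * (Lam - np.sign(L) * u * np.cos(2 * th)) / (2 * S)
    # simple Riemann sum over the periodic integrand
    return np.sum(np.exp(-Xi * abs(L)) / Lam) * (2 * np.pi / ntheta) / (4 * np.pi * d)

# sanity check against the u = 0 closed form of the main text
Tx, Ty, dt, L = 1.0, 5.0, 0.1, 3.0
closed = np.sqrt(dt / (8 * Tx * Ty)) * np.exp(-np.sqrt(dt / (2 * Tx * Ty)) * abs(L))
print(pdf_L(L, Tx, Ty, u=0.0, dt=dt), closed)   # these two should coincide
print(pdf_L(L, Tx, Ty, u=0.5, dt=dt))           # generic coupling
```

With these conventions the quadrature at u = 0 matches the closed-form expression, which is the internal consistency check used here.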
Upon some inspection, we realize that the large-L (more precisely, δ t L^2/(8 T^2) is to be large and u bounded away from zero) behavior of P( L) is supported by the large-x behavior of the integrand. To this end, we take advantage of the Hankel's asymptotic expansion of the modified Bessel function I_0(z) = e^z/√(2 π z)(1 + 1/8 z + 9/2 (8 z)^2 + O(1/z^3)) . Inserting the latter expansion into eq. (<ref>) and performing integrations, we arrive at the converging asymptotic large-L expansion of the form P( L) = (1-u^2/8 π T |u L|)^1/2(2 δ t/(1 - |u|))^1/4exp(- √((1 - |u|) δ t/2)| L|/T) ×(1 - (3 |u| - 2)T/4 √(2 (1 - |u|) δ t) |u L| + 3 T^2/16 δ u^2 L^2 + O(1/| L|^3) ) . Consequently, we recover the asymptotic form in eq. (<ref>) with L_+=L_-= √(2/(1- |u|) δ t) T , which is fully consistent with eq. (<ref>). Note that the higher order terms in δ t and u are evidently missing here because we assumed that u^2 δ t ≪ 1. §.§.§ 4. Continuous-time limit δ t → 0. Asymptotic behavior of P( L). We concentrate on the behavior of the probability density function P( L) in the limit δ t → 0. One notices that in this limit the second term in eq. (<ref>) becomes much larger than the first term, (i.e., u^2 cos^2(2 θ)), which can be therefore safely neglected. In consequence, we have Λ(θ) ≃2/d √(δ t)((T_x sin^2(θ) + T_y cos^2(θ)) (b cos^2(θ) + a sin^2(θ) + c cos(θ) sin(θ)) )^1/2 , which implies that Λ(θ) diverges when δ t → 0. In this limit, the first term in the nominator in eq. (<ref>) is much larger than the second term, (i.e., sign( L) u cos(2 θ)), which can be neglected. Therefore, the function Ξ(θ) in eq. (<ref>) obeys (in this limit) Ξ(θ) ≃Λ(θ)/2 (T_y cos^2(θ) + T_x sin^2(θ))δ t = (b cos^2(θ) + a sin^2(θ) + c cos(θ) sin(θ) /T_x sin^2(θ) + T_y cos^2(θ))^1/2√(δ t)/d and hence, vanishes in proportion to the square-root of δ t. We assume that t →∞, in which case the coefficients a, b and c attain sufficiently simple time-independent forms. Using the definitions of the coefficients a, b and c, and conveniently rewriting the terms entering the expression (<ref>) as T_x sin^2(θ) + T_y cos^2(θ) = T_x + T_y/2(1 - Δcos(2 θ)) , Δ = T_x - T_y/T_x + T_y , b cos^2(θ) + a sin^2(θ) + c cos(θ) sin(θ) = T_x + T_y/4 (1 - u^2)(1 + u sin(2 θ) - (1-u^2) Δcos(2 θ) ) . Further on, we suppose that L is finite (i.e., Lδ t is small when δ t → 0) and expand the expression in the right-hand-side of eq. (<ref>) into the Taylor series in powers of δ t. In doing so, we realize that, in the leading in the limit δ t order, the probability density function P( L) in eq. (<ref>) is given by eq. (<ref>) with f(u,Δ) = ∫^2 π_0 dϕ/√((1 - Δcos(ϕ)) (1 + u sin(ϕ) - (1-u^2) Δcos(ϕ) )) . The integral in the latter equation cannot be performed exactly. To get an idea of the behavior of f(u,Δ), we depict it in Fig. <ref> as function of Δ for several values of the coupling parameter u. Note that f(u,Δ) is (logarithmically) diverging in the limit Δ→± 1, i.e., when either of the temperatures vanishes. It was shown in <cit.> that this limit is rather peculiar and the case when either of the temperatures vanishes is to be treated separately. We therefore suppose that both T_x,T_y > 0. Further on, correction terms to the leading asymptotic behavior in eq. (<ref>) are of order Lδ t. We verified that the coefficient in front of Lδ t is finite, such that a uniform distribution of L in the limit δ t → 0 is a valid feature. §.§ C. Probability density function of the angular velocity. §.§.§ 1. Characteristic function Φ_ W(ν). Using the discretized Langevin eqs. 
(<ref>) (see eqs. (<ref>) above), we rewrite the angular velocity as W = u (X_t^2 - Y_t^2)/X_t^2+ Y_t^2 + √(2 T_y) X_t η_y(t) - √(2 T_x) Y_t η_x(t) /√(δ t)(X_t^2+ Y_t^2) , such that characteristic function Φ_ W(ν) is given by : Φ_ W(ν) = exp(i ν u (X_t^2 - Y_t^2)/X_t^2+ Y_t^2 + i ν√(2 T_y) X_t η_y(t) - √(2 T_x) Y_t η_x(t) /√(δ t)(X_t^2+ Y_t^2)) . Again, noticing that the instantaneous values of the noises η_x(t) and η_y(t) at time t have no effect on the instantaneous positions X_t and Y_t at this very time, we straightforwardly average over η_x(t) and η_y(t) to get Φ_ W(ν) = exp(i ν u (X_t^2 - Y_t^2)/X_t^2+ Y_t^2 - T_y ν^2 X_t^2/δ t (X_t^2 + Y_t^2)^2 - T_x ν^2 Y_t^2/δ t (X_t^2 + Y_t^2)^2) . §.§.§ 2. Probability density function P( W). The probability density function P( W) of the angular velocity is formally defined as P( W) = 1/2 π∫^∞_-∞ dν Φ_ W(ν) e^-I ν W = 1/2 π∫^∞_-∞ dν e^-I ν W∫^∞_-∞∫^∞_-∞ dX_t dY_t P(X_t,Y_t) × exp(i ν u (X_t^2 - Y_t^2)/X_t^2+ Y_t^2 - T_y ν^2 X_t^2/δ t (X_t^2 + Y_t^2)^2 - T_x ν^2 Y_t^2/δ t (X_t^2 + Y_t^2)^2) . Changing the integration variables to polar coordinates, we formally rewrite the latter expression as P( W) = 1/(2 π)^2 √(4 a b - c^2)∫^2 π_0 dθ∫^∞_-∞ dν exp(i ν(u cos(2 θ) - W)) ×∫^∞_0 ρ dρ exp(- ν^2 (T_y cos^2(θ) + T_x sin^2(θ))/δ t ρ^2 - (b cos^2(θ) + a sin^2(θ) + c cos(θ) sin(θ)/4 a b - c^2) ρ^2) = 1/π^2 δ t √(4 a b - c^2)∫^2 π_0 (T_y cos^2(θ) + T_x sin^2(θ)) dθ/√(Λ^2(θ) - u^2 cos^2(2 θ))∫^∞_0ν dν cos(ν(u cos(2 θ) - W)) × K_1(√(Λ^2(θ) - u^2 cos^2(2 θ)) ν) , where K_1 is the modified Bessel function and Λ(θ) is defined in eq. (<ref>). Performing the integral over ν, we arrive at our result in eq. (<ref>). §.§.§ 3. Finite δ t. Large-W asymptotic behavior of P( W). Asymptotic large-W behavior of the probability density function P( W) in eq. (<ref>) can be accessed very directly by merely expanding the denominator in inverse powers of W. Verifying that all the integrals defining the coefficients in this expansion exist, we find P( W) = (T_x + T_y)/2 d δ t1/| W|^3 - 3 u (T_x - T_y)/4 d δ t1/ W^4 + o(1/ W^4) . In the limit t →∞, eq. (<ref>) gives P( W) = √(1 - u^2) (T_x + T_y)/√(4 T_x T_y + u^2 (T_x-T_y)^2)δ t1/| W|^3 + 3 u √(1 - u^2) (T_y - T_x)/2 √(4 T_x T_y + u^2 (T_x-T_y)^2)δ t1/ W^4 + o(1/ W^4) . The first term in this expansion is our eq. (<ref>) presented in the main text, while the second term defines the sub-dominant contribution. Interestingly enough, the amplitude of the second term vanishes in equilibrium conditions (i.e., for T_x = T_y) such that in equilibrium the sub-dominant contribution will vanish with W at a faster rate. Because of the algebraic tail, P( W) has only the first moment. In virtue of eq. (<ref>), we have W = u (X_t^2 - Y_t^2/X_t^2 + Y_t^2) = u ∫^∞_-∞∫^∞_-∞ dX_t dY_t P(X_t,Y_t) (X_t^2 - Y_t^2/X_t^2 + Y_t^2) = u √(1-u^2) (T_x - T_y)/√(4 T_xT_y + u^2 (T_x - T_y)^2) . This is a well-known result which shows that W is not equal to zero when simultaneously u ≠ 0 and T_x ≠ T_y. In general, W is a non-monotonic function of the coupling constant u, which vanishes when either u=0 or |u| = 1 and, hence, attains a maximal value for some intermediate coupling. §.§.§ 4. Continuous-time limit δ t → 0. Asymptotic behavior of P( W). We focus on the PDF P( W) defined in eq. (<ref>) and suppose that W is finite. Since Λ(θ) diverges in the limit δ t → 0, in the leading in δ t order, we are entitled to W-dependent terms in the denominator. Then, using eqs. 
(<ref>) and (<ref>), we have P( W) ≃d^2 √(δ t)/16 π∫^2 π_0 dθ/(T_y cos^2(θ) + T_x sin^2(θ))^1/2 (b cos^2(θ) + a sin^2(θ) + c cos(θ) sin(θ))^3/2 = (4 T_x T_y + u^2 (T_x - T_y)^2)/8 π (T_x + T_y)^2√(2 (1 - u^2) δ t)∫^2 π_0 dϕ/√((1 - Δcos(ϕ)) (1 + u sin(ϕ) - (1 - u^2) Δcos(ϕ))^3) , i.e., our eq. (<ref>) with g(u,Δ) given by g(u,Δ) = ∫^2 π_0 dϕ/√((1 - Δcos(ϕ)) (1 + u sin(ϕ) - (1 - u^2) Δcos(ϕ))^3) . Likewise f(u,Δ) in eq. (<ref>), g(u,Δ) diverges logarithmically when Δ→± 1, i.e., either of the temperatures vanishes. Therefore, the asymptotic form in eq. (<ref>) is valid only when both temperatures are bounded away from zero. §.§.§ 5. Probability density function P( W) in the decoupled case u = 0. Lastly, we aim to find an analogue of eq. (<ref>) which describes the probability density function P( L) in the decoupled case u = 0. For u = 0, the probability density function in eq. (<ref>) becomes (in the limit t →∞) P( W) = 1/2 π√(δ t/T_x T_y)∫^2 π_0 (T_y cos^2(θ) + T_x sin^2(θ)) dθ/(δ t W^2 + 2T_x T_y(T_y cos^2(θ) + T_x sin^2(θ))^2 )^3/2 . Using the first of eqs. (<ref>) and the integral identity p/(a^2 + p^2)^3/2 = ∫^∞_0 τ dτ J_0(a τ) e^- p τ , we cast eq. (<ref>) into the form P( W) = √(δ t/2)∫^∞_0 τ dτ J_0(√(δ t) Wτ) J_0(i (T_x+T_y)/√(2 T_x T_y)Δτ) exp(- (T_x + T_y)/√(2 T_x T_y)τ) = - √(δ t/2) d/dα∫^∞_0 dτ J_0(√(δ t) Wτ) J_0(i βτ) exp(- ατ) , α = (T_x + T_y)/√(2 T_x T_y) , β = (T_x-T_y)/√(2 T_x T_y) . The integral in the last line in eq. (<ref>) can be performed explicitly to give P( W) = - δ t^1/4/π√(2 i β W) d/dα Q_-1/2(α^2 + δ t W^2 - β^2/2 i β W√(δ t)) , where Q_-1/2 is the Legendre function. It might be also convenient to express the result in eq. (<ref>) in an explicit form using the Gauss hypergeometric functions: P( W) = (T_x + T_y) √(T_x T_y δ t (2 + δ t W^2))/2 (2 T_y + T_x δ t W^2) (2 T_x + T_y δ t W^2) _2F_1(3/4, 1/4; 1; - 2 (T_x - T_y) δ t W^2/T_x T_y (2 + δ t W^2)^2) + (T_x - T_y)^2 δ t^3/2 W^2/4 (T_x T_y)^3/4 (2 + δ t W^2)^3/2 (2 T_y + T_x δ t W^2) (2 T_x + T_y δ t W^2) _2F_1(5/4, 3/4; 2; - 2 (T_x - T_y) δ t W^2/T_x T_y (2 + δ t W^2)^2) . Importantly, when two processes (<ref>) are decoupled, the probability density function P( W) is an even function of W, such that it is symmetric around W = 0, likewise the probability density function of the angular momentum (see eq. (<ref>)). The dependence on temperatures and the functional form of the probability density function P( W) are evidently much more complicated than P( L) in eq. (<ref>). §.§ D. Probability density function of the moment of inertia We evaluate here the probability density function of the moment of inertia I=X^2_t+Y^2_t, which is a positive definite random variable. Therefore, it is convenient to focus on its moment-generating function Φ_ I(λ) = exp(- λ I) , λ≥ 0 , for which one readily obtains the following exact result Φ_ I(λ) = (1 + 4 (a+b) λ + 4 d^2 λ^2)^-1/2 , where the coefficients a, b, c and d are defined in eqs. (<ref>) and (<ref>). Inverting the latter expression, we find the probability density function P( I) = 1/2 dexp(- (a+b)/2 d^2 I) I_0(√((a - b)^2 + c^2)/2 d^2 I) , where I_0 is the modified Bessel function. In the limit t →∞, P( I) attains the form P( I) = √(1 - u^2/4 T_x T_y + u^2 (T_x - T_y)^2)exp(- T_x+ T_y/4 T_x T_y + u^2 (T_x - T_y)^2 I) × I_0(√(4 u^2 T_x T_y + (1 - u^2 + u^4) (T_x - T_y)^2)/4 T_x T_y + u^2 (T_x - T_y)^2 I) . In the case of two decoupled Ornstein-Uhlenbeck processes (i.e., for u =0), the probability density function in eq. 
(<ref>) simplifies considerably to give P( I) = √(1/4 T_x T_y )exp(- T_x+ T_y/4 T_x T_y I) I_0(|T_x - T_y|/4 T_x T_y I) . In Fig. <ref> the probability density function P( I) is depicted as function of I for three values of u (u=0, 0.5 and 0.8) and temperatures T_x=1 and T_y=5. Noisy curves are the solutions of stochastic eqs. (<ref>) using the Euler-Maruyama algorithm, while the dot-dashed curves correspond to the analytical expression (<ref>). One observes a perfect agreement between the numerical results and the theoretical prediction. We also note that the most probable value of the moment of inertia I is attained at I = 0, which explains why large values of W are abundant and therefore, why P( W) possesses heavy power-law tails. §.§ E. Numerical simulations. Numerical simulations of the BG model are performed by using the standard Euler-Maruyama algorithm. To compute the angular momentum and the angular velocity, we take advantage of two alternative approaches: – Approach I. The first approach relies on the formal definition of W as the rate of change of the angular position with respect to time W=δθ/δ t , where the instantaneous value of the polar angle is expressed via the cartesian coordinates as θ = atan2(Y_t,X_t) where the function atan2 is the 2-argument arctangent <cit.>. While determining W through eq. (<ref>), we exercise care that at each incremental step δθ does not exceed π, which is controlled by the increment δ t. Note also that within this approach the distribution of the angular velocity has its support on the interval [-π/δ t,π/δ t], so that while evaluating P( W) we consider only such values of W that are away from the boundaries, in order to avoid aliasing effects appearing close to the boundaries of the interval. To access the behavior at larger values of W, we have to diminish the increment δ t. Of course, this can be readily done in numerical simulations, but in experiments the value of δ t is bounded from below by the maximal frequency at which the images of the trajectories are taken. Once W is determined, the angular momentum is found via the relation L=(X_t^2+Y_t^2) W . Note that this approach is appropriate for the analysis of discrete-time trajectories recorded both in numerical simulations and in experiments (see Fig. <ref> in the main text), because it necessitates the position data only. In Fig. <ref> the results obtained using the approach I are depicted by noisy red curves. – Approach II. The second approach hinges on the standard recursive solution of the time-discretized Langevin eqs. (<ref>), for which the angular momentum L and the angular velocity W are determined by eqs. (<ref>) and (<ref>), respectively. Generating in numerical simulations the dimensionless noises η_x(t) and η_y(t), we build recursively the trajectories X_t and Y_t and eventually calculate the angular momentum and the angular velocity. Results obtained via this approach are depicted in Fig. <ref> by noisy green curves. Note that this second approach is only applicable for the numerical simulations. Lastly, we comment on the discrepancy between the theoretically predicted exponent (= 3), characterizing the tails of the probability density function P( W), and somewhat lower values deduced from the fitting of the experimentally-evaluated PDFs. To this end, we resort to numerical simulations of the BG model using approaches I and II and preform the fitting of the numerical data, which we fully control. These fits are presented in Fig. S5. 
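Before turning to the comparison of the two approaches, here is a minimal simulation sketch (not the authors' code; the parameter values and the burn-in length are illustrative choices of ours). It iterates the time-discretized Langevin equations of the SM with the Euler-Maruyama scheme, accumulates L and W from their discrete-time expressions (approach II) and from wrapped increments of the polar angle obtained with atan2 (approach I), and compares the sample means with the analytical values L = u (T_x - T_y) and W quoted in the main text.

```python
import numpy as np

rng = np.random.default_rng(0)
Tx, Ty, u, dt = 1.0, 5.0, 0.5, 0.01      # illustrative parameters
nsteps, nburn = 200_000, 10_000

x, y = 0.0, 0.0
L_II, W_II, W_I = [], [], []
theta_old = np.arctan2(y, x)

for _ in range(nsteps):
    etax, etay = rng.standard_normal(2)
    # one step of the time-discretized Langevin equations (SM, section B.1)
    xn = x + dt * (-x + u * y) + np.sqrt(2 * Tx * dt) * etax
    yn = y + dt * (-y + u * x) + np.sqrt(2 * Ty * dt) * etay
    # approach II: L and W from their discrete-time definitions
    L = (u * (x**2 - y**2)
         + np.sqrt(2 * Ty / dt) * x * etay
         - np.sqrt(2 * Tx / dt) * y * etax)
    L_II.append(L)
    W_II.append(L / (x**2 + y**2 + 1e-30))   # tiny guard only for the initial step
    # approach I: increment of the polar angle via atan2, wrapped to (-pi, pi]
    theta_new = np.arctan2(yn, xn)
    dtheta = (theta_new - theta_old + np.pi) % (2 * np.pi) - np.pi
    W_I.append(dtheta / dt)
    theta_old = theta_new
    x, y = xn, yn

print("mean L:", np.mean(L_II[nburn:]), " theory:", u * (Tx - Ty))
print("mean W (approach I):", np.mean(W_I[nburn:]),
      " theory:", u * np.sqrt(1 - u**2) * (Tx - Ty)
                  / np.sqrt(4 * Tx * Ty + u**2 * (Tx - Ty)**2))
```

At finite δ t the approach-I average is only an approximation to the continuous-time angular velocity, which is precisely the point discussed below.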
We observe that, for the same value of δ t (which is quite small, δ t = 0.002), the PDF P( W) obtained within approach I is defined on a much shorter interval than the one obtained within approach II, and hence its large-W tails are systematically heavier than in the exact solution. The best-fit values of the exponents α_± obtained within approach I are α_- ≈ 2.73 and α_+ ≈ 2.71, respectively, which are somewhat higher than the exponents deduced from the experimental data (see Fig. 3 in the main text). This implies that the accuracy of approach I, which is used to analyze the experimental data, can be somewhat improved by reducing δ t, but the latter cannot be made arbitrarily small due to natural limits imposed on the sampling frequency of imaging the trajectories in Ref. <cit.>. In turn, approach II appears to be more accurate, giving values of the exponents that are closer to the predicted value.
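The text does not spell out how the tail exponents α_± are extracted. A simple choice, assumed here and not taken from the paper, is a least-squares fit of log P versus log |W| on a logarithmically binned histogram of the large-|W| samples; the threshold wmin and the bin number are placeholders, and W_II refers to the simulation sketch above.

```python
import numpy as np

def tail_exponent(samples, wmin, nbins=60):
    # estimate alpha in P(W) ~ |W|^(-alpha) from samples with |W| > wmin
    w = np.abs(np.asarray(samples, dtype=float))
    w = w[w > wmin]
    edges = np.logspace(np.log10(wmin), np.log10(w.max()), nbins)
    hist, edges = np.histogram(w, bins=edges, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])      # geometric bin centers
    keep = hist > 0
    slope, _ = np.polyfit(np.log(centers[keep]), np.log(hist[keep]), 1)
    return -slope

# e.g., separately for the two tails of the simulated angular velocity:
# alpha_plus  = tail_exponent([w for w in W_II if w > 0], wmin=50.0)
# alpha_minus = tail_exponent([-w for w in W_II if w < 0], wmin=50.0)
```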
http://arxiv.org/abs/2307.05002v1
20230711034826
Machine learning to predict the solar flux and geomagnetic indices to model density and Drag in Satellites
[ "S. Aljbaae", "J. Murcia-Pineros", "A. F. B. A. Prado", "R. V. Moraes", "V. Carruba", "G. A. Carita" ]
physics.space-ph
[ "physics.space-ph" ]
3rd IAA Conference on Space Situational Awareness (ICSSA) GMV, Madrid, Spain IAA-ICSSA: 4-6/04/2022 Machine learning to predict the solar flux and geomagnetic indices to model density and Drag in Satellites [email protected] J. Murcia-Piñeros [3], A. F. B. A. Prado [1], R. V. Moraes [3], V. Carruba [4], G. A. Caritá [1] [1] Division of Post-Graduate Studies, INPE, C.P. 515, 12227-310 São José dos Campos, SP, Brazil. [2] 2RP Net - Data-driven Company, Av. Paulista, 1159, sala 1511 - Bela Vista, SP, Brazil. [3] Federal University of São Paulo (UNIFESP), Institute of Science and Technology (ICT), São José dos Campos, SP, 12247-014, Brazil. [4] São Paulo State University (UNESP), School of Natural Sciences and Engineering, Guaratinguetá, SP, 12516-410, Brazil. Keywords: data analysis, celestial mechanics, atmospheric effects, space vehicles, machine learning § ABSTRACT In recent years (2000-2021), human space activities have been increasing faster than ever. More than 36000 objects larger than 10 cm in orbit around the Earth are currently tracked by the European Space Agency (ESA)[https://www.esa.int/Safety_Security/Space_Debris/Space_debris_by_the_numbers]. Around 70% of all cataloged objects are in Low-Earth Orbit (LEO). Aerodynamic drag provides one of the main sources of perturbations in this population, gradually decreasing the semi-major axis and period of LEO satellites. Usually, an empirical atmosphere model as a function of solar radio flux and geomagnetic data is used to calculate the orbital decay and lifetimes of LEO satellites. In this respect, a good forecast for the space weather data could be a key tool to improve the model of drag. In this work, we propose using a time-series forecasting model to predict the future behavior of the solar flux and to calculate the atmospheric density, in order to improve the analytical models and reduce the drag uncertainty. § INTRODUCTION During the last few years, an exponential increase in the population of objects in orbit around our planet has been observed, especially in Low Earth Orbits (LEO, <cit.>). That reduces the available operational orbits. It also increases the probability of collisions, which, when they happen, result in clouds of debris propagating along the orbits, as in the case of the Iridium 33 and Kosmos 2251 artificial satellites. Moreover, recent activities, like the multiple tests of Anti-Satellite Weapons (ASAT) and the new large-scale constellations, could exponentially increase the population of objects around Earth in the next few years. If these activities are not controlled and/or regulated, the possible occurrence of a catastrophic scenario, known as the Kessler effect, could limit space activities and access to orbit for a long period <cit.>. The LEO environment naturally contributes to the mitigation of artificial objects in orbit through the loss of orbital mechanical energy, which is driven by the atmosphere-satellite interaction, mathematically modeled as the perturbation caused by drag. Usually, a simplified model for satellites in LEO is used to reduce the computational cost during the propagations, where the main forces that influence the motion are the Keplerian gravity field of the Earth, the perturbation due to the non-sphericity of the central body (the J2 and J4 terms of the gravitational perturbation) and the atmospheric drag.
Other perturbations, like a third body (either the Moon or the Sun), solar radiation pressure, tides, and albedo, can be considered negligible at altitudes lower than 400 km <cit.>. With the previous considerations, the equation of motion of a satellite moving in LEO, in an inertial system located at the Earth's center of mass, is written as: r̈⃗ = -g⃗_4×4 + a⃗_D , where g_4×4 represents the Earth's Gravitational Model (EGM-08) of order 4×4, r̈⃗ is the inertial acceleration vector, and a⃗_D is the drag acceleration vector, which acts in the direction opposite to the airflow vector V⃗_∞. The airflow is the difference between the satellite's inertial velocity and the atmospheric velocity due to the Earth's rotation, including the winds. Several models of drag have been applied to determine the atmosphere-satellite interaction and to reduce the uncertainty <cit.>. The basic drag acceleration model is described as follows: a⃗_D = -ρ (C_D A/(2m)) V_∞ V⃗_∞ , where A is the satellite's mean area normal to its velocity vector, which is a difficult parameter to estimate due to the winds and changes in attitude, C_D is the drag coefficient, a dimensionless quantity indicating the satellite's susceptibility to drag forces, and m is the satellite's mass. The quantity m/(A C_D) is usually called the ballistic coefficient. A satellite with a low ballistic coefficient will be considerably affected by the drag forces. ρ is the atmospheric density, which is a function of the solar activity, local time, altitude and geographic coordinates; as such, it is a rather difficult parameter to estimate. For more details about modelling the aerodynamic drag, we refer the readers to <cit.>. Due to the satellite geometry, materials, and the uncertainty of the attitude, C_D is approximated by a mean value, reported in the scientific literature as 2.2 for satellites in the upper atmosphere in Free Molecular Flow (FMF) <cit.>. With information on the satellite geometry, attitude and materials, it is possible to implement a high-fidelity model of the drag for FMF and/or rarefied flow, as presented in <cit.>; however, this is out of the scope of the present research. In fact, the main problem for orbit determination and propagation in LEO is the accuracy of the drag perturbation. As shown in Eq. <ref>, the drag model is a function of the atmospheric density and, at the same time, of the space weather, which is a stochastic effect because of the multiple uncertainties affecting it, like the atmospheric conditions due to the solar and geomagnetic activity, or the atmospheric density estimations due to the use of empirical models and the atmospheric dynamics (including winds). <cit.> discussed the importance of atmospheric modeling for drag estimation. They presented a detailed description of the uncertainties in modeling the drag, the differences between the atmospheric models, and the functions used to estimate the solar flux and geomagnetic data. For the solar cycle forecast, <cit.> presents a good analytical approximation, which predicts the 11-year solar cycle with variations lower than 15%, a good model to describe the general behavior of the solar flux over a long-term period. On the other hand, for the prediction of the geomagnetic indices, it is recommended to use the cubic spline approach <cit.>. Machine learning could potentially address this type of problem, describing the daily variations in the solar activity or in the planetary amplitude.
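To make the basic drag acceleration model above concrete, here is a minimal sketch (not from the paper): C_D = 2.2 is the free-molecular-flow value quoted in the text, while the density, area, mass, and velocity values are placeholders of roughly LEO magnitude, not outputs of any atmosphere model.

```python
import numpy as np

def drag_acceleration(rho, v_inertial, v_atm, Cd=2.2, A=1.0, m=100.0):
    # a_D = -rho * (Cd*A/(2m)) * |V_inf| * V_inf, with V_inf the airflow velocity
    # (satellite velocity relative to the co-rotating atmosphere);
    # Cd = 2.2 is the FMF value quoted in the text, A [m^2] and m [kg] are placeholders
    v_inf = np.asarray(v_inertial, dtype=float) - np.asarray(v_atm, dtype=float)
    return -rho * (Cd * A / (2.0 * m)) * np.linalg.norm(v_inf) * v_inf

# illustrative numbers for a roughly 400 km circular orbit (order of magnitude only)
rho = 3e-12                             # kg/m^3, placeholder atmospheric density
v_sat = np.array([7670.0, 0.0, 0.0])    # m/s, inertial velocity (placeholder)
v_atm = np.array([0.0, 465.0, 0.0])     # m/s, approximate atmosphere co-rotation
print(drag_acceleration(rho, v_sat, v_atm))                    # m/s^2
print("ballistic coefficient m/(Cd*A):", 100.0 / (2.2 * 1.0))  # kg/m^2
```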
In this context, predicting the future behavior of the weather data with reasonable confidence is of particular interest. This challenging task has already been addressed by several authors; for instance, <cit.> used a linear autoregressive algorithm with lags based on the autocorrelation function, where the highest correlations of each day are used to forecast the next one, which is similar to the simple naive forecasting method that we will use later in this work. <cit.> used the global solar magnetic field to forecast the solar 10.7 cm (2.8 GHz) radio flux. A simple forecasting model is applied in <cit.>, using a linear combination of the previous 81 observations to forecast the solar flux from 1 to 45 days. In this work, we apply deep learning methods for time-series forecasting, using historical data of solar activity (from 1/10/1957 to 1/11/2021), available in the Earth Orientation Parameter (EOP) and Space Weather Data[https://celestrak.com/SpaceData/, accessed on November 2021.], to predict the behavior of the weather data and to calculate the atmospheric density. § METHODOLOGY AND RESULTS Daily data, since 1/10/1957, for the Solar Radio Flux (F10.7 OBS) is available in the Earth Orientation Parameter (EOP) and Space Weather Data. It is a univariate series, presented in the left panel of Fig. <ref>. Observing this plot, we can notice that seasonal trends probably exist. The daily planetary amplitude (AP AVG) is also available, as an average of the 8 geomagnetic planetary amplitudes, shown in the right panel of Fig. <ref>. This parameter has integer values from 0 to 280, with only 102 observations with a value greater than 100. As a first step for a more complete study, we used naive one-step time-series forecasting to predict the value of the parameters already mentioned, F10.7 OBS and AP AVG. The last 365 time steps (one year) are used as a test set to evaluate a very simple naive forecasting method, fitted on all the remaining observations. This forecasting strategy simply takes the previous period and applies it to the current one. The walk-forward validation method is used to measure the performance of the model, where we applied one separate one-step forecast to each of the test observations. The true data was then added to the training set for the next forecast. We used the Mean Absolute Percentage Error regression loss (MAPE) to compare our results with the real test set. This metric is defined in the scikit-learn documentation[https://scikit-learn.org/stable/modules/model_evaluation.html#mean-absolute-percentage-error] as: MAPE = (1/n)∑_i=0^n-1 |y_i-ŷ_i|/max(ϵ, |y_i|) , where y and ŷ are the real and predicted values, respectively, n is the number of samples, and ϵ is an arbitrarily small positive number to avoid undefined results when y is zero. Our results are presented in Fig. <ref>, where we can notice a good performance for F10.7 OBS, with a MAPE of 0.025% and a relative error less than 0.15. A weak performance is identified for AP AVG, with a MAPE of 0.7%, which is expected given the chaotic behavior of this parameter, as shown in the right panel of Fig. <ref>. Next, we tried to develop deep learning models to make one-week forecasts. For this purpose, we first attempted to use a simple naive method, and then applied a Convolutional Neural Network (CNN). In the whole weather data, we have 23406 days, giving 3343 full weeks.
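Before turning to the weekly forecasts, here is a minimal sketch of the one-step naive forecast with walk-forward validation and the scikit-learn MAPE described above. This is a sketch only: reading the CelesTrak space-weather file is not shown, and the synthetic series at the end is a placeholder standing in for F10.7 OBS or AP AVG.

```python
import numpy as np
from sklearn.metrics import mean_absolute_percentage_error

def naive_walk_forward(series, n_test=365):
    # naive one-step forecast: repeat the last known observation;
    # walk-forward validation: append the true value before the next forecast
    history = list(series[:-n_test])
    y_true, y_pred = [], []
    for actual in series[-n_test:]:
        y_pred.append(history[-1])
        y_true.append(actual)
        history.append(actual)
    return mean_absolute_percentage_error(y_true, y_pred)

# placeholder series standing in for the daily F10.7 OBS values
rng = np.random.default_rng(1)
series = 100 + 10 * np.sin(np.arange(5000) / 200.0) + rng.normal(0.0, 1.0, 5000)
print("MAPE:", naive_walk_forward(series))
```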
We split the data into 3008 weeks as a train set and 335 weeks as a test set. Here, we also used the walk-forward validation method to evaluate the models, where the model is used to predict one week and the real data of this week is then added to the training set. The process is repeated for all the weeks in the test set. Our results are presented in Fig. <ref>, where we can notice a weak performance, with MAPEs of 0.231% and 1.745% and relative errors less than 0.3 and 6 for the selected parameters, F10.7 OBS and AP AVG, respectively. To improve our results, we used a more sophisticated CNN method. CNN was originally created for image data <cit.>; however, several CNN models can be adapted to time-series prediction tasks. In this work, we used a CNN model with two convolutional layers with the Rectified Linear Unit (ReLU) activation function, defined to return the positive part of the input. In each layer, a convolutional operation reads the input week with a kernel size of 3, and the process is performed 32 times (32 filters); then a pooling layer is added to select the maximum value over a window of size 2. The fully connected layer that interprets the features is then increased to 300 nodes. We fitted the model by exposing it 100 times to the whole training dataset (100 epochs). The weights of the model are updated every 12 samples (batch size of 12). For more details about our process, we refer the reader to <cit.>. Our results are presented in Fig. <ref>. We notice that the CNN can considerably reduce the MAPE for the solar flux (F10.7 OBS) to 0.041%, with a relative error less than 0.1. However, the simple naive model provides very similar performance to the CNN model concerning the planetary amplitude (AP AVG). § CONCLUSION In this work, we introduced a Machine Learning approach using time-series forecasting models to predict the daily variations in the solar activity and in the planetary amplitude. This could be the first step of a more complete dynamical study to reduce the uncertainty of the drag due to the estimation of the atmospheric density. We used the historical data of solar activity from 1957 to the present day from the EOP and Space Weather Data. A good agreement was found between the predicted and real data. We applied a simple naive method and a Convolutional Neural Network (CNN). The walk-forward validation method is used in order to make the best possible forecast at each time step; in this method, we train the model as new data becomes available. A relative error less than 0.15 with respect to the analytical model of the solar flux is reached when making a single one-step prediction, while the same model provides a relative error less than 0.3 when making a single one-week prediction. However, the CNN model reduced this error to less than 0.1. A more complete study is necessary to reduce the error in the prediction of the geomagnetic indices. § ACKNOWLEDGMENTS The authors would like to thank the Institutional Training program (PCI/INPE), which supported this work via the grant (444327/2018-5), São Paulo Research Foundation (Fapesp) project number 2016/24561-0, and Coordination for the Improvement of Higher Education Personnel (CAPES) project PrInt CAPES-INPE.
[Cowardin and Miller(2021)]nasa_2021 H. Cowardin, R. Miller, Orbital debris quarterly news 25-3, Orbital Debris Quarterly News 25 (2021).
[Kessler(1991)]Kessler_1991 D. J. Kessler, Collisional cascading: The limits of population growth in low earth orbit, Advances in Space Research 11 (1991) 63–66.
[Vallado(2007)]vallado_2007 D. A. Vallado, Fundamentals of Astrodynamics and Applications, 2007.
[Dell'Elce et al.(2015)Dell'Elce, Arnst, and Kerschen]dellelce_2015 L. Dell'Elce, M. Arnst, G. Kerschen, Probabilistic Assessment of the Lifetime of Low-Earth-Orbit Spacecraft: Uncertainty Characterization, Journal of Guidance Control Dynamics 38 (2015) 900–912.
[Mostaza Prieto et al.(2014)Mostaza Prieto, Graziano, and Roberts]prieto_2014 D. Mostaza Prieto, B. P. Graziano, P. C. Roberts, Spacecraft drag modelling, Progress in Aerospace Sciences 64 (2014) 56–65.
[Vallado and Finkleman(2014)]vallado_2014 D. A. Vallado, D. Finkleman, A critical assessment of satellite drag and atmospheric density modeling, Acta Astronautica 95 (2014) 141–165.
[Zhejun and Hu(2017)]zhejun_2017 L. Zhejun, W. Hu, Estimation of ballistic coefficients of space debris using the ratios between different objects, Chinese Journal of Aeronautics 30 (2017).
[Rafano Carná and Bevilacqua(2019)]rafano_2019 S. F. Rafano Carná, R. Bevilacqua, High fidelity model for the atmospheric re-entry of CubeSats equipped with the Drag De-Orbit Device, Acta Astronautica 156 (2019) 134–156.
[Tewari(2009)]tewari_2009 A. Tewari, Entry Trajectory Model with Thermomechanical Breakup, Journal of Spacecraft and Rockets 46 (2009) 299–306.
[Schatten and Sofia(1987)]schatten_1987 K. H. Schatten, S. Sofia, Forecast of an exceptionally large even-numbered solar cycle, Geophysical Research Letters 14 (1987) 632–635.
[Vallado and Kelso(2005)]vallado_2005 D. A. Vallado, T. S. Kelso, Using EOP and Space Weather Data For Satellite Operations, 2005.
[Lean et al.(2009)Lean, Picone, and Emmert]lean_2009 J. L. Lean, J. M. Picone, J. T. Emmert, Quantitative forecasting of near-term solar activity and upper atmospheric density, Journal of Geophysical Research: Space Physics 114 (2009).
[Henney et al.(2012)Henney, Toussaint, White, and Arge]henney_2012 C. J. Henney, W. A. Toussaint, S. M. White, C. N. Arge, Forecasting f10.7 with solar magnetic flux transport modeling, Space Weather 10 (2012).
[Warren et al.(2017)Warren, Emmert, and Crump]warren_2017 H. P. Warren, J. T. Emmert, N. A. Crump, Linear forecasting of the f10.7 proxy for solar activity, Space Weather 15 (2017) 1039–1051.
[Pugliatti and Topputo(2020)]pugliatti_2020 M. Pugliatti, F. Topputo, Small-body shape recognition with convolutional neural network and comparison with explicit features based methods, AAS/AIAA Astrodynamics Specialist Conference (2020).
[Pasqualetto Cassinis et al.(2019)Pasqualetto Cassinis, Fonod, and Gill]cassinis_2019 L. Pasqualetto Cassinis, R. Fonod, E. Gill, Review of the robustness and applicability of monocular pose estimation systems for relative navigation with an uncooperative spacecraft, Progress in Aerospace Sciences 110 (2019) 100548.
[Song et al.(2022)Song, Rondao, and Aouf]jianing_2022 J. Song, D. Rondao, N. Aouf, Deep learning-based spacecraft relative navigation methods: A survey, Acta Astronautica 191 (2022) 22–40.
[Brownlee(2020)]brownlee_2020 J. Brownlee, Deep Learning for Time Series Forecasting, Machine Learning Mastery, San Juan, PR, USA, 2020.
http://arxiv.org/abs/2307.05100v1
20230711082053
Generative Contrastive Graph Learning for Recommendation
[ "Yonghui Yang", "Zhengwei Wu", "Le Wu", "Kun Zhang", "Richang Hong", "Zhiqiang Zhang", "Jun Zhou", "Meng Wang" ]
cs.IR
[ "cs.IR" ]
Key Laboratory of Knowledge Engineering with Big Data, Hefei University of Technology [email protected] Ant Group [email protected] [1] Key Laboratory of Knowledge Engineering with Big Data, Hefei University of Technology [email protected] This work was done while Yonghui Yang was an intern at Ant Group. Le Wu is the corresponding author. Key Laboratory of Knowledge Engineering with Big Data, Hefei University of Technology [email protected] Key Laboratory of Knowledge Engineering with Big Data, Hefei University of Technology [email protected] Ant Group [email protected] Ant Group [email protected] Key Laboratory of Knowledge Engineering with Big Data, Hefei University of Technology Institute of Artificial Intelligence, Hefei Comprehensive National Science Center [email protected] By treating users' interactions as a user-item graph, graph learning models have been widely deployed in Collaborative Filtering (CF) based recommendation. Recently, researchers have introduced Graph Contrastive Learning (GCL) techniques into CF to alleviate the sparse supervision issue: such methods first construct contrastive views by data augmentation and then provide self-supervised signals by maximizing the mutual information between the contrastive views. Despite their effectiveness, we argue that current GCL-based recommendation models are still limited by current data augmentation techniques, whether structure augmentation or feature augmentation. First, structure augmentation randomly drops out nodes or edges, which can easily destroy the intrinsic nature of the user-item graph. Second, feature augmentation imposes the same-scale noise on each node, which neglects the unique characteristics of the nodes on the graph. To tackle the above limitations, we propose a novel VGCL framework for recommendation. Specifically, we leverage variational graph reconstruction to estimate a Gaussian distribution for each node, then generate multiple contrastive views through multiple samplings from the estimated distributions, which builds a bridge between generative and contrastive learning. The generated contrastive views can well reconstruct the input graph without information distortion. Besides, the estimated variances are tailored to each node, which regulates the scale of the contrastive loss for each node during optimization. Considering the similarity of the estimated distributions, we propose a cluster-aware twofold contrastive learning: a node-level loss to encourage consistency of a node's contrastive views and a cluster-level loss to encourage consistency of nodes in a cluster. Finally, extensive experimental results on three public datasets clearly demonstrate the effectiveness of the proposed model. 
<ccs2012> <concept> <concept_id>10002951.10003227.10003351.10003269</concept_id> <concept_desc>Information systems Collaborative filtering</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10002951.10003317.10003347.10003350</concept_id> <concept_desc>Information systems Recommender systems</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Information systems Collaborative filtering [500]Information systems Recommender systems Generative-Contrastive Graph Learning for Recommendation Meng Wang ======================================================== § INTRODUCTION CF-based recommendation relies on the observed user-item interactions to learn user and item embeddings for personalized preference prediction, and has been pervasive in real-world applications <cit.>. Early works leverage the matrix factorization technique to obtain user and item embeddings, and then compute users' preferences by inner product <cit.> or neural networks <cit.>. As users' interactions can be naturally formulated as a user-item graph, borrowing the success of Graph Neural Networks (GNNs), graph-based CF models have been widely studied with superior performances <cit.>. These models iteratively propagate the neighborhood information for embedding updates, such that the higher-order collaborative signals can be incorporated for better user and item embedding learning. Despite the effectiveness, graph-based CF models suffer from the sparse supervision issue for model learning. As an alternative, self-supervised learning leverages the input data itself as the supervision signal and has attracted many researchers <cit.>. Among all self-supervised learning, contrastive learning is a popular paradigm that constructs data augmentations to teach the model to compare similar data pairs, and has shown competitive performance in computer vision, natural language processing, graph mining, and so on <cit.>. Some recent studies have introduced contrastive learning in graph-based CF  <cit.>. In addition to the supervised recommendation task, GCL-based models first construct multiple contrastive views through data augmentation, and then maximize the mutual information to encourage the consistency of different views. Existing GCL-based CF methods can be classified into two categories: structure augmentation and feature augmentation. Specifically, structure augmentation randomly dropout graph nodes or edges to obtain subgraph structure, and then feeds the augmented graphs into an encoder for contrastive representations <cit.>. Feature augmentation adds random noises to node embeddings as contrastive views <cit.>. These GCL-based CF models learn self-supervised signals based on data augmentation and significantly improve recommendation performances. Although data augmentation is the key to the performance of GCL-based CF models, we argue that current solutions are still limited by current data augmentation strategies, either structure augmentation or feature augmentation. Firstly, structure augmentation randomly dropout nodes or edges, which is easy to destroy the intrinsic nature of the input graph. The reason is that all nodes are connected on the graph and don't satisfy the IID assumption. Secondly, feature augmentation adds the same scale noise to each node, which neglects the unique characteristics of nodes on the graph. 
In real-world recommender systems, different users (items) have different characteristics, and the data augmentation techniques should be tailored to each user. E.g., some users have more item links in the user-item graph, which contains more supervised signals compared to users with only very few links. How to better exploit the user-item graph structure to design more sophisticated contrastive view construction techniques is still open. In this paper, we exploit the potential of the generative model to facilitate contrastive view generation without data augmentation. Specifically, we propose a   framework for recommendation. Instead of data augmentation, we leverage variational graph inference <cit.> to estimate a Gaussian distribution of each node, then generate multiple contrastive views through multiple samplings from the estimated distributions. As such, we build a bridge between the generative and contrastive learning models for recommendation. The generated contrastive views can well reconstruct the input graph without information distortion. Besides, the estimated variances are tailored to each node, which can adaptively regulate the scale of contrastive loss of each node for optimization. We consider that similar nodes are closer in the representation space, and then propose cluster-aware contrastive learning with twofold contrastive objectives. The first one is a node-level contrastive loss that encourages the consistency of each node's multiple views. The second one is a cluster-level contrastive loss that encourages the consistency of different nodes in a cluster, with the cluster learned from the estimated distributions of nodes. The major contributions of this paper are summarized as follows: * We introduce a novel generative-contrastive graph learning paradigm from the perspective of better contrastive view construction, and propose a novel   framework for recommendation. * We leverage variational graph reconstruction to generate contrastive views, and a design cluster-aware twofold contrastive learning module, such that the self-supervised signals can be better mined at different scales for GCL-based recommendation. * Extensive experiments on three public datasets clearly show the effectiveness of the proposed framework, our   consistently outperforms all baselines. § PRELIMINARIES §.§ Graph based Collaborative Filtering In fundamental collaborative filtering, there are two kinds of entities: a userset U (|U|=M) and an itemset V (|V|=N). Considering the recommendation scenarios with implicit feedback, we use matrix 𝐑∈ℝ^M× N to describe user-item interactions, where each element 𝐫_ai=1 if user a interacted with item i, otherwise 𝐫_ai=0. Graph-based CF methods <cit.> formulate the available data as a user-item bipartite graph 𝒢={U ∪ V, 𝐀}, where U ∪ V denotes the set of nodes, and 𝐀 is the adjacent matrix defined as follows: 𝐀=[[ 0^M×M 𝐑; 𝐑^T 0^N×N ]]. Given the initialized node embeddings 𝐄^0, graph-based CF methods update node embeddings through multiple graph convolutions: 𝐄^l=𝐃^-1/2𝐀𝐃^-1/2 𝐄^l-1, where 𝐃 is the degree matrix of graph 𝒢, 𝐄^l and 𝐄^l-1 denote node embeddings in l^th and (l-1)^th graph convolution layer, respectively. When stacking L graph convolution layers, the final node representations can be obtained with a readout operation: 𝐄=Readout(𝐄^0, 𝐄^1, ..., 𝐄^L). 
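As a concrete illustration of the propagation and readout steps above, the following minimal Python sketch builds the symmetrically normalized adjacency matrix from the interaction matrix 𝐑 and propagates the initial embeddings for L layers, using mean pooling over layers as one common choice of readout. The function and variable names are ours, and the readout used by a particular model may differ.

import numpy as np
import scipy.sparse as sp

def propagate(R, E0, L=3):
    """E^l = D^{-1/2} A D^{-1/2} E^{l-1} on the user-item bipartite graph."""
    R = sp.csr_matrix(R)
    A = sp.bmat([[None, R], [R.T, None]], format="csr")      # (M+N) x (M+N) adjacency
    d = np.asarray(A.sum(axis=1)).ravel()
    d_inv_sqrt = np.zeros_like(d)
    d_inv_sqrt[d > 0] = d[d > 0] ** -0.5
    A_hat = sp.diags(d_inv_sqrt) @ A @ sp.diags(d_inv_sqrt)  # symmetric normalization
    layers = [np.asarray(E0, dtype=float)]
    for _ in range(L):
        layers.append(A_hat @ layers[-1])                    # one graph convolution layer
    return np.mean(layers, axis=0)                           # readout: mean over all layers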
The pairwise ranking loss <cit.> is adopted to optimize the model parameters: ℒ_rec=∑_a=0^M-1∑_(i,j)∈ D_a -logσ(r̂_ai-r̂_aj) + λ ||𝐄^0||^2, where σ(·) is the sigmoid activation function and λ is the regularization coefficient. D_a={(i,j)|i∈ R_a∧j∉R_a} denotes the pairwise training data for user a, and R_a represents the item set that user a has interacted with. §.§ Graph Contrastive Learning for Recommendation GCL is usually used as an auxiliary task to complement recommendation with self-supervised signals <cit.>, which leads to multi-task learning: ℒ=ℒ_rec + αℒ_cl, where α is a hyper-parameter that controls the contrastive task weight and ℒ_cl is the typical InfoNCE loss function <cit.>: ℒ_cl = ∑_i ∈ℬ -log exp(𝐞'_i^T𝐞”_i/τ)/∑_j ∈ℬ exp(𝐞'_i^T𝐞”_j/τ), where ℬ denotes a batch of users (items) and τ is the contrastive temperature. For node i, e'_i and e”_i denote the corresponding contrastive representations with L_2 normalization, and similarly for node j. This objective encourages consistency of the contrastive representations for each node. Revisiting GCL-based recommendation models from a data augmentation perspective, there are two popular strategies: structure augmentation <cit.> and feature augmentation <cit.>. As illustrated in the upper part of Figure <ref>, structure augmentation randomly perturbs the graph structure to obtain two augmented views 𝒢', 𝒢”, and then generates contrastive representations as follows: 𝐄' = ℰ(𝒢', 𝐄^0), 𝐄” = ℰ(𝒢”, 𝐄^0), where ℰ(·) denotes a graph encoder. Because the nodes on a graph do not satisfy the IID assumption, random structure perturbation easily destroys the intrinsic nature of the input graph, so GCL cannot be fully exploited for recommendation. The other strategy is feature augmentation <cit.>, which is illustrated in the lower part of Figure <ref>. Feature augmentation adds random noises to the node embeddings and then generates contrastive representations with GNNs: 𝐄' = ℰ(𝐄^0,ϵδ'), 𝐄” = ℰ(𝐄^0, ϵδ”), where δ', δ”∼ U(0,1) are uniform noises and ϵ is the amplitude that controls the noise scale. Although this noise-based augmentation is controllable and constrains the deviation, we argue that a fixed and generic ϵ does not generalize to nodes with unique characteristics. For example, user-item interactions usually follow a long-tail distribution: the head nodes have more supervision signals than the tail nodes, so a small ϵ may be sufficient for the tail nodes but not for the head nodes. These flaws drive us to find a better graph augmentation that maintains the graph information and is adaptive to each node. 
Thus, VAE adopts a variational inference technique and uses an inference model q_ϕ(𝐳|𝐱) to approximate the true posterior distribution p_θ(𝐳|𝐱). Then, VAE is optimized by minimizing the Evidence Lower Bound (ELBO) based objective: ℒ_ELBO=-𝔼_𝐳∼q_ϕ(𝐳|𝐱)[log(p_θ(𝐱|𝐳))] + KL[q_ϕ(𝐳|𝐱)||p(𝐳)], where q_ϕ(𝐳|𝐱) and p_θ(𝐱|𝐳) also denote the encoder and decoder, which are parameterized by neural networks. KL[q_ϕ(𝐳|𝐱)||p(𝐳)] is the Kullback-Leibler divergence between the approximate posterior q_ϕ(𝐳|𝐱) and the prior p(𝐳), which is used to constrain q_ϕ(𝐳|𝐱) to stay close to the prior Gaussian distribution. Graph Inference. Given the observed user-item interaction graph 𝒢={U∪ V, 𝐀} and the initialized node embeddings 𝐄^0, graph inference aims to learn probability distributions 𝐙 that can reconstruct the input graph structure: Â∼ p_θ(𝐀|𝐙). As in VAE, we adopt variational inference q_ϕ(𝐙|𝐀, 𝐄^0)=∏_i=0^M+N-1q_ϕ(𝐳_i|𝐀,𝐄^0) to approximate the true posterior. To be specific, we encode each node i into a multi-variate Gaussian distribution q_ϕ(𝐳_i|𝐀, 𝐄^0) =𝒩(𝐳_i|μ_ϕ(i), diag(σ_ϕ^2(i))), where μ_ϕ(i) and σ_ϕ^2(i) denote the mean and variance of node i's distribution, respectively. To better exploit the high-order user-item graph structure, we adopt GNNs to estimate the parameters of the node distributions: μ = GNN(𝐀,𝐄^0,ϕ_μ), σ = GNN(𝐀,𝐄^0,ϕ_σ), where ϕ_μ and ϕ_σ denote the learnable parameters of graph inference. Following previous research on graph-based collaborative filtering, we select LightGCN <cit.> as the encoder to deploy the above graph inference process. For each node i, the corresponding means are updated as follows: μ_i^l =∑_j∈𝒩_i1/√(|𝒩_i|) √(|𝒩_j|) μ_j^l-1, where μ_i^l and μ_i^l-1 are the corresponding means at the l^th and (l-1)^th graph convolution layers, and 𝒩_i and 𝒩_j denote the connected neighbors of node i and node j. We initialize the means as μ^0=𝐄^0. When stacking L graph convolution layers, we have L+1 outputs [μ^0, μ^1, ..., μ^L]; we then fuse all layers' outputs and compute the means and variances as follows: μ = 1/L∑_l=1^Lμ^l, σ = MLP(μ), where the variances are learned by an MLP that takes the means as input. In practice, we find that a one-layer MLP achieves the best performance, i.e., σ=exp(μ𝐖 + 𝐛), where 𝐖∈ℝ^d× d and 𝐛∈ℝ^d are two learnable parameters. After obtaining the mean and variance of the approximate posterior, we generate the latent representation 𝐳_i by sampling from 𝒩(μ_i, σ^2_i). However, this cannot be directly optimized because the sampling process is non-differentiable. We therefore employ the reparameterization trick instead of the sampling process <cit.>: 𝐳_i=μ_i+σ_i·ε, where ε∼𝒩(0, 𝐈) is standard Gaussian noise. Graph Generation. After estimating the probability distribution of the latent variables 𝐙, the objective of graph generation is to reconstruct the original user-item graph: p(𝐀|𝐙) = ∏_i=0^M+N-1∏_j=0^M+N-1p(𝐀_ij|𝐳_i, 𝐳_j). There are many choices for realizing the graph generation process, such as the inner product, factorization machines, and neural networks. As suggested in <cit.>, we use an inner product to compute the propensity score that node i is connected with node j: p(𝐀_ij=1|𝐳_i,𝐳_j) =σ(𝐳_i^T𝐳_j), where σ(·) is the sigmoid function. §.§ Cluster-aware Contrastive Learning Contrastive View Construction. Given the estimated probability distributions of the latent representations 𝐙∼𝒩(μ,σ^2), we introduce a novel contrastive learning paradigm based on the estimated distributions. 
Different from previous GCL-based recommendation methods <cit.>, we construct contrastive views through multiple samplings from the estimated distribution instead of data augmentation. Specifically, for each node i, we generate contrastive representations 𝐳' and 𝐳” as follows: 𝐳_i' = μ_i + σ_i·ε', 𝐳_i” =μ_i + σ_i·ε”, where ε',ε”∼𝒩(0, 𝐈) are two random normal noise. Compared to structure or feature augmentations, our method is more efficient and effective for contrastive view construction. Firstly, all contrastive representations are sampled from the estimated distributions, which can well reconstruct the input graph without any information distortion. Secondly, the estimated variances are tailored to each node, which can be adaptive to regulate the scale of contrastive loss. Node-level Contrastive Loss. After constructing contrastive views of each node, we maximize the mutual information to provide self-supervised signals to improve recommendation performance. Considering that similar nodes are closer in the representation, we propose cluster-aware twofold contrastive objectives for optimization: a node-level contrastive loss and a cluster-level contrastive loss. Among them, node-level contrastive loss encourages consistency of contrastive views for each node, and cluster-level contrastive loss encourages consistency of contrastive views of nodes in a cluster. The objective of node-level contrastive learning is ℒ_N=ℒ_N^U+ℒ_N^V, where ℒ_N^U and ℒ_N^V denote user side and item side losses: ℒ_N^U = ∑_a ∈ℬ_u -log exp(𝐳'_a^T𝐳”_a/τ_1)/∑_b ∈ℬ_u exp(𝐳'_a^T𝐳”_b/τ_1), ℒ_N^I = ∑_i ∈ℬ_i -log exp(𝐳'_i^T𝐳”_i/τ_1)/∑_j ∈ℬ_i exp(𝐳'_i^T𝐳”_j/τ_1), where τ_1 is the contrastive temperature, ℬ_u and ℬ_i denote users and items in a batch training data. Cluster-level Contrastive Loss. Considering the similarity of the estimated distributions of nodes, we design cluster-level contrastive loss to further distinguish the positive and negative contrastive pairs in batch training data. Overall, our aim is to maximize the consistency of node pairs with the same cluster and minimize the consistency of node pairs with different clusters. Suppose there are K_u user prototypes 𝐂^u ∈ℝ^d × K_u and K_i item cluster prototypes 𝐂^i ∈ℝ^d × K_i, we use p(c_k^u|z_a) to denote the conditional probability that user a belongs to k^th user cluster, and p(c_h^i|z_i) denote the conditional probability that item i belongs to h^th item cluster. Given the estimated distributions as input, we implement the clustering process by the K-Means algorithm <cit.>. Then, we compute the probability that two users (items) are assigned to the same prototype: p(a,b) = ∑_k=0^K_u-1p(𝐜_k^u|𝐳_a)p(𝐜_k^u|𝐳_b), p(i,j) = ∑_h=0^K_i-1p(𝐜_h^i|𝐳_i)p(𝐜_h^i|𝐳_j), where p(a,b) denote the probability that user a and user b belong to the same cluster, and p(i,j) denote the probability that item i and item j belong to the same cluster. Next, we present the cluster-level contrastive loss ℒ_C=ℒ_C^U+ℒ_C^I, where ℒ_C^U and ℒ_C^I denote user side and item side losses: ℒ_C^U =∑_a ∈ℬ_u -1/SP(a)log(∑_b ∈ℬ_u, b!=ap(a,b)exp(𝐳'_a^T𝐳”_b/τ_2)/∑_b ∈ℬ_u,b!=aexp(𝐳'_a^T𝐳”_b/τ_2)), ℒ_C^I = ∑_i ∈ℬ_i-1/SP(i) log(∑_j ∈ℬ_i,j!=ip(i,j)exp(𝐳'_i^T𝐳”_j/τ_2)/∑_j ∈ℬ_i,j!=iexp(𝐳'_i^T𝐳”_j/τ_2)), where SP(a)=∑_b ∈ℬ_u, b!=a p(a,b) and SP(i)=∑_j ∈ℬ_i, j!=i p(i,j), τ_2 is the temperature to control the mining scale of hard negatives. 
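For concreteness, the view construction by reparameterized sampling and the node-level InfoNCE objective can be sketched in a few lines of NumPy. This is a simplified illustration (the actual model is trained in TensorFlow on user and item batches separately, and the losses are summed rather than averaged); the cluster-level term additionally weights the cross terms by the probability that two nodes share a prototype, which is omitted here.

import numpy as np

def node_level_infonce(mu, sigma, tau=0.2, seed=0):
    """Sample two contrastive views z', z'' = mu + sigma * eps and compute
    the node-level InfoNCE loss over a batch of nodes."""
    rng = np.random.default_rng(seed)
    z1 = mu + sigma * rng.standard_normal(mu.shape)      # view z'
    z2 = mu + sigma * rng.standard_normal(mu.shape)      # view z''
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)  # L2 normalization
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                             # pairwise similarities / temperature
    log_prob = np.diag(logits) - np.log(np.exp(logits).sum(axis=1))
    return -log_prob.mean()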
The final contrastive loss is the weighted sum of the node-level loss and the cluster-level contrastive loss: ℒ_cl=ℒ_N+γℒ_C, where γ is the coefficient to balance two level contrastive losses. §.§ Model Optimization. For the variational graph reconstruction part, we optimize the parameters of graph inference and graph generation with ELBO: ℒ_ELBO=-𝔼_𝐙∼q_ϕ(𝐙|𝐀,𝐄^0)[log(p_θ(𝐀|𝐙))] + KL[q_ϕ(𝐙|𝐀,𝐄^0)||p(𝐙)]. Among them, the first term is the reconstruction error between the original graph and the generated graph. We employ a pairwise learning strategy to minimize the reconstruction error: 𝔼_𝐙∼q_ϕ(𝐙|𝐀,𝐄^0)[log(p_θ(𝐀|𝐙))]= ∑_a=0^M-1∑_(i,j)∈D_a-logσ(r̂_ai-r̂_aj)), D_a={(i,j)|i∈ R_a∧j∉R_a} denotes the pairwise training data for user a. R_a represents the item set that user a has interacted. Overall, we optimize the proposed   with a multi-task learning framework: min ℒ=ℒ_ELBO + αℒ_cl + λ||𝐄^0||^2, where α is the balance parameter of contrastive loss, and λ is the regularization coefficient. After the model training process, we use Eq.(<ref>) to predict the unknown preferences for the recommendation. §.§ Model Analysis Space Complexity. As shown in Algorithm 1, the model parameters are composed of two parts: node embeddings 𝐄^0 and MLP parameters 𝐖, 𝐛. Compared to traditional embedding-based collaborative filtering, the additional parameters only have 𝐖, 𝐛 which are shared among all nodes. So the additional storage space is very small and can be neglected. Time Complexity. We compare the time complexity of   with other GCL-based recommendation methods based on data augmentation. Let |E| denote the edge number of the graph, d be the embedding size, and S denote the average neighbor number. For the graph convolution part,   costs 𝒪(2|E|dS), where 2|E| denotes the number of non-zero elements on the adjacent matrix. However, SGL and SimGCL need to repeat graph convolution three times, which generates main embeddings for recommendation and two auxiliary embeddings for contrastive learning. Therefore, SGL and SimGCL all cost 𝒪(6|E|dS) while   only need 𝒪(2|E|dS). For the contrastive learning part,   additionally has a clustering process, we implement the K-means clustering algorithm with Faiss-GPU [https://faiss.ai/], and the time cost can be neglected compared to model learning in practice. Therefore,   is more time-efficient than current GCL-based recommendation methods based on data augmentation. § EXPERIMENTS §.§ Experimental Settings §.§.§ Datasets To compare the recommendation performance of our   with other state-of-the-art models, we select three benchmarks to conduct the empirical analysis: Douban-Book <cit.>, Dianping <cit.> and Movielens-25M <cit.>. For Movielens-25M, we convert ratings equal to 5 as positive feedback, and other ratings as negative feedback. We filter users with less than 10 interactions for all datasets, and randomly sample 80% interactions as training data, and the remaining 20% as test data. The statistics of three datasets are summarized in Table <ref>. §.§.§ Baselines and Evaluation Metrics We compare our model with the following baselines, including matrix factorization based method: BPR-MF <cit.>, graph based method: LightGCN <cit.>, VAE based methods: Multi-VAE <cit.>, CVGA <cit.>, and graph contrastive learning based methods: SGL <cit.>, NCL <cit.>, SimGCL <cit.>. We employ two widely used metrics: Recall@N and NDCG@N to evaluate all recommendation models. 
Specifically, Recall@N measures the percentage of recalled items on the Top-N ranking list, while NDCG@N further assigns higher scores to the top-ranked items. To avoid selection bias in the test stage, we use the full-ranking strategy <cit.> that views all non-interacted items as candidates. All metrics are reported with average values with 5 times repeated experiments. §.§.§ Parameter Settings We implement our   model and all baselines with Tensorflow[https://www.tensorflow.org]. We initialize all models parameter with a Gaussian distribution with a mean value of 0 and a standard variance of 0.01, embedding size is fixed to 64. We use Adam as the optimizer for model optimization, and the learning rate is 0.001. The batch size is 2048 for the Douban-Book and Dianping datasets and 4096 for the Movielens-25M dataset. For our   model, we turn the contrastive temperature τ in [0.10, 0.25], contrastive regularization coefficient λ in [0.01, 0.05, 0.1, 0.2, 0.5, 1.0], and clustering number k_1, k_2 in [100, 1000]. Besides, we carefully search the best parameter of γ, and find   achieves the best performance when γ=0.4 on Douban-Book, γ=0.5 on Dianping dataset, and γ=1.0 on Movielens-25M dataset. As we employ the pairwise learning strategy for graph reconstruction, we randomly select one unobserved item as a candidate negative sample to compose triple data for model training. For all baselines, we search the parameters carefully for fair comparisons. We repeat all experiments 5 times and report the average results. §.§ Overall Performance Comparisons As shown in Table <ref>, we compare our model with other baselines on three datasets. We have the following observations: * Our proposed   consistently outperforms all baselines under different settings. Specifically,   improves LightGCN w.r.t NDCG@20 by 28.17%, 14.70% and 8.61% on Douban-Book, Dianping and Movielens-25M dataset, respectively. Compared to the strongest baseline (SimGCL),   also achieves better performance, e.g., about 6.36% performance improvement of NDCG@20 on the Douban-Book dataset. Besides, we find that   achieves higher improvements on the small-length ranking task, which is more suitable for real-world recommendation scenarios. Extensive empirical studies verify the effectiveness of the proposed  , which benefits from combining the strength of generative and contrastive graph learning for recommendation. * Graph-based methods achieve better performance than their counterparts, which shows the superiority that capturing users' preferences by modeling high-order user-item graph structure. To be specific, LightGCN always outperforms BPR and CVGA consistently outperforms Multi-VAE, which proves that graph learning can effectively capture the high-order user-item interaction signals to improve recommendation performance, whether in embedding-based or VAE-based recommendation methods. * All GCL-based methods (SGL, NCL, SimGCL) significantly improve LightGCN on three datasets. It verifies the effectiveness of incorporating self-supervised learning into collaborative filtering. SimGCL achieves the best performance among these baselines, demonstrating that feature augmentation is more suitable for collaborative filtering than structure augmentation, which can maintain sufficient invariants of the original graph. It's worth noting that, our method also can be regarded as feature augmentation, but we rely on multiple samplings from the estimated distribution and the scales of augmentations are adaptive to different nodes. 
Therefore, VGCL achieves better performance than SimGCL. §.§ Ablation Study To examine the effectiveness of each component of the proposed VGCL, we conduct an ablation study on the three datasets. As shown in Table <ref>, we compare VGCL and its corresponding variants on Top-20 recommendation performance. VGCL-w/o C denotes removing the cluster-level contrastive loss from VGCL, keeping only the general node-level contrastive loss. VGCL-w/o V denotes removing the variational graph reconstruction part of VGCL, in which case we use the same feature augmentation as SimGCL to generate contrastive views. From Table <ref>, we observe that VGCL-w/o C consistently improves over SimGCL on the three datasets, which verifies that the proposed variational graph reconstruction module can provide better contrastive views for contrastive learning. Besides, VGCL-w/o V also shows better performance than SimGCL, which demonstrates the effectiveness of cluster-aware contrastive pair sampling for contrastive learning. Finally, VGCL consistently outperforms the two variants, demonstrating the effectiveness of combining the variational graph reconstruction and the cluster-aware sampling strategy. Based on the above analysis, we can conclude that variational graph reconstruction provides better contrastive views than simple data augmentation, and that cluster-aware sampling is better than random sampling for contrastive learning. All of the proposed modules are beneficial to GCL-based recommendation. §.§ Investigation of the Estimated Distribution As introduced in the methodology, our proposed VGCL can adaptively learn variances for different nodes. To investigate the effect of personalized variances, we conduct comparisons across different user groups. Specifically, we first split all users into 4 groups according to their interactions, and then analyze the recommendation performance for the different user groups. Figure <ref> illustrates the NDCG@20 values of the various groups on the Douban-Book and Dianping datasets. We observe that all models show better performance in the denser user groups, which conforms to the intuition of CF. Besides, our proposed VGCL achieves better performance on all user groups, demonstrating that VGCL generalizes to users with different numbers of interactions. Further, we plot the relative improvements of VGCL over SimGCL in Figure <ref>. We find that VGCL achieves a more significant improvement in the denser groups, e.g., an 8.4% improvement in U4 versus a 4.2% improvement in U1 on the Douban-Book dataset. To explore this phenomenon, we compared the standard deviations of the estimated distributions of different users. From the right part of Figure <ref>, we can observe that the inferred standard deviations vary across groups and increase with the group ID. Compared with SimGCL, which sets a fixed ϵ (noise scale) for all users, our method can learn personalized contrastive scales for different users. What's more, VGCL adaptively learns larger variances for users with many interactions, which is important for providing sufficient self-supervised signals to improve recommendation performance. The experimental results clearly demonstrate the effectiveness of our proposed adaptive contrastive objectives. §.§ Hyper-Parameter Sensitivities In this part, we analyze the impact of the hyper-parameters in VGCL. We first explore the effect of the temperature τ, which plays an important role in contrastive learning. Next, we investigate the influence of the number of graph inference layers L. Finally, we study the impact of the clustering prototype numbers K_u, K_i and the contrastive loss weights α and γ. 
Effect of Graph Inference Layer L. To explore the effect of different numbers of graph inference layers, we search the parameter L in the range {1,2,3,4}. As shown in Table <ref>, we compare the experimental results of different graph inference layers on the Douban-Book and Dianping datasets. From Table <ref>, we observe that the recommendation performance first increases and then slightly drops as the number of graph inference layers increases. Specifically, VGCL achieves the best performance with L=2 on the Douban-Book dataset and L=3 on the Dianping dataset. This suggests that shallow graph inference layers cannot capture the graph structure well for node distribution estimation, while too-deep graph inference layers also decrease the estimation quality due to the over-smoothing issue. Effect of Temperature τ. As introduced in previous works, the temperature τ controls the mining scale of hard negatives <cit.>. Specifically, a low temperature will highlight the gradient contributions of hard negatives that are similar to positive nodes. In VGCL, there are two temperatures, τ_1 and τ_2, in the node-level and cluster-level contrastive losses, respectively. As suggested in previous work, we fix the temperature τ_1=0.2 in the node-level contrastive loss and then analyze the impact of τ=τ_2 in the cluster-level contrastive loss. From Figure <ref>(a) and Figure <ref>(b), we have the following observations. First, a too-high or too-low temperature will decrease the recommendation performance of all methods. A too-high temperature weakens the ability to mine hard negative samples, while a too-low temperature will over-emphasize hard negatives, which are usually false negatives. Second, SGL and SimGCL achieve their best performances at temperature τ=0.2, as suggested in the original papers, while VGCL achieves better performance at a smaller temperature, e.g., τ=0.13 on the Douban-Book and τ=0.15 on the Dianping dataset. The reason is that our proposed cluster-aware contrastive learning further encourages the consistency of nodes in a cluster, so a lower temperature helps the model better mine hard negatives. Effect of Prototype Numbers K_u and K_i. To investigate the effect of the prototype numbers, we vary the prototype numbers from zero to several hundred. We illustrate the experimental results in Figure <ref>(c) and Figure <ref>(d). Please note that when K_u=K_i=0, VGCL degenerates to VGCL-w/o C without the cluster-level objective. From this figure, we find that VGCL consistently outperforms VGCL-w/o C, which demonstrates that our proposed cluster-aware twofold contrastive learning strategy effectively improves the recommendation performance. For the Douban-Book dataset, VGCL reaches the best performance when K_u=900 and K_i=300. For the Dianping dataset, VGCL reaches the best performance when K_u=500 and K_i=100. This shows that precise clustering can provide pseudo-labels to distinguish contrastive samples. Effect of Contrastive Loss Weights α and γ. As illustrated in Figure <ref>, we carefully tune the contrastive loss weights α and γ on the Douban-Book dataset. We observe that VGCL achieves the best performance when α=0.2 and γ=0.4 on the Douban-Book dataset. Due to the space limit, we do not present the analysis on the other two datasets; the best parameters are α=0.05, γ=0.5 and α=0.1, γ=1.0 on the Dianping and Movielens-25M datasets, respectively. Besides, the performance first increases and then drops quickly as α and γ increase. 
It indicates that proper contrastive loss weights could effectively improve the sparse supervision issue, however, a too-strong self-supervised loss will lead to model optimization neglecting the recommendation task. § RELATED WORK §.§ Graph based Collaborative Filtering Collaborative filtering is a popular technique widely used in recommender systems. The key is to learn user and item embeddings relying on historical interactions <cit.>. Early works leverage the matrix factorization technique to project users' and items' IDs into latent embeddings, and compute preferences with the inner product or neural networks <cit.>. Recently, borrowing the success of Graph Neural Networks (GNNs) <cit.>, a series of graph-based models have been widely studied on various recommendation scenarios <cit.>. As users' behavior can be naturally formulated as a user-item graph, graph-based CF methods formulate the high-order user-item graph structure on representation learning and achieve great performance improvements <cit.>. NGCF is the first attempt that introduces GNNs to collaborative filtering, which injects high-order collaborative signals for embedding learning <cit.>. LR-GCCF proposes linear residual networks for user-item graph learning, which can effectively alleviate the over-smoothing issue in deep graph neural networks <cit.>. LightGCN is a representative work and proposes a simplified graph convolution layer for CF which only has neighbor aggregation <cit.>. Despite effectiveness, graph-based CF methods also suffer from sparse supervision. In this work, we investigate collaborative filtering with self-supervised learning to tackle the above issues. §.§ Contrastive Learning based Recommendation As one of the popular self-supervised learning paradigms, contrastive learning aims to learn the representational invariants by data augmentation <cit.>. In general, contrastive learning first generates contrastive views from data augmentation, then maximize the mutual information to encourage the consistency of different contrastive views. Recently, some research successfully apply the CL technique to graph representation learning, either local-global scale contrast <cit.> or global-global scale contrast <cit.>. For instance, DGI learns node representations by maximizing the mutual information between the local and global representations <cit.>. GraphCL proposes four random graph augmentation strategies to multiple subgraphs for contrastive learning <cit.>, AutoGCL further proposes an automated GCL method with learnable contrastive views <cit.>. Inspired by these works, some GCL-based CF methods have been proposed <cit.>. BiGI maximizes the local-global mutual information on user-item bipartite graph <cit.>. EGLN proposes to learn the enhanced graph structure and maximize the mutual information maximization with a local-global objective <cit.>. Besides, data augmentation based CL techniques are usually applied to CF, aiming to deal with sparse supervision and noise interaction problems <cit.>. SGL designs three structure graph augmentations to generate contrastive views and improve recommendation accuracy and robustness by maximizing the consistency of different views <cit.>. NCL proposes neighborhood-enriched contrastive learning to improve performance, it uses the correlated structure neighbors and semantic neighbors as contrastive objects <cit.>. SimGCL revisits structure augmentation methods and proposes a simple feature augmentation to enhance GCL-based recommendations <cit.>. 
Despite the effectiveness, we argue that current GCL-based recommendation methods are still limited by data augmentation strategies whatever structure or feature augmentation. First, structure augmentation randomly deletes nodes or edges of the input graph to generate subgraphs for contrastive learning. However, random structure augmentation is easy to destroy the intrinsic nature of the original user-item graph. Besides, feature augmentation adds the same scale noise to all nodes, ignoring the unique characteristics of nodes (such as degree on the graph), thus can't satisfy all nodes. In this work, we propose a novel contrastive paradigm without data augmentation and implement adaptive contrastive loss learning for different nodes. §.§ VAE and Applications on Recommendation Variational Auto-Encoder (VAE) is a generative method widely used in machine learning <cit.>. It assumes that the input data can be generated from variables with some probability distribution. Following, some extensions of VAE are proposed to improve performance from different perspectives <cit.>. CVAE considers complex condition distribution on inference and generation process <cit.>, β-VAE proposes to learn the disentangled representations by adding the loss of KL-term <cit.>, and DVAE reconstructs the input data from its corrupted version to enhance the robustness <cit.>. The basic idea of applying VAEs to the recommendation is to reconstruct the input users' interactions. Mult-VAE proposes that multinomial distribution is suitable for modeling user-item interactions, and parameterizes users by neural networks to enhance the representation ability <cit.>. RecVAE further improves Mult-VAE by introducing a novel composite prior distribution for the latent encoder <cit.>. BiVAE proposes bilateral inference models to estimate the user-item distribution and item-user distribution <cit.>. CVGA combines GNNs and VAE and proposes a novel collaborative graph auto-encoder recommendation method, which reconstructs user-item bipartite graph using variance inference <cit.>. Besides, some works attempt to leverage VAEs for sequential recommendation  <cit.> and cross-domain recommendation <cit.>. Different from the above VAE-based recommendation models, our   introduces the variational inference technique to generate multiple contrastive views for GCL-based recommendation, which build a bridge between generative and contrastive learning models for recommendation. § CONCLUSION In this work, we investigate GCL-based recommendation from the perspective of better contrastive view construction, and propose a novel   framework. Instead of data augmentation, we leverage the variational graph reconstruction technique to generate contrastive views to serve contrastive learning. Specifically, we first estimate each node's probability distribution by graph variational inference, then generate contrastive views with multiple samplings from the estimated distribution. As such, we build a bridge between the generative and contrastive learning models for recommendation. The advantages have twofold. First, the generated contrastive representations can well reconstruct the original graph without information distortion. Second, the estimated variances vary from different nodes, which can adaptively regulate the scale of contrastive loss for each node. 
Furthermore, considering the similarity of the estimated distributions of nodes, we propose a cluster-aware twofold contrastive learning, a node-level to encourage consistency of a node's contrastive views and a cluster-level to encourage consistency of nodes in a cluster. Empirical studies on three public datasets clearly show the effectiveness of the proposed framework. § ACKNOWLEDGEMENTS This work was supported in part by grants from the National Key Research and Development Program of China (Grant No. 2021ZD0111802), the National Natural Science Foundation of China (Grant No. 72188101, 61932009, 61972125, U19A2079, 62006066, U22A2094), Major Project of Anhui Province (Grant No. 202203a05020011), and the CCF-AFSG Research Fund (Grant No. CCF-AFSG RF20210006). ACM-Reference-Format
http://arxiv.org/abs/2307.03959v1
20230708115509
Understanding the power-law nature of participation in community sports organizations
[ "Jia Yu", "Mengjun Ding", "Weiqiang Sun", "Weisheng Hu", "Huiru Wang" ]
cs.SI
[ "cs.SI", "physics.soc-ph" ]
Understanding the power-law nature of participation in community sports organizations Jia Yu, Mengjun Ding, Weiqiang Sun,  Senior Member, IEEE, Weisheng Hu,  Member, IEEE, Huiru Wang Manuscript received June, 2023. (Corresponding author: Weiqiang Sun.) Jia Yu, Mengjun Ding, Weiqiang Sun, and Weisheng Hu are with the School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China. Huiru Wang is with the Department of Physical Education, Shanghai Jiaotong University, Shanghai 200240, China. (e-mail: {yujia543, mengjun_ding, sunwq, wshu, wanghr}@sjtu.edu.cn). August 12, 2023 =============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== The improvement of living standards and awareness of chronic diseases have increased the importance of community sports organizations in promoting the physical activity levels of the public. However, limited understanding of human behavior in this context often leads to suboptimal resource utilization. In this study, we analyzed the participation behavior of 2,956 members with a time span of 6 years in a community sports organization. Our study reveals that, at the population level, the participation frequency in activities adheres to a power-law distribution. To understand the underlying mechanisms driving crowd participation, we introduce a novel behavioral model called HFBI (Habit-Formation and Behavioral Inertia), demonstrating a robust fit to the observed power-law distribution. The habit formation mechanism indicates that individuals who are more engaged are more likely to maintain participation, while the behavioral inertia mechanism suggests that individuals' willingness to participate in activities diminishes with their absences from activities. At the individual level, our analysis reveals a burst-quiet participation pattern, with bursts often commencing with incentive activities. We also find a power-law distribution in the intervals between individual participations. Our research offers valuable insights into the complex dynamics of human participation in community sports activity and provides a theoretical foundation to inform intervention design. Furthermore, the flexibility of our model enables its application to other data exhibiting power-law properties, broadening its potential impact beyond the realm of community sports. human behavior, power law, habit formation, behavioral inertia, burst timing, community sports activity. § INTRODUCTION Globalization urbanization, and increased wealth have led to significant lifestyle changes, causing a wide decrease in physical activity. According to the World Health Organization (WHO), inactivity rates can climb as high as 70% in certain countries, primarily due to shifts in transportation habits, heightened reliance on technology, and urbanization <cit.>. Physical inactivity, which has been identified as a global pandemic, is responsible for up to 8% of non-communicable diseases and deaths globally <cit.>. 
Conservatively estimated, physical inactivity cost health-care systems INT$53.8 billion worldwide in 2013 <cit.>. Additionally, if the prevalence of physical inactivity remains unchanged, it is projected that by 2030, there will be around 499.2 million new cases of preventable major NCDs worldwide, resulting in direct health-care costs of INT$ 520 billion. The annual global cost of not taking action on physical inactivity is anticipated to reach approximately $47.6 billion <cit.>. In an effort to improve physical activity participation, community sports organizations have achieved remarkable results in recent years. Many concur that community sport, as a low-threshold physical activity, is a powerful tool for targeting socially vulnerable groups <cit.>. Moreover, community sport has been recognized as a policy area and a social field that goes beyond “just" providing opportunities for groups to participate in sports. It also encompasses functions such as social care and crime reduction <cit.>. Today, being non-profit by nature, community sports organizations face greater challenges, such as competition for limited resources, volunteer availability, and capacity, and the impact of pandemics (such as COVID-19) <cit.>. Understanding the nature of the population participating in community sports is thus pivotal to making the best use of limited resources. The interest in the data-driven exploration of human behavior has been persistent. Very early on, power-law distribution has been found in certain human behaviors, such as the intervals between emails <cit.>, the pattern of phone calls <cit.>, and complex social networks <cit.>. Efforts have been made to understand the principle behind the formation of this power-law distribution in these behaviors <cit.>. Classical models such as the decision-based queuing process <cit.> and preferential attachment <cit.> are proposed to explain the power law distribution observed in the waiting time for processing emails and the degree distribution in complex networks, respectively. Research on community sports organizations is usually conducted from an organizational management perspective, providing high-level guidance for organizational development by quantifying aspects such as resources, program design, diversity, life cycle, and resilience <cit.>. However, very few, if any, models are population-based and consider when, how, and who participates in community-level sports activities <cit.>. In this study, with the data from 2,956 users collected over a span of six years, we discovered a power-law distribution of population participation in community sports activities. To explain this power-low distribution, we proposed the hypothesis of habit formation and behavioral inertia in community sports activity participation. Previous research has indicated that physical activity behavior can be developed through repeated experience of the activity in stable contexts <cit.>. Human behavior does exhibit inertia, as evidenced by the tendency for users to stick with default options <cit.> and purchase habits <cit.>. Our empirical data provides evidence of habit formation and behavioral inertia in community sports participation. It may help to address the question, “What is the typical `shape' of within-person real-world habit growth with repetition over the long-term" identified in the 2019 European Health Psychology Society Synergy Expert Meeting <cit.>. 
Based on these two mechanisms, we designed a behavioral model called HFBI that can robustly fit the power-law distribution of the empirical data. A power-law distribution is also observed in the intervals between participations at the individual level, signifying a burst-quiet pattern of activity participation. With the relevant activity information, we found that bursts tend to be initiated by activities with incentive rewards, suggesting that incentive activities can help call people back for sustained engagement. The main contributions of this article are described as follows. * For the first time, we discovered that the frequency of population participation in community physical activities and the intervals between an individual's participations obey power-law distributions. * We proposed an intuitive model to explain the power-law distribution of population participation in community physical activities, by taking into account habit formation and behavioral inertia. We demonstrated good fitting performance and statistical significance with real-world data. The model may also be used in other domains where power-law distributions with low power-law exponents are observed. * The intervals between an individual's participations exhibit a power-law distribution, with a pattern of bursts followed by periods of inactivity (a burst-quiet pattern). We observed that bursts often start with incentive activities located in the head position. This implies that incentive activities not only attract more participants but also have the potential to call users back from a quiet state to an active state, thereby promoting sustained engagement. The rest of this article is organized as follows. In Section II, we demonstrate the power-law phenomenon of participation frequency in activities at the population level. In Section III, we introduce the proposed HFBI model and present the supporting evidence. In Section IV, we verify the participation patterns at the individual level and the role of incentive activities. In Section V, we present the related work. Finally, we summarize this paper in Section VI. § POWER-LAW DISTRIBUTION OF PARTICIPATION FREQUENCY AT THE POPULATION LEVEL §.§ Data Description The data used in our research was sourced from a university-based community sports platform that we develop and operate, which allows individuals to initiate or participate in sports activities. The initiator of an activity can choose whether or not to provide rewards as incentives for the activity. Over the course of 6 years, from May 2015 to May 2021, our dataset captured 28,714 records of activity participation in 770 activities (including 110 activities with incentives), involving a total of 2,956 individuals. Each record in the dataset contains the participant's ID, activity ID, team ID, and type of activity (whether incentives are provided or not). The activity IDs are consecutive natural numbers starting from 0 and arranged in the order of their occurrence (numbered from 0 to 769). §.§ Fitting the Empirical Data The frequency of user i participating in activities over the entire period is denoted as q_i. For the sequence of activity participation frequencies { q_i}, we assume that frequencies larger than a truncated value q_min are described by the power-law distribution p(q) ∼ q^-γ, q ≥ q_min. In the Kolmogorov-Smirnov (KS) test, p>0.1 (or p>0.05) suggests that the data can be considered to follow a power-law distribution. 
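A minimal sketch of this fitting step, in the spirit of the standard maximum-likelihood approach of Clauset et al., is given below. It returns the MLE exponent (using the common discrete-data approximation) and the KS distance between the empirical and fitted distributions above q_min; the p-value used to accept a candidate q_min is normally obtained with a semi-parametric bootstrap, which is omitted here for brevity (packages such as the Python powerlaw library implement the complete procedure).

import numpy as np

def fit_tail(q, q_min):
    """MLE exponent gamma and KS distance for the tail q >= q_min."""
    tail = np.sort(np.asarray([x for x in q if x >= q_min], dtype=float))
    n = len(tail)
    gamma = 1.0 + n / np.sum(np.log(tail / (q_min - 0.5)))   # discrete-data approximation
    emp_cdf = np.arange(1, n + 1) / n                        # empirical CDF of the tail
    model_cdf = 1.0 - (tail / q_min) ** (1.0 - gamma)        # fitted power-law CDF
    return gamma, np.max(np.abs(emp_cdf - model_cdf))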
We select the smallest value of q that satisfies the KS test with p>0.1 as q_min, and the data above q_min can be plausibly modeled as a power-law distribution. The exponent γ is estimated by maximum likelihood (MLE) <cit.>.
§.§ Power-law Distribution of Participation Frequency
The participation frequency of the population follows a power-law distribution. Fig. <ref> shows the empirical distribution of user participation frequency in activities in a complementary cumulative way to enhance the statistical significance <cit.>. The complementary cumulative function can be represented as F(q)=∑_q^'=q^∞ p(q^'), where p(q) denotes the proportion of individuals who participated in activities q times. A clear straight-line trend can be observed on the double logarithmic axes, indicating a power-law distribution of the data. Kolmogorov-Smirnov (KS) tests and maximum-likelihood (MLE) fits are employed to check whether the empirical distributions obey a power-law distribution and to estimate the related parameters. The result shows that the frequency of population participation in activities is in line with a power-law distribution (p=0.18, q_min=2) with power-law exponent γ=1.76. The cutoff of the tail indicates that fewer individuals participate in an exceptionally large number of activities than a pure power law would predict, a phenomenon commonly observed in real-world systems. Fig. <ref> shows the relationship between the fraction P of total participation contributed by the most active fraction p of the population. The 80/20 rule is evident: the top 20% of the most active users contributed approximately 84% of the total activity participation. Theoretically, the concentration is even more extreme for power-law distributions with γ less than 2; however, the finite number of activities and the tail cutoff bring the ratio close to the classical Pareto law. To demonstrate that the power-law distribution of the participation frequency is not a momentary coincidence, we analyzed the data at each activity node after the platform scale reached 1,000. All samples (287 (88.9%) with q_min=1 and 36 (11.1%) with q_min=2) conformed to a power-law distribution by the KS test, with p-values all greater than 0.1. Fig. <ref> presents the γ for all 323 activity-node samples. The range of γ spans from 1.66 to 1.81 with a mean of 1.72, and it changes slowly with each activity held, first decreasing steadily and then fluctuating upward. A γ less than 2 indicates a significant "heavy tail" phenomenon in the frequency of participation.
§ HFBI - A BEHAVIORAL MODEL BASED ON HABIT FORMATION AND BEHAVIORAL INERTIA
To explore the principle behind the power-law distribution of the participation frequency, we propose a behavioral model named HFBI, which is based on the assumptions of habit formation and behavioral inertia. Intuitively, people who have participated in activities frequently, or who have just participated in an activity, are more likely to participate in subsequent activities. Both assumptions are supported by convincing evidence from our data.
§.§ Evidence for Habit Formation and Behavioral Inertia
To provide evidence for the habit formation and behavioral inertia mechanisms, we performed a statistical analysis of all activities in the dataset. The proportion of people who have participated in q activities and would choose to participate in a new available activity can be represented as prop.(q) = ∑_j=0^N-1 m_q^j / ∑_j=0^N-1 n_q^j.
Here, n_q^j represents the number of individuals who have participated in q activities before a new activity j, m_q^j represents the number of individuals among them who choose to participate in activity j, and N is the total number of activities in the dataset. The denominator is the total number of individuals who have participated in q activities, accumulated over all activities, while the numerator is the number of individuals who choose to continue participating after having participated in q activities. Similarly, the proportion of people who have been away from activities for d sessions and would choose to participate in a new available activity can be represented as prop.(d) = ∑_j=0^N-1 m_d^j / ∑_j=0^N-1 n_d^j. Here, n_d^j represents the number of individuals who have been away from activities for d sessions before activity j, and m_d^j represents the number of individuals among them who choose to participate in activity j. The denominator is the total number of individuals who have been away from activities for d sessions, accumulated over all activities, while the numerator is the number of individuals who choose to participate in an activity after being away for d sessions. Fig. <ref> shows the proportion of people who have participated in q activities and would choose to participate in a new available activity. As shown, the proportion of individuals opting to continue participation increases almost linearly with the number of activities participated in during the early stage. Fig. <ref> illustrates the proportion of people who have been away from activities for d sessions and would choose to participate in a new available activity. As the number of sessions away from activities increases, the proportion of people choosing to return to activities sharply decreases. These observations provide solid evidence for the existence of habit formation and behavioral inertia in community sports participation.
§.§ The HFBI Model
Based on the evidence presented, we propose the HFBI model, which incorporates habit formation and behavioral inertia, to simulate user participation in activities. The experimental results demonstrate that the model can accurately simulate user participation in activities with only four parameters.
§.§.§ Parameter Settings
The HFBI model only requires four parameters: n, c, m, and α. n represents the number of activities held, i.e., the model's iteration count. c and m denote the numbers of new and existing users participating in an activity (i.e., added in one round of iteration), respectively. α is a parameter that adjusts the ratio of habit formation to behavioral inertia to achieve a better fit to the empirical data. The parameters c and m are derived from the mean values of the dataset. Note that since the parameters are natural integers, the values of c and m are rounded. To keep the scale of the population consistent, n is calculated from the population size, c, and m. Additionally, we initiate the iteration process with m pre-existing users to enable the selection of existing users at the start of the iteration.
§.§.§ Model Description
The model is characterized by adding users in a sequential and batched manner, which aligns with many real-life situations. Initially, we make the assumption that for every activity, there will be c new users and m existing users participating.
For a new available activity and an existing user i, q_i denotes the total number of activities that user i has participated in before, and d_i denotes the interval between the last activity they participated in and the current new activity. User i participating in the activity can be attributed to two mechanisms. (1) User i has a probability of α to participate in the activity due to habit formation, which means the probability of participating is proportional to q_i: q_i/∑_i ∈ I q_i. (2) Additionally, there is a probability of 1-α for user i to participate in the activity due to behavioral inertia, which means the probability is a decreasing function of d_i: (1/d_i)/∑_i ∈ I (1/d_i). Therefore, the total probability of user i participating in the activity is: ϕ_i = α q_i/∑_i ∈ I q_i + (1-α) (1/d_i)/∑_i ∈ I (1/d_i). Here, I is the set of all existing users. The model performs n rounds of iterations, adding c new users and selecting m existing users based on Eq. <ref> in each round. The c new users are added to the existing user pool in each round. The overall process of the model is shown in Algorithm <ref>. Note that the specific form of the decreasing function of d_i is not unique, as different choices can be accommodated by adjusting the parameter α.
§.§.§ Proof of Power-Law Distribution and Exponent in Habit Formation
When only habit formation is considered, that is, ϕ_i = q_i/∑_i ∈ I q_i, the model generates power-law-distributed data with power exponent γ=2+c/m. The proof is similar to that of the Price model <cit.>. In HFBI, for every activity held, there will be c new users and m existing users participating, and the participation probability of an existing user is proportional to the number of activities they have participated in before. Let p_q(n) be the fraction of users that have participated q times when the platform contains n users, which is also the probability distribution of the participation frequency. q_i represents the number of activities participated in by user i. When organizing an activity in which only one user among all existing users will participate, the probability of existing user i participating in the activity is q_i/∑_i q_i = q_i/(n⟨ q⟩) = c q_i/(n(m+c)), where ⟨ q⟩ represents the average number of activities each person participates in, ⟨ q⟩ = n^-1∑_i q_i = (m+c)/c, since each activity adds c users and m+c participations. The number of people who have participated in q activities is n p_q(n). When there is a new activity, the expected number of people who have participated in q activities and will join the new activity is n p_q(n) × m × c q/(n(m+c)) = m c q p_q(n)/(m+c). Then the master equation for the evolution of the participation frequency distribution is (n+c) p_q(n+c) = n p_q(n) + [(q-1) m c/(m+c)] p_q-1(n) - [q m c/(m+c)] p_q(n). The left-hand side of the equation is the expected number of users with q participations after one more activity is added. The first term on the right-hand side represents the number of users who previously had q participations. The second term is the expected number of users with participation frequency q-1 who join the activity and thereby reach q participations, while the third term is the expected number of users with participation frequency q who join the activity and therefore no longer have q participations. Eq. <ref> is applicable for all cases where q ≠ 1. When q = 1, instead of the second term in Eq. <ref>, the right-hand side gains the c new users whose participation frequency becomes 1, and the equation for q=1 is (n+c) p_1(n+c) = n p_1(n) + c - [m c/(m+c)] p_1(n).
When considering the limit of large population size n →∞ and calculating the asymptotic form of the participation frequency distribution in this limit, we take the limit n →∞ and use the shorthand p_q=p_q(∞). Eqs. <ref> and <ref> become p_q = [(q-1) m c/(c(m+c)+q m c)] p_q-1 for q>1, and p_1 = (m+c)/(2m+c) for q = 1. Let k = c/m; then p_1 = (1+k)/(2+k) for q = 1, and p_q = [(q-1)/(q+k+1)] p_q-1 for q>1. With Eqs. <ref> and <ref>, we can iteratively determine p_q for all values of q, beginning with our initial solution for p_1. The results are as follows: p_1 = (1+k)/(2+k); p_2 = [1/(2+k+1)] × (1+k)/(2+k); p_3 = [2/(3+k+1)] × [1/(2+k+1)] × (1+k)/(2+k); p_4 = [3/(4+k+1)] × [2/(3+k+1)] × [1/(2+k+1)] × (1+k)/(2+k); and so on. The expression for general q can be successively derived as p_q = [(q-1)(q-2) ⋯ 1 × (1+k)] / [(q+k+1)(q-1+k+1) ⋯ (2+k+1)(2+k)]. It is known that the gamma function is Γ(x)=∫_0^∞ t^(x-1) e^(-t) dt, and it has the property that Γ(x+1)=x Γ(x) for x > 0. Applying this equation iteratively, we find that Γ(x+n)/Γ(x)=(x+n-1)(x+n-2) ⋯ x. Using this result, we can rewrite Eq. <ref> as p_q = (1+k) Γ(q) Γ(2+k) / [Γ(1) Γ(2+k+q)]. By further employing Euler's formula B(x, y)=Γ(x) Γ(y)/Γ(x+y), Eq. <ref> can be simplified to p_q = [(1+k)/Γ(1)] B(q, 2+k). Using Stirling's approximation for the gamma function, the beta function B(x, y) falls off as a power law in x for large values of x, with exponent y <cit.>, B(x, y) ≃ x^-y Γ(y). Applying this to Eq. <ref>, for large values of q the distribution of participation frequency goes as p_q ∼ q^-γ = q^-(2+k) = q^-(2+c/m), where the exponent γ is γ=2+k=2+c/m. Therefore, by only considering habit formation, represented by ϕ_i=q_i/∑_i ∈ I q_i, the model is able to generate data with a power-law distribution, where the power exponent is given by γ=2+c/m.
§.§.§ Experimental Results on the Real Dataset
We conducted experiments on the real data, and the results show that HFBI is capable of generating matching data with only four parameters derived from the mean values of the empirical data, while also exhibiting good statistical significance. The Kolmogorov-Smirnov (KS) test is used to assess whether the data generated by the model and the empirical data are drawn from the same distribution; the KS statistic measures the maximum distance between the cumulative distribution functions (CDFs) of the two samples. The null hypothesis is that the two distributions are identical. If p > 0.1, we cannot reject the null hypothesis, which suggests that the data-generating process is plausible. The experiment is first performed on the largest-scale data, that is, the data up to the last activity node. The parameter values for c, m, and n are derived from the mean values of the data and are determined as 4, 33, and 731, respectively. In Fig. <ref>, a comparison is shown between the data generated by HFBI and the real data. It can be seen that the distribution of the simulated data and that of the real data are very close. The model achieves the best fit when α is set to 0.9. An α value strictly between 0 and 1 suggests that the empirical distribution results from the combined effects of both the habit formation and behavioral inertia mechanisms. The habit formation mechanism described by Eq. <ref> can be shown to generate data with a power-law distribution with γ=2+c/m, which is strictly greater than 2 and thus differs from the empirical data. A minimal simulation sketch of the full HFBI update is given below.
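For illustration, a minimal Python simulation consistent with the description of Algorithm <ref> might look as follows (our own sketch, not a released implementation; the initial counts of the m seed users and the use of sampling without replacement are assumptions):

```python
import numpy as np

def simulate_hfbi(n_rounds, c, m, alpha, seed=0):
    """Generate participation frequencies with the HFBI mechanism:
    each round adds c new users and selects m existing users with probability
    alpha * q_i / sum(q) + (1 - alpha) * (1/d_i) / sum(1/d)."""
    rng = np.random.default_rng(seed)
    q = np.ones(m)   # m seed users, assumed to start with one participation each
    d = np.ones(m)   # sessions since each user's last participation
    for _ in range(n_rounds):
        phi = alpha * q / q.sum() + (1 - alpha) * (1.0 / d) / (1.0 / d).sum()
        chosen = rng.choice(len(q), size=m, replace=False, p=phi / phi.sum())
        d += 1                    # everyone ages by one session...
        d[chosen] = 1             # ...except the users who just participated
        q[chosen] += 1
        q = np.concatenate([q, np.ones(c)])   # c new users join with q = 1, d = 1
        d = np.concatenate([d, np.ones(c)])
    return q                      # per-user participation frequencies

# parameters reported for the full dataset: c = 4, m = 33, n = 731, alpha = 0.9
freqs = simulate_hfbi(n_rounds=731, c=4, m=33, alpha=0.9)
```

The resulting frequencies can then be passed to the tail-fitting sketch above to recover the exponent.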
The empirical participation frequencies with γ less than 2 imply that participation is slightly more heavy-tailed than what can be explained by the habit formation mechanism alone. The behavioral inertia mechanism precisely compensates for this deficiency, as it captures the situation in which individuals who have just participated in an activity are highly likely to continue participating in one or two more due to inertia. It effectively adjusts the exponent while preserving the power-law distribution. It is the joint effect of both mechanisms that generates data closely fitting the empirical data. The data produced by the model cannot, however, include the extremely rare users who have engaged in an excessive number of activities. One possible explanation is that these individuals usually have a strong self-motivation to participate, which cannot be captured by habit formation, as evidenced by the non-steady growth in the later stage of Fig. <ref>. Moreover, because the parameters must be integers and the number of users is kept consistent between the generated data and the empirical data, there is a small difference between the model's n and the actual number of activities. This is considered acceptable since the proportion of such individuals is extremely low. To demonstrate the robustness of the model, the model was also employed to fit the participation frequency up to each activity node. As the generated data can be slightly different each time, we conducted 5 runs for each possible value of α and selected the optimal α value with the highest average p-value among the 5 runs. The average p-values and corresponding optimal α of the model fitting for the 323 samples are shown in Fig. <ref>. In Fig. <ref> and Fig. <ref>, the behavioral inertia mechanism is represented by (1/d_i)/∑_i ∈ I (1/d_i) and e^-d_i/∑_i ∈ I e^-d_i, respectively. This shows that different functional forms can achieve a good fit at different values of α. The model shows good fitting performance (p>0.1) for all empirical data samples, indicating its correctness and robustness. The range of α values from 0.69 to 1 suggests that the proportions of habit formation and behavioral inertia may vary in different situations. We can observe clear downward trends in α around activity nodes 450 to 600, indicating that the proportion of behavioral inertia gradually increases during this stage. Combining this with Fig. <ref>, a decreasing trend of γ can also be observed, which indicates that behavioral inertia effectively helps to capture situations with smaller γ.
§ PARTICIPATION PATTERNS AT THE INDIVIDUAL LEVEL
At the population level, the frequency of participation in activities follows a power-law distribution. At the individual level, the pattern of activity participation, specifically the intervals between each user's participations, is also worth studying. We investigated the distribution of intervals between each individual's activity participations and discovered that they also exhibit a power-law distribution. In terms of activity participation patterns, this corresponds to a burst-quiet mode in which individuals alternate between periods of high activity and periods of low activity.
§.§ The Burst-Quiet Pattern
The interval between an individual's participations is defined as the difference between the IDs of two consecutive activities in which they participated, and is denoted by r; a short sketch of how these interval sequences are extracted from the records is given below.
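A short sketch of this extraction (illustrative only; a `records` list of (user_id, activity_id) pairs is our assumed input format):

```python
from collections import defaultdict

def interval_sequences(records):
    """records: iterable of (user_id, activity_id) pairs.
    Returns {user_id: [r_1, r_2, ...]}, where each r is the difference between
    the IDs of two consecutive activities the user participated in."""
    joined = defaultdict(set)
    for user, act in records:
        joined[user].add(act)
    seqs = {}
    for user, acts in joined.items():
        ids = sorted(acts)
        seqs[user] = [b - a for a, b in zip(ids, ids[1:])]
    return seqs
```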
Considering the requirement of a sufficient amount of interval-sequence data, we focused on the 58 loyal users who participated in more than 100 activities for the individual-level analysis. Fig. <ref> shows an example of a real user's participation in activities. It is evident that the intervals of individual participation vary greatly in size, with the majority being small and some being large. The participation of individuals is characterized by alternating bursts of high activity and long periods of low activity, similar to the outgoing mobile phone call sequence of an individual <cit.>. This burst-quiet pattern is common among the group of loyal users. We studied the distribution of interval sequences for all 58 users and discovered that their interval sequences also follow a power-law distribution (p>0.1 for 54 users, p>0.05 for all 58 users, r_min=1 for 48 users, and r_min=2 for 10 users). The power-law distribution thus also plays an important role in the intervals of individual participation in activities. Fig. <ref> shows examples of the complementary cumulative probability distributions of the intervals for three users. The participation intervals of each of the three individuals obey a power-law distribution with different power exponents. Fig. <ref> plots the probability distribution of the estimated power-law exponents γ for all loyal individuals, revealing a range from 1.6 to 3.25 and a mean of 2.35. Although their activity participation intervals all follow power-law distributions, the differences in the power-law exponent are quite significant. The range of γ is surprisingly consistent with the range of γ reported by Jiang et al. <cit.> for individuals whose intraday inter-call durations follow a power-law distribution, and the probability distributions are also somewhat similar, which may suggest a potential connection between the intervals of different human behaviors.
§.§ The Role of Incentive Activities in Bursts
A burst, characterized by frequent participation in activities with short intervals within a specific period, has a significant impact on improving individuals' overall fitness level. Therefore, it is important to explore the factors associated with this pattern to promote physical activity among the population. In this study, a burst is defined as a period in which the intervals between the consecutive activities a user participates in are less than a threshold value Δ. The specific value of Δ is set somewhat arbitrarily in empirical analyses <cit.>. Organizations often invest resources to provide incentives for activities to attract users to participate. Incentives are crucial in promoting physical activity. Typically, physical activity behavior is initially motivated by incentives and, as habits form, it shifts towards unconscious and automatic processes <cit.>. The effectiveness of incentives can be immediately reflected in the number of participants in an activity; however, the benefits in other respects are yet to be discovered. Our study has made some findings by observing the position of incentive activities within bursts. At thresholds of Δ=8, 9, and 10, we identified a total of 433, 399, and 378 bursts over all individuals, respectively, and recorded the position of the first occurrence of an incentive activity within each burst. As shown in Fig. <ref>, the majority of bursts are observed to start with incentive activities. Table <ref> shows the number and percentage of bursts whose first incentive activity appears at the head position of the burst; a sketch of this burst extraction is given below.
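A sketch of the burst extraction (our illustration; `activity_ids` is a user's sorted list of joined activity IDs, `is_incentive` maps an activity ID to whether it offered rewards, and whether an isolated activity counts as a one-element burst is an assumption of this sketch):

```python
def split_into_bursts(activity_ids, delta):
    """Split a user's sorted activity IDs into bursts: maximal runs in which
    consecutive IDs differ by less than delta (isolated activities form
    single-element runs here)."""
    if not activity_ids:
        return []
    bursts, current = [], [activity_ids[0]]
    for prev, cur in zip(activity_ids, activity_ids[1:]):
        if cur - prev < delta:
            current.append(cur)
        else:
            bursts.append(current)
            current = [cur]
    bursts.append(current)
    return bursts

def first_incentive_positions(activity_ids, delta, is_incentive):
    """1-based position of the first incentive activity inside each burst;
    bursts containing no incentive activity are skipped."""
    positions = []
    for burst in split_into_bursts(activity_ids, delta):
        for pos, act in enumerate(burst, start=1):
            if is_incentive(act):
                positions.append(pos)
                break
    return positions
```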
Over 50% of bursts have their first incentive activity in the first position, and over 65% have it within the first three positions, for the different values of Δ. Note that only about one in seven activities is incentivized; the proportion of incentive activities at the head of bursts is much higher than this baseline, indicating a correlation between the occurrence of incentive activities and bursts. This phenomenon suggests that, in addition to increasing the number of participants in an activity, incentive activities may also play a role in calling users back from a quiet state to a burst state for sustained engagement.
§ RELATED WORK
Power-law distributions have been observed in various domains and contexts, such as biology <cit.>, general science <cit.>, economics <cit.>, and the social sciences <cit.>. Many human behaviors, such as the intervals between sending emails <cit.> and the pattern of phone calls <cit.>, have also been identified as following power-law distributions. Our work shows that the participation frequency of the population and the intervals between an individual's participations exhibit power-law distributions in the context of community sports organizations. Over the years, there have been continuous efforts to propose diverse models aimed at replicating and explaining data characterized by power-law distributions. Barabási proposed the classic preferential attachment model, which can generate data exhibiting a power-law distribution with an exponent of 3 <cit.>. There are also derivative models that can generate data with power-law distributions with exponents between 2 and 3 <cit.>. These have been widely used to explain the power-law distribution of node degrees observed in social networks. The decision-based queuing process <cit.> simulates the power-law distribution of waiting times for emails by randomly assigning priorities to incoming tasks and processing tasks in priority order; this suggests that the power-law distribution of email waiting times may be attributed to human decision-making based on priorities. The preferential attachment model suggests that the power-law distribution of node degrees in networks may be due to newly added nodes preferentially connecting to high-degree nodes <cit.>. In our HFBI model, the habit formation mechanism exhibits similarities to the preferential attachment model and can be proven to generate data conforming to a power-law distribution. In addition, the behavioral inertia component of the HFBI model introduces an effective modification, leading to a slight decrease in the exponent of the data while preserving its essential power-law character. Community sports organizations have been receiving increasing attention for their significant contributions to public health and social harmony. Klenk et al. <cit.> investigated the participation of people with disabilities in community sports activities from three aspects: (1) social contacts, interactions, and friendships, (2) self-perception and identity formation, and (3) social acceptance, support, and embeddedness. Hanlon et al. <cit.> conducted a questionnaire survey to investigate the needs and initiatives for women's participation in community sports activities. Zhou et al.'s survey <cit.> revealed a correlation between the provision of community-sport services (both core and peripheral services) and participants' satisfaction levels.
To the best of our knowledge, there is no research that explores and comprehensively understands individual participation in community sports organizations from a data-driven and modeling approach.
§ CONCLUSION
Our study has identified new members of the power-law data family: a) the frequency of community sports participation among populations, and b) the intervals of individual activity participation. The participation frequency exhibits a power-law distribution with a tail cutoff and an exponent less than 2. We have proposed HFBI, a model based on habit formation and behavioral inertia, to uncover the underlying causes of this power-law distribution. In the model, the behavioral inertia mechanism effectively complements the habit formation mechanism, which alone can only generate power-law distributions with an exponent greater than 2. The model provides a robust fit to the empirical data. Furthermore, individual participation in community sports activities exhibits a burst-quiet pattern. Importantly, our study suggests that periods of high activity bursts are often driven by incentive activities, highlighting the importance of incentive activities for sustaining long-term physical activity behavior. Our results have important implications for the design of interventions aimed at promoting sustainable physical activity behavior. Interventions can be better tailored to align with individuals' behavioral tendencies by gaining insights into habit formation, behavioral inertia, and incentive activities. Additionally, the classic preferential attachment process restricts the power-law exponent to γ>2 <cit.>, while many real-world networks exhibit γ<2 <cit.>. Our HFBI model based on habit formation and behavioral inertia can be valuable in other domains where power-law distributions with low power-law exponents are observed, such as the population of cities <cit.>, short-message communication <cit.>, and corporate innovative patent counts <cit.>. Despite the strengths of our study, there are limitations that should be noted. First, our study only focused on a sports community in a university, whose members are mostly well-educated university faculty and staff members and may differ in their perception of self-motivated exercise from the population of society at large. Further research is needed to understand how our findings may generalize to other community sports organizations. Secondly, the model cannot capture the behavior of the extremely rare individuals who engage in activities excessively. As reported in the 80/20 analysis above, active individuals make a significant contribution to community activity participation, and future research should pay more attention to this group. In conclusion, our study provides novel insights into the principles underlying human participation in community sports activities and offers practical implications for the design of interventions to promote sustained physical activity behavior and human health. Our findings may also have broader implications for other fields where power-law distributions are commonly observed.
§ ACKNOWLEDGMENTS
We would like to thank every member of the SJTU Health Community for their selfless commitment in building a supportive community and providing help to those in need.
http://arxiv.org/abs/2307.10212v1
20230714132835
Capsule network with shortcut routing
[ "Dang Thanh Vu", "Vo Hoang Trong", "Yu Gwang-Hyun", "Kim Jin-Young" ]
cs.CV
[ "cs.CV" ]
Capsules are fundamental informative units that are introduced into capsule networks to manipulate the hierarchical presentation of patterns. The part–whole relationship of an entity is learned through capsule layers, using a routing-by-agreement mechanism that is approximated by a voting procedure. Nevertheless, existing routing methods are computationally inefficient. We address this issue by proposing a novel routing mechanism, namely "shortcut routing", that directly learns to activate global capsules from local capsules. In our method, the number of operations in the routing procedure is reduced by omitting the capsules in intermediate layers, resulting in lighter routing. To further address the computational problem, we investigate an attention-based approach and propose fuzzy coefficients, which we find to be more efficient than the mixture coefficients of EM routing. Our method achieves on-par classification results on the Mnist (99.52%), smallNorb (93.91%), and affNist (89.02%) datasets. Compared to EM routing, our fuzzy-based and attention-based routing methods attain 1.42- and 2.5-fold reductions in the number of calculations. Capsule network, Attention mechanism, Deep Learning, Fuzzy Clustering.
§ INTRODUCTION
§.§ Capsule network
A capsule is a group of neurons <cit.> that encodes the viewpoint variation of an entity. Two quantities are present in a capsule: the instantiation (pose) vector and the activation probability. The presence of an entity is disclosed by its activation probability, while its instantiation vector encodes the perturbations of that entity on a manifold. A CapsNet is made of capsule layers that are stacked on top of each other <cit.>. In CapsNets, information flows from the lower layer to the higher layer via a voting mechanism <cit.>. Typical CNNs cannot perform tasks that require hierarchical parsing of a visual scene <cit.>. Instead, a capsule is designed to encapsulate the part–whole relationship into a group of neurons, particularly the instantiation vector. Intuitively, to formulate the equivariance property, a capsule encodes three pieces of information: the position, shape, and existence of a visual entity <cit.>.
§.§ Invariance and Equivariance
Convolution is locally invariant to variant factors (e.g., translation, scaling, and rotation). The weight-sharing scheme and pooling mechanism also help CNNs achieve local translational invariance. Pooling and convolution work as feature detectors, locally retaining the most activated neuron in their receptive fields, resulting in compact feature maps that help CNNs recognize an object. However, these operators usually discard variant factors on the appearance manifold. Having introduced a parse-tree-like structure of objects, <cit.> stated that a visual entity is constructed of its parts. The structural relationship of these entities is viewpoint-invariant under affine transformations, according to a reference fragment <cit.>. This idea motivates the use of matrix multiplication to investigate the part–whole relationship and the test of agreement to examine whether an entity is present <cit.>. In terms of topology, instantiation parameters represent the abstract variation, where each instantiation corresponds to a coordinate of the chart mapping from the appearance manifold into the variance manifold <cit.>.
The capsules attain the equivariance property when their instantiations change with respect to the viewpoint defined by the chart map.
§.§ High computation
However, computational complexity limits the feasibility of CapsNets. This deficiency emerges from both the capsule units and the routing algorithm. As a capsule unit is a group of n neurons, storing one capsule layer requires n times more memory than storing a standard hidden layer. Moreover, the larger the capsules, the more learnable weights are needed. Passing information through capsule layers involves matrix multiplication, which requires a vast number of operations. Furthermore, when the number of routing iterations increases, the required memory and computation time increase significantly. In general, this computational problem can be tackled in two different ways. One approach is to reduce the size of the voting tensors, while the other is to simplify the routing procedure.
§.§ Contribution
Our main contributions in this article can be summarized as follows:
* To resolve the computational expense of capsule networks, we introduce routing paths that operate as shortcuts <cit.>, rather than as an unfolding sequence of layers. Shortcut flows iteratively update the capsules at the last layer by a routing algorithm.
* We address the computational problem in the routing algorithm by introducing two simplified ways of calculating the routing coefficients: an attention-based approach based on cosine similarity, and a fuzzy-based approach based on fuzzy coefficients.
* We conduct experiments on a classification task. To evaluate the performance of our framework on datasets that include varying viewpoints, in addition to the Mnist dataset, we also experiment on the affNist and smallNorb datasets.
The rest of this paper is organized as follows: Section 2 presents related works in the literature on capsule networks. Section 3 describes the underlying idea of shortcut routing and formulates the mathematical equations. Section 4 presents a baseline CapsNet architecture and the experimental results. Finally, Section 5 concludes this study.
§ RELATED WORKS
Many studies have introduced specific mechanisms into standard CNNs, or regularized the objective function, to encourage CNNs to observe the invariance inside a dataset. Kulkarni et al. proposed a deep convolutional inverse graphics network that learns graphics codes corresponding to distinct transformations <cit.>. Jaderberg et al. introduced a differentiable spatial transformer module that can produce an appropriate transformation for each input example <cit.>. Worrall et al. replaced ordinary convolutional layers with steerable circular harmonic layers to exhibit patch-wise translations and rotations <cit.>. A widely accepted way to increase the invariance capacity of a model is to train it with adversarial samples <cit.>, which introduces a locally invariant prior and encourages neural networks to explore the neighborhood of the data manifold <cit.>. The studies of Lenssen <cit.> and Cohen <cit.> treated equivariance under group theory and note that capsule networks do not ensure equivariance, because there is no guarantee that the primary capsule layer will encode the spatial domain into the receptive field of the poses. Capsule networks have been studied extensively since routing algorithms were presented <cit.>. Sabour et al.
developed the dynamic routing algorithm, compatible with vector capsules whose orientation encodes the pose information while the length of the pose vector is the activation probability. Hinton et al. proposed EM routing based on the expectation-maximization algorithm; they also applied patch-wise convolution to capsule layers. These two studies constitute the fundamental architecture of capsule networks. Several versions of CapsNet have modified the architecture and routing algorithm <cit.>. Zhang introduced two routing algorithms based on weighted kernel density estimation: one depends on the mean-shift algorithm, while the other is a simplified version of EM routing <cit.>. Duarte et al. proposed VideoCapsuleNet, whose main contribution is capsule pooling, which takes the average of all capsules in a receptive field before routing to higher levels <cit.>. To make CapsNets deeper, Rajasegaran et al. suggested using 3D convolutions instead of matrix multiplication to increase the number of hidden layers; the third dimension pertains to the pose vector of the capsules <cit.>. Although the authors reported that this approach surpasses the results of standard CapsNets, their framework does not align with the intuition of capsule units. Zhang <cit.> formulated the learning of multiple capsule subspaces by orthogonal projection; this projection can be embedded into the primary capsule layer to explore correlations between feature maps and capsule subspaces. LaLonde et al. introduced deconvolutional capsules that enable U-net-based architectures, namely SegCaps, to deal with segmentation tasks <cit.>. Recently, numerous works have deployed CapsNets to solve various tasks, including medical image classification <cit.>, text recognition <cit.>, and 3D point cloud classification <cit.>.
§ THE PROPOSED METHOD
We denote an element in the k^th capsule layer as c_n,d,w,h^k ∈ℝ, c^k ∈ℝ^N × D × W × H, where N is the number of capsule channels in a layer, (W, H) are the spatial dimensions (width and height), and D is the capsule dimension. We also write a capsule as c_n,:,w,h when the capsule-dimension index is omitted. In this study, we adopt Matlab-style indexing to describe a variable, where lowercase letters (e.g., n, d, w, h) are indices and capital letters (e.g., N, D, W, H) are the corresponding sizes. Before routing, a lower capsule gives a prediction for a higher capsule on the next layer by multiplication with a transformation matrix. A capsule is widely represented as a vector (c_n,:,w,h∈ℝ^D) or a matrix (c_n,:,w,h∈ℝ^√(D)×√(D)) <cit.>. An advantage of the matrix form is that the number of parameters required to represent a transformation matrix is smaller than that required by the vector form. For example, to perform a linear mapping from a 16-d vector to a 16-d vector, we need a matrix of size 16 × 16; alternatively, to perform a linear map between two 4 × 4 matrices, we only need a 4 × 4 matrix. Thus, in favor of the smaller size of the trainable transformation matrices, we mainly use the matrix form to represent capsules.
§.§ Shortcut routing
The test of agreement <cit.> is a selective test that determines whether a higher capsule is activated by lower capsules. Suppose that the poses of capsule A are represented by a matrix T_A. By multiplying T_A by a coordinate transform T_AC, which serves as a distinct part–whole relationship between C and A itself, we attain a prediction for capsule C, T_C=T_A T_AC.
Likewise, we can deduce another prediction for capsule C, T'_C = T_B T_BC, from capsule B, where T_B and T_BC denote the capsule and its part–whole relationship with capsule C, respectively. If capsules A and B agree, i.e., if T'_C = T_C, capsule C is activated. The purpose of training a capsule network is to discover the part–whole relationship T_MN between a higher capsule N and a lower capsule M. In a computer graphics setting, T_M is considered a visual frame <cit.> that is coordinated with an explicit viewpoint. Because the part–whole relationships are unchanged under varying viewpoints, the activated capsules remain activated from another viewpoint, while the poses T_M change according to the change of viewpoint. This property encourages capsule networks to gain an equivariant representation. Our shortcut routing is inspired by the observation that if a higher capsule is activated, the test of agreement holds for capsules at every lower level. For example, if capsules A and B mutually activate capsule C, T_C=T_A T_AC=T_B T_BC, and capsule C is also involved in activating capsule D with the coordinate transform T_CD, then capsules A and B participate in activating capsule D according to the transitive property: T_D=T_C T_CD=T_A (T_AC T_CD)=T_B (T_BC T_CD), where T_BD=T_BC T_CD (T_AD=T_AC T_CD) is the coordinate transform that relates the canonical visual entity of capsule B (A) to the canonical visual entity of capsule D, as shown in Fig 1.
§.§ Building blocks
We define local capsules as the capsules on intermediate layers of the CapsNet, and global capsules (class capsules) as the capsules on the last layer, Fig 2. The overall framework comprises two modules: the local capsule block and the global capsule block.
Local capsule block (locapblock): The input of the locapblock is a local capsule layer, and its outputs are the next capsule layer and the pre-voted capsules. This block employs depth-wise capsule convolution as its core component. Precisely, we apply a modified 3D convolution in a channel-wise manner <cit.>. The modified 3D convolution is a version compatible with capsules in matrix form, where each capsule is multiplied by a trainable transformation matrix. To obtain complete predictions for higher capsules, we apply an ordinary 3D convolution to the pre-voted capsules in a point-wise manner. In summary, a higher capsule is formed by a weighted combination of the lower capsules within a kernel, where the weights (routing coefficients) are shared within a channel, as explained in Eq 2: c_m,:,i,j^l+1 = ∑_n^N r_n p_m,:,i,j, where p_m,:,i,j=∑_w=i-k^i+k∑_h=j-k^j+k T_m,n,:,w,h^local· c_n,:,w,h^l is a pre-voted capsule, c_n,:,w,h^l ∈ℝ^√(D)×√(D) is a capsule at layer l on the n^th channel and at position (w,h), T_m,n,:,w,h^local∈ℝ^√(D)×√(D) is a transformation matrix that predicts a higher capsule and is shared over (i,j), and "·" denotes matrix multiplication. Additionally, r_n ∈ℝ is a trainable routing coefficient on channel n and differs among channels. A rough sketch of this block is given below.
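As a rough illustration, the locapblock could be implemented along the following lines in PyTorch (the framework the authors report using); the tensor layout, the sharing of transformation matrices per kernel offset, the initialization, and the use of a 1 × 1 Conv3d for the shared coefficients r_n are our assumptions, and the linked repository should be treated as authoritative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocapBlock(nn.Module):
    """Sketch of the local capsule block (Eqs. 2-3): a depth-wise capsule
    convolution multiplies every 4x4 pose matrix in a KxK window by a trainable
    transformation matrix and sums the results (pre-voted capsules); a
    point-wise Conv3d then mixes input channels, playing the role of the
    shared routing coefficients r_n."""

    def __init__(self, in_caps, out_caps, kernel_size=3, stride=1, pose=4):
        super().__init__()
        self.k, self.s = kernel_size, stride
        self.out_caps = out_caps
        # one pose x pose transformation matrix per input channel and kernel offset
        self.T = nn.Parameter(0.1 * torch.randn(in_caps, kernel_size ** 2, pose, pose))
        self.pointwise = nn.Conv3d(in_caps, out_caps, kernel_size=1)

    def forward(self, x):
        # x: (B, N, P, P, W, H) -- pose matrices on a spatial grid
        B, N, P, _, W, H = x.shape
        patches = F.unfold(x.reshape(B, N * P * P, W, H), self.k, stride=self.s)
        Wo, Ho = (W - self.k) // self.s + 1, (H - self.k) // self.s + 1
        patches = patches.reshape(B, N, P, P, self.k ** 2, Wo * Ho)
        # multiply each pose matrix in the window by its transformation matrix, sum over the window
        pre_voted = torch.einsum('bnijkl,nkjh->bnihl', patches, self.T)
        pre_voted = pre_voted.reshape(B, N, P, P, Wo, Ho)
        # point-wise mixing across channels; (P*P) is treated as the depth axis of Conv3d
        out = self.pointwise(pre_voted.reshape(B, N, P * P, Wo, Ho))
        return out.reshape(B, self.out_caps, P, P, Wo, Ho), pre_voted
```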
Global capsule block (glocapblock): The global capsule block is the core operating block of the subsequent steps that update the global capsules. The inputs to the glocapblock are the pre-voted capsules and the latest global capsules. We multiply each pre-voted capsule by n transformation matrices to get candidates v_m,n,:,i,j∈ℝ^√(D)×√(D) for the corresponding global capsule: v_m,n,:,i,j= T_n,:,i,j^global· p_m,:,i,j^l, where p_m,:,i,j^l is a pre-voted capsule derived in the locapblock, and T_n,:,i,j^global is a trainable transformation matrix, which is shared among all positions (i,j) in channel n. Subsequently, we update the global capsules based on the routing coefficients: g_m,:^t=∑_n^N∑_i^W∑_j^H r_m,n,i,j v_m,n,:,i,j, where g_m,:^t ∈ℝ^√(D)×√(D) is an updated global capsule, and r_m,n,i,j∈ℝ is a routing coefficient, which is explained in the next section. Note that the routing algorithm is only implemented in the glocapblock.
Unified architecture: Firstly, a batch of images is fed into a backbone network. The output of the backbone network is then reshaped into a rank-5 tensor with shape (b, n, d, w, h), namely the primary capsule layer, where n is the number of capsule channels, d is a square number representing the size of a capsule, (w,h) are the spatial dimensions, and b is the batch size. Secondly, we employ the local capsule block to derive the subsequent local capsule layers; the last layer of this first feed-forward pass is the initial prediction for the global capsules. Thirdly, following the depth of the network from the primary capsule layer, we update the global capsules by applying the glocapblock to each local capsule layer together with the latest global capsules, as shown by the shortcut routes in Fig. 2. The first prediction for the global capsules from the locapblocks initializes the routing algorithm. The local capsules at any level directly give predictions for the global capsules via the shortcuts, a process that adheres to the idea that the capsules at every lower layer are all involved in activating the global capsules. These shortcuts have the advantage of reducing the size of the voting tensors, v ∈ℝ^M × N × D × W × H in Eq. (4), where M is the number of channels (or styles of capsule), W and H are the spatial sizes, D is the capsule size, and N is the number of capsules to be predicted. In the classification task, the number of global capsules is much smaller than the number of local capsules at any layer (N^global << N^local), so, as shown in Fig. 3, skipping the updates of the local capsules avoids expanding the network with large voting tensors and makes the capsule network computationally efficient.
§.§ Routing coefficients
Routing or voting coefficients are scalars that weigh the degree of contribution of the lower capsules to the higher ones. CapsNets are structured around the voting process that approximates the test of agreement. In the locapblock, the routing coefficients are trainable parameters of the 3D point-wise convolution, and these parameters are shared over capsules on the same channel. Meanwhile, the routing coefficients in the glocapblock are calculated based on correlations between the local capsules in a layer and the latest global capsules. The two well-known approaches to computing the routing coefficients are attention-based <cit.> and clustering-based <cit.>. The attention-based approach considers the higher capsules as queries and the votes from the lower capsules as keys. On the other hand, the clustering-based approach defines the higher capsules as the centers of the clusters, and the correlations are distances from the votes to the centers measured by a metric, such as the Euclidean distance.
In this study, we introduce fuzzy weights as routing coefficients, as they are more straightforward than the mixture weights in EM routing <cit.>. We also propose a simple way to calculate routing coefficients in the attention-based approach. A routing coefficient r_m,n,i,j∈ℝ is determined by the voting capsules v_m,n,:,i,j and the latest global capsules g_m,:. The voting capsules are obtained by multiplying learnable transformation matrices with the pre-voted capsules, as in Eq. (4). Note that the pre-voted capsules are the result of the summation of all capsules inside a kernel, as in Eq. (3). Concisely, a routing coefficient represents the contribution of all capsules in a receptive field. Formally, for a voting capsule v_m,n,:,i,j at position (i,j) on channel n, the correlation r_m,n,i,j between v_m,n,:,i,j and a global capsule g_m,: is calculated as follows:
* Attention-based approach: a_m,n,i,j = v_m,n,:,i,j· g_m,:, r_m,n,i,j = exp(a_m,n,i,j)/∑_k exp(a_k,n,i,j), where a_m,n,i,j∈ℝ is an attention score, r_m,n,i,j is the normalized attention score, and ∑_m r_m,n,i,j=1.
* Fuzzy-based approach: d_m,n,i,j = ‖ v_m,n,:,i,j - g_m,:‖_2, f_m,n,i,j = 1/∑_k (d_m,n,i,j/d_k,n,i,j)^2/(m_f-1), r_m,n,i,j = (f_m,n,i,j)^m_f/∑_n,i,j (f_m,n,i,j)^m_f, where f_m,n,i,j∈ℝ is the fuzzy coefficient grading the degree of contribution of the local capsules (within a kernel) to the global capsule g_m,:, and m_f is the fuzzy degree, which is set to 2 in this study.
§.§ Activation probability and loss function
The preceding section provides the material for deriving the poses (instantiations) of a capsule from their lower counterparts. Each capsule also has its own activation probability, which indicates how strongly it is activated. We follow <cit.> to calculate the probability prob_m corresponding to global capsule g_m,::
* Attention-based approach: prob_m = ‖ squash(g_m,:)‖_2 = ‖ g_m,:‖_2/(1 + ‖ g_m,:‖_2). The activation probability of a global capsule is the length of its representation vector determined by squash normalization. The nonlinear squash function is applied to all capsules at each layer in the attention-based approach.
* Fuzzy-based approach: σ_m^2 = ∑_n^M∑_j^H∑_i^W r_m,n,i,j‖ v_m,n,:,i,j - g_m,:‖_2, prob_m = sigmoid(λ(β_m - ln(σ_m))), where σ_m^2 measures the tightness of the m^th cluster, λ is a hyperparameter that avoids numerical errors, and β_m is a trainable threshold. The tighter the cluster, the closer the votes are to the center global capsule; in other words, the global capsule is activated with a high probability.
Our proposed formula for calculating the activation probability is not exactly the same as that introduced in <cit.>. The probability of activation in <cit.> is estimated from a Gaussian mixture component and the cost of activating a capsule, while our σ_m^2 is the objective function minimized in the fuzzy c-means algorithm. The objective loss used to train our network is the Spread Loss <cit.>: L = ∑_m≠ t (max(0, margin - (prob_t - prob_m)))^2, where class t is the target class, and the margin ranges from 0.2 to 0.9 over the training time. The Spread Loss aims to maximize the gap between the activation of the target class and the activations of the other classes. A sketch of one glocapblock routing update and of this loss is given below.
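For concreteness, one glocapblock routing update and the spread loss could be sketched in PyTorch as follows (our own illustration, not the released code; the tensor layout, the flattening of 4 × 4 poses into 16-d vectors, the eps guard, and the scalar β are assumptions):

```python
import torch
import torch.nn.functional as F

def glocap_routing_step(votes, g, mode="fuzzy", m_f=2.0, lam=0.1, beta=0.0, eps=1e-8):
    """One routing update in the glocapblock.
    votes: (B, M, N, W, H, D) vote vectors v_{m,n,:,i,j} (Eq. 4) with poses flattened to D = 16;
    g: (B, M, D) latest global capsules. Returns updated capsules and activations."""
    if mode == "attention":
        a = torch.einsum("bmnwhd,bmd->bmnwh", votes, g)            # attention scores
        r = F.softmax(a, dim=1)                                    # normalize over the global capsules m
    else:
        d = (votes - g[:, :, None, None, None, :]).norm(dim=-1).clamp_min(eps)   # vote-to-center distances
        f = 1.0 / (d[:, :, None] / d[:, None, :]).pow(2.0 / (m_f - 1.0)).sum(2)  # fuzzy memberships
        r = f.pow(m_f)
        r = r / r.flatten(2).sum(-1)[:, :, None, None, None]       # normalize over (n, i, j)

    g_new = torch.einsum("bmnwh,bmnwhd->bmd", r, votes)            # weighted vote aggregation

    if mode == "attention":
        norm = g_new.norm(dim=-1)
        prob = norm / (1.0 + norm)                                 # length of the squashed capsule
    else:
        sigma2 = (r * (votes - g_new[:, :, None, None, None, :]).norm(dim=-1)).flatten(2).sum(-1)
        prob = torch.sigmoid(lam * (beta - 0.5 * torch.log(sigma2 + eps)))   # Eq. (13)
    return g_new, prob

def spread_loss(prob, target, margin):
    """Spread loss (Eq. 14): penalize non-target classes whose activation comes
    within `margin` of the target activation."""
    p_t = prob.gather(1, target[:, None])                          # activation of the target class
    gap = (margin - (p_t - prob)).clamp_min(0.0).pow(2)
    mask = torch.ones_like(gap).scatter(1, target[:, None], 0.0)   # drop the m = t term
    return (gap * mask).sum(dim=1).mean()
```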
§ EXPERIMENTS
We evaluated our model on the Mnist, affNist, and smallNorb datasets. Each dataset was split into three subsets for training, validation, and testing; the splitting proportions varied by dataset. After training for 200 epochs, the model with the lowest error rate on the validation set was used to evaluate the test set. We normalized all image intensities to the range 0 to 1. We set the batch size to 128 and trained the model with the Adam optimizer, with a learning rate of 10^-3 and the other parameters set to their default values. We decreased the learning rate by a factor of 0.8 every 20 epochs to ensure stable convergence during late training. The parameter λ introduced in Eq. (13) was 10^-1; the margin in the loss function (Eq. (14)) varied from 0.2 to 0.9 during the first ten epochs and was then kept unchanged. The execution environment was an Ubuntu server with a TITAN V GPU with 12 GB RAM. We mainly implemented our method in PyTorch. The source code is available at https://github.com/Ka0Ri/shortcut-routing.git.
§.§ Datasets
* Mnist: Each image in the dataset is a grayscale image with a resolution of 28 × 28, labeled from 0 to 9. In this experiment, the training set was divided into two subsets, one for training (50k images) and the other for validation (10k images). When training, we randomly translated the images in a batch by a small amount, as suggested in <cit.>.
* smallNorb: The database <cit.> contains images of 50 figures belonging to 5 categories. In this study, we followed the default settings, and the test set contained the remaining 5 instances (from 0 to 5). We randomly cropped 32 × 32 patches of the training images and cropped from the center of the test images. We also applied random brightness and contrast changes to the training images. This augmentation was suggested in <cit.>. As a result, the objects in the training set differ in viewpoint from the objects in the test set.
* affNist: This dataset is extended from the Mnist dataset. It was built by taking images from Mnist and applying different affine transformations to them, resulting in images with a larger size of 40 × 40. While the default training and validation images were made by 0-padding, the test data were created by transforming the 10k test cases from the original Mnist dataset. Following the training scenario of <cit.>, we trained the model with translated data (50k images) by randomly shifting the padded images within 5 pixels.
§.§ The proposed architecture
We conducted the experiments with a baseline model consisting of a backbone network of two 2D convolutional layers followed by two convcaps layers and a class capsule layer. The first convolutional layer had a kernel size of 5 × 5 with a stride of 2 and 64 channels. Next was a 1 × 1 convolutional layer with 128 output channels that formed the primary capsule layer. With the above settings, the primary capsule layer had 8 capsule channels, and the size of a capsule was 4×4. This architecture was inspired by the small model introduced in the original paper <cit.>. After each 2D convolutional layer, we used ReLU as the activation function, followed by batch normalization. Subsequently, 3 consecutive locapblocks were used to calculate the local capsules and derive the first predictions for the global capsules. The channel-wise capsule convolution integrated inside each locapblock utilized a kernel of size 3 × 3 without padding. We used a stride of 2 in the first locapblock and a stride of 1 in the following locapblocks. The outputs of the channel-wise capsule convolution are the pre-voted capsules. We applied the point-wise 3D convolution to the pre-voted capsules with 16 output channels. The kernel size used in the last block depends on the size of the current feature map and reduces the output feature maps to a spatial size of 1 × 1 in a fully convolutional way. A sketch of the backbone and primary capsule layer is given below.
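A minimal sketch of that backbone in PyTorch (our own illustration; the single grayscale input channel and the default parameter initialization are assumptions):

```python
import torch
import torch.nn as nn

class Backbone(nn.Module):
    """Sketch of the described backbone: a 5x5 stride-2 convolution with 64
    channels, then a 1x1 convolution with 128 output channels reshaped into
    8 capsule channels of 4x4 pose matrices. ReLU followed by batch
    normalization after each convolution, as stated in the text."""

    def __init__(self, in_channels=1, caps_channels=8, pose=4):
        super().__init__()
        self.caps_channels, self.pose = caps_channels, pose
        self.conv1 = nn.Conv2d(in_channels, 64, kernel_size=5, stride=2)
        self.bn1 = nn.BatchNorm2d(64)
        self.conv2 = nn.Conv2d(64, caps_channels * pose * pose, kernel_size=1)
        self.bn2 = nn.BatchNorm2d(caps_channels * pose * pose)

    def forward(self, x):
        x = self.bn1(torch.relu(self.conv1(x)))
        x = self.bn2(torch.relu(self.conv2(x)))
        b, _, w, h = x.shape
        # primary capsules: (B, 8, 4, 4, W, H) pose matrices on a spatial grid
        return x.reshape(b, self.caps_channels, self.pose, self.pose, w, h)
```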
The number of glocapblocks (shortcuts) is contingent on the depth of the model. In particular, if there are n feed-forward steps from the primary capsule layer to the first global capsule layer, then the number of glocapblocks is n. We used n=3 in the designated architecture, and the number of routing iterations was set to 2. We also used capsule dropout <cit.> to create an ensemble effect, with a dropout probability of 0.2 applied to the primary capsule layer. With the designated architecture, we aimed to construct a baseline network whose backbone is as simple as possible, so that the ability of the model would mostly depend on the capsule part. For complexity comparison, we also implemented our CapsNet with the standard configuration <cit.>, where the model is wider; we refer to this as the expanded model.
§.§ Classification results
Table 1 lists the classification results. With fuzzy and attention routing, we achieved competitive accuracies on all datasets. Moreover, fuzzy routing gave the highest accuracy most of the time. Additionally, our model outperforms the ordinary CapsNet on the affNist dataset. The result of 98.17% on the smallNorb dataset reported in <cit.> is the state of the art for this dataset; however, without a released official implementation, we attained much lower accuracies using their algorithms, and those results were lower than the ones obtained with our proposed shortcut routing (92.16% with EM routing, 91.27% with fuzzy routing, 90.26% with dynamic routing, and 89.36% with attention routing). We also examined the trade-off between computational burden and accuracy. In particular, the number of parameters of the baseline models with shortcut routing was not large enough to obtain significant improvements overall. Although expanding a learning model is a rule of thumb for dealing with underfitting, it is often impractical for capsule networks, because expanding a capsule network drastically increases the number of calculations with respect to the number of parameters. Herein, our proposed shortcut routing makes it feasible to expand the model with the simplified architecture, which becomes even more compact when fuzzy or attention routing is integrated. We have shown that expanded models with shortcut routing achieved higher accuracy than all baseline models without shortcut routing, even though the numbers of parameters of those models were comparable. Although the results of the shortcut approach with the expanded model were slightly inferior to those of the conventional approach, shortcut routing was much faster than the original architecture. As suggested by the results, we expect that the proposed architecture will help build deeper capsule networks to enhance the learning ability of models.
§.§ Performance Analysis
Our proposed model is computationally efficient compared to the original version <cit.>. The efficiency is demonstrated by two quantities: the number of calculations and the number of trainable parameters. When comparing the number of parameters used in each model, the baseline model with shortcut routing consists of 23k parameters. The architecture proposed in <cit.> has the same depth in the capsule part as our proposed model (3 capsule convolutional layers), but its number of parameters is much higher (88k) than in our model, and considerably fewer than that used by the model in <cit.>, with 8.2M parameters.
Notably, the expanded architecture using shortcut routing has 68k parameters, which is even fewer than the number of parameters in the baseline architecture without shortcuts (88k) and substantially smaller than that of the original model without shortcuts (377k). Our small-scale model is the result of exploiting depth-wise capsule convolution, shortcut routing, and transformation matrices shared within a channel. Another quantity involved in the computational burden of CapsNets is the size of the voting tensors. The votes are the predictions of the lower capsules for the higher capsules, abiding by the M-N correspondence, where M (N) is the number of lower (higher) capsules. Routing methods take the voting tensor as explicit input to derive the higher-level capsules and their corresponding activation probabilities. Theoretically, it is sufficient to compare two routing procedures based solely on the calculations in one iteration. A prediction from the capsule at position (i,j) on channel n of an intermediate layer for the global capsule m is v_m,n,:,i,j. Thus, the size of the voting tensor in our implementation is C_out× C_in× H× W× D, where C_out is the number of global capsules, C_in is the number of input channels, and W × H is the spatial size of the pre-voted capsule layer. On the other hand, routing between two consecutive layers in the patch-wise approach presented in [2] yields a voting tensor of size K× K×C'_out×C'_in× W' × H' × D, where C'_out (C'_in) is the number of output (input) channels, W' × H' is the spatial size of the target layer, and K× K is the size of a patch. The ratio between the sizes of the voting tensors in our shortcut approach and in the approach without shortcuts is r_voting = (C_out× C_in× H× W× D)/(K× K×C'_out×C'_in× W'× H'× D). Since C_in=C'_in is the number of channels in the current layer, and in fact W'=W and H'=H because the spatial size of the pre-voted capsule layer equals the spatial size of the target layer in our approach, Eq. (15) reduces to r_voting = C_out/(K× K×C'_out). We further address the computational problem in the routing algorithm by introducing two simplified ways of calculating the routing coefficients: the attention-based approach based on cosine similarity and the fuzzy-based approach based on fuzzy coefficients. To compare these approaches, we calculated the number of FLOPS theoretically required in one routing iteration of the fuzzy-based, attention-based, and EM-based algorithms <cit.>. Denoting by Q the total size of the voting tensor, the attention-based approach requires approximately 4Q FLOPS and the fuzzy-based approach 7Q FLOPS, whereas the EM-based approach requires 10Q FLOPS. These results show that our fuzzy-based and attention-based algorithms are 1.42 and 2.5 times more efficient than the EM-based algorithm, respectively. Moreover, the size Q of the voting tensor in our proposed algorithms is much smaller than that in EM-based routing (Eq. (16)), thereby making our model computationally efficient overall. In practice, the training time of fuzzy-based and attention-based routing is shorter than that of EM-based routing for images of size 28 × 28; Table 2 provides the details of these consumptions. Moreover, the memory consumed by fuzzy-based and attention-based routing is also lower than that consumed by EM-based routing. These results are reported for the training phase with the same configuration. Table 2 summarizes the above performance analysis.
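To make these ratios concrete, a quick back-of-the-envelope check with the configuration quoted above (10 Mnist classes, 3 × 3 patches, 16 point-wise output channels; purely illustrative):

```python
# voting-tensor ratio of Eq. (16) for the Mnist configuration described above
C_out, K, C_out_prime = 10, 3, 16        # global capsules, patch size, conv-caps output channels
r_voting = C_out / (K * K * C_out_prime)
print(f"voting-tensor ratio: {r_voting:.3f}")        # ~0.069, i.e. roughly 14x smaller

# FLOP counts per routing iteration relative to the voting-tensor size Q
print(f"fuzzy vs EM: {10 / 7:.2f}x fewer, attention vs EM: {10 / 4:.2f}x fewer")
```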
§ CONCLUSIONS Our study introduces a new routing path for capsule networks, namely shortcut routing. To this end, we developed two modules that were integrated into the architecture of CapsNets. First, the local routing block, built from a modified depth-wise convolution, is equivalent to a matrix multiplication. Second, the global capsule block enables explicit routing from local capsules to global capsules through the shortcuts. Additionally, we presented fuzzy coefficients as a computationally efficient voting method for deducing higher-level capsules from the lower ones. The proposed framework achieved on-par performance with the original benchmark while reducing the complexity of both the model architecture and the routing algorithm. This study therefore contributes a unified framework that shows promise for further research into capsule networks. 99 1 G. E. Hinton, A. Krizhevsky and S. D. Wang, Transforming Auto-Encoders, International Conference on Artificial Neural Networks - ICANN 2011, Berlin, 2011. 2 G. E. Hinton, S. Sabour and N. Frosst, Matrix Capsules With EM Routing, International Conference on Learning Representations, 2018. 3 S. Sabour, N. Frosst and G. E. Hinton, Dynamic Routing Between Capsules, Advances in Neural Information Processing Systems, pp. 3856-3866, 2017. 4 A. Kosiorek, S. Sabour, Y. W. Teh and G. E. Hinton, Stacked Capsule Autoencoders, Advances in Neural Information Processing Systems, pp. 15486-15496, 2019. 6 G. E. Hinton, Z. Ghahramani and Y. W. Teh, Learning to Parse Images, Advances in Neural Information Processing Systems 12, 1999. 7 R. S. Zemel, C. M. Mozer and G. E. Hinton, TRAFFIC: Recognizing Objects Using Hierarchical Reference Frame Transformations, Advances in Neural Information Processing Systems 2, 1989. 5 I. Goodfellow, Y. Bengio and A. Courville, Manifold Learning, in Deep Learning, The MIT Press, pp. 156-159, London, 2017. 8 Y. Bengio, A. Courville and P. Vincent, Representation Learning: A Review and New Perspectives, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798-1828, 2013. 10 K. He, X. Zhang, S. Ren and J. Sun, Deep Residual Learning for Image Recognition, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016. 12 T. D. Kulkarni, W. Whitney, P. Kohli and J. B. Tenenbaum, Deep Convolutional Inverse Graphics Network, Advances in Neural Information Processing Systems, pp. 2539-2547, 2015. 13 M. Jaderberg, K. Simonyan and A. Zisserman, Spatial Transformer Networks, Advances in Neural Information Processing Systems, pp. 2017-2025, 2015. 14 D. E. Worrall, S. J. Garbin, D. Turmukhambetov and G. Brostow, Harmonic Networks: Deep Translation and Rotation Equivariance, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. 15 I. Goodfellow, Y. Bengio and A. Courville, Adversarial Training, in Deep Learning, The MIT Press, pp. 261-263, London, 2017. 16 I. J. Goodfellow, J. Shlens and C. Szegedy, Explaining and Harnessing Adversarial Examples, arXiv preprint arXiv:1412.6572, 2014. 17 J. E. Lenssen, M. Fey and P. Libuschewski, Group Equivariant Capsule Networks, Advances in Neural Information Processing Systems, pp. 8844-8853, 2018. 18 T. S. Cohen and M. Welling, Group Equivariant Convolutional Networks, International Conference on Machine Learning, 2016. 30 J. Choi, H. Seo, S. Im and M. Kang, Attention Routing Between Capsules, Proceedings of the IEEE International Conference on Computer Vision Workshops, 2019. 31 H. Li, X. Guo, B. Dai, W.
Ouyang and X. Wang, Neural Network Encapsulation, Proceedings of the European Conference on Computer Vision (ECCV), 2018. 32 K. Ahmed and L. Torresani, STAR-CAPS: Capsule Networks with Straight-Through Attentive Routing, Advances in Neural Information Processing Systems, pp. 9098-9107, 2019. 19 S. Zhang, W. Zhao, X. Wu and Q. Zhou, Fast Dynamic Routing Based on Weighted Kernel Density Estimation, International Symposium on Artificial Intelligence and Robotics, pp. 301-309, 2018. 20 K. Duarte, Y. S. Rawat and M. Shah, VideoCapsuleNet: A Simplified Network for Action Detection, arXiv preprint arXiv:1805.08162, 2018. 21 J. Rajasegaran, V. Jayasundara, S. Jayasekara, H. Jayasekara, S. Seneviratne and R. Rodrigo, DeepCaps: Going Deeper with Capsule Networks, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019. 22 L. Zhang, M. Edraki and G. J. Qi, CapProNet: Deep Feature Learning via Orthogonal Projections onto Capsule Subspaces, Advances in Neural Information Processing Systems, pp. 5814-5823, 2018. 23 R. LaLonde and U. Bagci, Capsules for Object Segmentation, arXiv preprint arXiv:1804.04241, 2018. 24 P. Afshar, A. Mohammadi and P. Konstantinos, Brain Tumor Type Classification via Capsule Networks, 2018 25th IEEE International Conference on Image Processing (ICIP), 2018. 25 T. Iesmantas and R. Alzbutas, Convolutional Capsule Network for Classification of Breast Cancer Histology Images, International Conference on Image Analysis and Recognition, 2018. 26 X. Zhang, P. Li, W. Jia and H. Zhao, Multi-labeled Relation Extraction with Attentive Capsule Network, arXiv preprint arXiv:1811.04354, 2018. 27 W. Zhao, J. Ye, M. Yang, Z. Lei, S. Zhang and Z. Zhao, Investigating Capsule Networks with Dynamic Routing for Text Classification, arXiv preprint arXiv:1804.00538, 2018. 29 Y. Zhao, T. Birdal, H. Deng and F. Tombari, 3D Point Capsule Networks, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019. 11 A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto and H. Adam, MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications, arXiv preprint arXiv:1704.04861, 2017. 34 Y. LeCun, F. J. Huang and L. Bottou, Learning Methods for Generic Object Recognition with Invariance to Pose and Lighting, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. 35 C. Xiang, L. Zhang, Y. Tang, W. Zou and C. Xu, MS-CapsNet: A Novel Multi-Scale Capsule Network, IEEE Signal Processing Letters, vol. 25, no. 12, pp. 1850-1854, 2018. [author1.eps]Dang Thanh Vu received B.S. degrees in Mathematics and Computer Science from Vietnam National University in 2018. He is now an M.S. student in the Department of ICT Convergence System Engineering, Chonnam National University, Rep. of Korea. His research interests include statistical methods and machine learning. [author2]Vo Hoang Trong received his M.S. degree in Electronics Engineering from Chonnam National University in 2020. He is now a Ph.D. student in the Department of ICT Convergence System Engineering, Chonnam National University, Rep. of Korea. His research interests include image processing and deep learning. [author3]Yu Gwanghyun received his M.S. degree in Electronics Engineering from Chonnam National University in 2018. He is now a Ph.D. student in the Department of ICT Convergence System Engineering, Chonnam National University, Rep. of Korea. His research interests include signal processing and deep learning.
[author4]Kim Jin Young received his Ph.D. degree in Electronics Engineering from Seoul National University, Rep. of Korea. From 1993 to 1994, he worked on speech synthesis at Korea Telecom. Since 1995, he has been a professor in the Department of ICT Convergence System Engineering, Chonnam National University, Rep. of Korea. His research interests include speech signal processing, audio-visual speech processing, and deep learning-based signal processing.
http://arxiv.org/abs/2307.06150v1
20230712131500
Synergies of Drell-Yan, beauty, top, and Z observables in MFV-SMEFT
[ "Cornelius Grunwald", "Gudrun Hiller", "Kevin Kröninger", "Lara Nollen" ]
hep-ph
[ "hep-ph", "hep-ex" ]
Synergies of Drell-Yan, beauty, top, and Z observables in MFV-SMEFT August 12, 2023 =========================================================================== § EFFECTIVE FIELD THEORIES The Standard Model Effective Field Theory (SMEFT) offers numerous benefits in the search for new physics (NP) above the electroweak scale. Besides allowing for largely model-independent analyses, it provides the opportunity to connect different sectors such as beauty and top via SU(2)_L <cit.>. The SMEFT Lagrangian is constructed by taking the Standard Model (SM) Lagrangian as the starting point and adding the series of all higher-dimensional operators O_i^(d), multiplied by the corresponding Wilson coefficients C_i^(d) and suppressed by d-4 powers of the energy scale of BSM physics Λ, ℒ_SMEFT = ℒ_SM + ∑_d=5^∞∑_i C_i^(d)/Λ^(d-4) O_i^(d) . We consider only dimension-six operators, for which we employ the Warsaw basis <cit.>. We assume all Wilson coefficients to be real-valued as well as lepton and baryon number conserving. While the SMEFT is the appropriate theory for the description of collider observables, b → s observables are computed within the weak effective theory <cit.>. Both theories are connected by RGE evolution and matching at the W-boson mass scale. § MFV IN SMEFT Besides reducing the number of degrees of freedom, flavor patterns impose correlations among the different flavor components of a Wilson coefficient and thus link different sectors. We employ the Minimal Flavor Violation (MFV) approach, in which the flavor structure of the Wilson coefficients is expanded in SM Yukawa matrix insertions, reading for example, for an operator containing two left-handed quark doublets q̅_L q_L : C_ij = ( a_1 1 + a_2 Y_u Y_u^† + a_3 Y_d Y_d^† + a_4 (Y_u Y_u^†)^2 + ... )_ij . We consider only the top-quark Yukawa coupling and neglect all other Yukawa couplings in the following. After a rotation to the mass basis, the ansatz (<ref>) yields C_ij q̅_L_i q_L_j ⊃ u̅_L_i^' [ a_1 1 + a_2 [Y_u^diag]^2 + a_3 V [Y_d^diag]^2 V^† + a_4 [Y_u^diag]^4 + ... ]_ij u_L_j^' + d̅_L_i^' [ a_1 1 + a_2 V^† [Y_u^diag]^2 V + a_3 [Y_d^diag]^2 + a_4 V^† [Y_u^diag]^4 V + ... ]_ij d_L_j^' , with the CKM matrix V. Powers of the top Yukawa coupling proportional to a_2n induce FCNCs in the down-quark sector. In the lepton sector, this approach results in lepton-flavor-diagonal and universal Wilson coefficients. For left-handed quark doublets we define C̃_q q̅ = v^2/Λ^2 a_1 , γ_a = ∑_n ≥ 1 y_t^2n a_2n/a_1 . We absorb all higher orders of top-Yukawa insertions into the ratio γ_a, as they lead to the same coupling pattern. See <cit.> for the definitions for other quark bilinears. § GLOBAL SMEFT FIT We use top-quark data, Drell-Yan measurements, Z decays and b → s observables to constrain 14 Wilson coefficients and two flavor ratios γ_a/b, where γ_b is defined for right-handed up-type quarks analogously to Eq. (<ref>). For the fit we employ EFTfitter <cit.>, using Bayesian inference performed with BAT.jl <cit.>. We employ a uniform prior in the range -1 ≤C̃_i ≤ 1 for the rescaled Wilson coefficients and a uniform prior -10 ≤γ_a/b≤ 10 for the flavor parameters. Additionally, we perform fits of the individual sectors assuming a benchmark value γ_a/b=1. We choose this value because γ_a=0 would decouple the b → s observables. The 90% credible intervals of the Wilson coefficients, as well as the total widths of these intervals, are shown in Fig. <ref>. In Fig. <ref>, we show the posterior probability distribution of γ_a.
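To see explicitly why γ_a=0 decouples the b → s observables, consider the down-type part of the mass-basis expression above and keep only the top Yukawa coupling (this short worked step is our addition and not part of the original derivation). The flavor-changing sb element of each insertion is [V^†[Y_u^diag]^2n V]_sb = ∑_k V_ks^* y_k^2n V_kb ≃ y_t^2n V_ts^* V_tb, so that the b → s coupling becomes C̃_sb = (v^2/Λ^2) ∑_n ≥ 1 a_2n y_t^2n V_ts^* V_tb = C̃_q q̅ γ_a V_ts^* V_tb. The flavor-universal a_1 term drops out of the off-diagonal entry, and all top-Yukawa insertions contribute with the same CKM structure, which is why they can be absorbed into the single ratio γ_a and why b → s transitions vanish for γ_a=0.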
We observe synergies in the global fit, which improves the bounds set by the individual sectors. In all fits, the constraints on the Wilson coefficients are compatible with C̃_i=0, corresponding to the SM value. The marginalized posterior probability distribution of γ_a of the global fit (dark blue) shown in Fig. <ref> exhibits a distinct double-peak behavior, with a larger peak at γ_a=-1.2 and a smaller peak at γ_a=1.9, while the distribution of γ_b (see <cit.>) is rather flat. The fit with the b → s measurements set to their SM prediction (green), in contrast, features only one narrow peak centered around γ_a=0. This implies that the b → s anomalies that are currently observed at the level of 3σ are the main driver of the shape of the posterior probability of γ_a, demonstrating that this parameter provides an interesting indirect probe of the flavor structure of possible physics beyond the Standard Model (BSM). [Figure: Marginalized posterior probability distribution of γ_a from the global fit (dark blue), from a scenario in which all b→ s measurements are set to their SM prediction (green) and from a pure Drell-Yan fit (light blue). The plot is taken from Ref. <cit.>.] § CONNECTION TO DINEUTRINO BRANCHING RATIOS Due to the manifest SU(2)_L link in the SMEFT, b-hadron decays into neutrinos are directly linked to the b-hadron decays involving charged leptons. The process b → s ℓ^+ ℓ^- probes the combination C_lq^(+)= C_lq^(1) + C_lq^(3), whereas the dineutrino process b → s νν̅ is sensitive to the orthogonal combination C_lq^(-)= C_lq^(1) - C_lq^(3). A combination of both types of processes is thus crucial to pinpoint possible BSM contributions. While there are currently only experimental upper limits on branching ratios of b → s νν̅ transitions, the observation of these processes is expected in the near future by Belle II <cit.>. To test the impact of these measurements, we perform fits including several benchmark scenarios of SM-like (BM SM) as well as enhanced (BM +2σ) and suppressed (BM -2σ) branching ratios for the decays B(B^0 → K^*0νν̅) and B(B^+ → K^+νν̅). Both decays are sensitive to the same BSM coefficient C_L, as right-handed currents are suppressed in MFV by small down-type Yukawas. The resulting 90% credible limits are shown in Fig. <ref>. [Figure: 90% credible intervals of the global fit with dineutrino benchmark measurements. The plot is taken from Ref. <cit.>.] We see that including the dineutrino measurements has a significant impact on the fit, especially on the Wilson coefficients C̃_qe, C̃_lq^(1) as well as the Wilson coefficients of the penguin operators. Even a SM-like measurement would signal a non-zero value for C̃_qe and C̃_φ d, since the inclusion of b → s νν̅ observables resolves the flat direction in the parameter space. Suppressed branching ratios would shift those coefficients to even larger values, whereas enhanced branching ratios would imply a non-zero value for C̃_lq^(1). Moreover, we derive predictions for the dineutrino branching ratios based on our global fit. The resulting marginalized posterior probability distribution is shown in Fig. <ref>. We see that the resulting 68% credible intervals are centered around the SM prediction and that they are below the current experimental limit. § CONCLUSION We perform a global SMEFT fit of top, Drell-Yan, Z and beauty observables within the MFV framework. We find that the global fit shows synergies and improves the bounds of the individual sectors.
Presently, the global fit is consistent with the SM. We test the MFV expansion by including two ratios of expansion parameters as degrees of freedom in the fit. Our results indicate a non-zero value of γ_a, mainly resulting from the b anomalies. We investigate the impact of future dineutrino measurements on the fit and find a significant impact on several Wilson coefficients and the potential to result in non-zero values. The result of the global fit is used to predict the branching ratios B(B^0 → K^*0νν̅) and B(B^+ → K^+νν̅) within the MFV scenario. The resulting posterior probability distributions are centered around the SM predictions, and leave sizeable room for new physics. § ACKNOWLEDGMENTS LN is very grateful to the organizers to be given the opportunity to present this work. LN is supported by the doctoral scholarship program of the Studienstiftung des Deutschen Volkes. 99 Bissmann:2020mfi S. Bißmann, C. Grunwald, G. Hiller and K. Kröninger, JHEP 06 (2021), 010 doi:10.1007/JHEP06(2021)010 [arXiv:2012.10456 [hep-ph]]. Aoude:2020dwv R. Aoude, T. Hurth, S. Renner and W. Shepherd, JHEP 12 (2020), 113 doi:10.1007/JHEP12(2020)113 [arXiv:2003.05432 [hep-ph]]. Bruggisser:2021duo S. Bruggisser, R. Schäfer, D. van Dyk and S. Westhoff, JHEP 05 (2021), 257 doi:10.1007/JHEP05(2021)257 [arXiv:2101.07273 [hep-ph]]. Greljo:2022cah A. Greljo, A. Palavrić and A. E. Thomsen, JHEP 10 (2022), 010 doi:10.1007/JHEP10(2022)005 [arXiv:2203.09561 [hep-ph]]. Greljo:2022jac A. Greljo, J. Salko, A. Smolkovič and P. Stangl, JHEP 05 (2023), 087 doi:10.1007/JHEP05(2023)087 [arXiv:2212.10497 [hep-ph]]. Grzadkowski:2010es B. Grzadkowski, M. Iskrzynski, M. Misiak and J. Rosiek, JHEP 10 (2010), 085 doi:10.1007/JHEP10(2010)085 [arXiv:1008.4884 [hep-ph]]. Jenkins:2017jig E. E. Jenkins, A. V. Manohar and P. Stoffer, JHEP 03 (2018), 016 doi:10.1007/JHEP03(2018)016 [arXiv:1709.04486 [hep-ph]]. Grunwald:2023nli C. Grunwald, G. Hiller, K. Kröninger and L. Nollen, [arXiv:2304.12837 [hep-ph]]. Castro:2016jjv N. Castro, J. Erdmann, C. Grunwald, K. Kröninger and N. A. Rosien, Eur. Phys. J. C 76 (2016) no.8, 432 doi:10.1140/epjc/s10052-016-4280-9 [arXiv:1605.05585 [hep-ex]]. Schulz:2020ebm O. Schulz, F. Beaujean, A. Caldwell, C. Grunwald, V. Hafych, K. Kröninger, S. La Cagnina, L. Röhrig and L. Shtembari, [arXiv:2008.03132 [stat.CO]]. Belle-II:2018jsg E. Kou et al. [Belle-II], PTEP 2019 (2019) no.12, 123C01 [erratum: PTEP 2020 (2020) no.2, 029201] doi:10.1093/ptep/ptz106 [arXiv:1808.10567 [hep-ex]].
http://arxiv.org/abs/2307.04216v1
20230709161102
Hierarchical Autoencoder-based Lossy Compression for Large-scale High-resolution Scientific Data
[ "Hieu Le", "Hernan Santos", "Jian Tao" ]
cs.LG
[ "cs.LG", "cs.AI", "eess.IV" ]
Texas A&M University, Administration Building, 400 Bizzell St, College Station, Texas, USA 77843 Lossy compression has become an important technique to reduce data size in many domains. This type of compression is especially valuable for large-scale scientific data, whose size ranges up to several petabytes. Although Autoencoder-based models have been successfully leveraged to compress images and videos, such neural networks have not gained wide attention in the scientific data domain. Our work presents a neural network that not only significantly compresses large-scale scientific data but also maintains high reconstruction quality. The proposed model is tested with publicly available scientific benchmark data and applied to a large-scale high-resolution climate modeling data set. Our model achieves a compression ratio of 140 on several benchmark data sets without compromising the reconstruction quality. Simulation data from the High-Resolution Community Earth System Model (CESM) Version 1.3 covering 500 years are also compressed with a compression ratio of 200, while the reconstruction error remains negligible for scientific analysis. [Further information and requests for resources and reagents should be directed to and will be fulfilled by the lead contact, Hieu Le ([email protected])] [300]Information systems Information storage systems [300]Computing methodologies Machine learning Hierarchical Autoencoder-based Lossy Compression for Large-scale High-resolution Scientific Data Jian Tao August 12, 2023 ================================================================================================ § INTRODUCTION Over the past few decades, the amount of information available for analysis has increased significantly. Scientific instruments and related computation systems, such as the Linac Coherent Light Source <cit.>, the Very Large Array Radio Telescope <cit.>, and high-resolution climate modeling <cit.>, produce massive amounts of data and put a huge burden on existing storage systems. It is important to design efficient compression models that reduce the data size for storage while maintaining the key information for analysis. Data compression can be lossless or lossy. Lossless compression, whose reconstruction is exactly the same as the original data, suffers from a low compression ratio (around 2:1 <cit.>) on floating-point datasets <cit.>. Meanwhile, lossy compression sacrifices data information to achieve a much higher compression ratio. Despite the loss of information, the quality of data reconstructed by lossy compression schemes is generally acceptable and usable <cit.>.
The nature of lossy compression has driven scientists and engineers to implement many compression algorithms and methods to substantially reduce the size of scientific data <cit.>, whose size is often enormous (might be up to 32 exabytes <cit.>). Furthermore, recent studies by <cit.>, <cit.>, and <cit.> showed that reconstruction data from lossy compression can be used for post-hoc analyses. In recent years, both scientific and engineering communities have focused on developing neural network models for computer vision <cit.>, natural language processing <cit.>, and compression <cit.>. Among numerous types of deep learning models, Autoencoder (AE) has gained tremendous attention because of its capability to learn data representation. AE is a type of neural network that can efficiently learn the representation of input data in an unsupervised manner for reconstruction. Internally, the network contains a bottleneck layer, whose representation is much smaller than its inputs in terms of size. Therefore, AE is primarily used for dimension reduction and feature extraction. Many variations of AE have been developed to improve the quality of reconstructed data <cit.>. Although AE has been shown to be successful in lossy image and video compression <cit.>, there are only a few number of studies leveraging this type of neural network for scientific data compression <cit.>. In this work, we explore the possibility of leveraging a lossy AE-based compression model to compress scientific data. Specifically, this work aims to achieve high reconstruction quality of data at a very low bit rate, below 0.50. We propose a novel AE-based model that is capable of significantly reducing data size without compromising data quality. Moreover, Higher-order Singular Value Decomposition (HOSVD) method is also implemented and applied to compress floating-point data. Outputs of HOSVD are used to compare against the outputs of the AE-based model. The key contributions of this work are as follows: * We introduce data processing methods for both training and testing sets to overcome issues when dealing with large-scale scientific data. These techniques enable efficient compression on both high performance computing (HPC) nodes and regular commercial devices, such as personal computers. * Targeting a very low bit rate region (below 0.5), a lossy AE-based compression model is proposed to significantly compress simulation data from high-resolution HR-CESM1.3 data sets. The rest of this paper is organized as follows. In Section <ref>, related work is discussed. In Section <ref>, we describe important concepts and techniques, which are implemented in our proposed models. Section <ref> describes compression experiments on benchmark data and large-scale simulation data. We evaluate and analyze our results in Section <ref>. Section <ref> concludes our findings with directions for future work. § RELATED WORK Traditional lossy compression for scientific data could be categorized into two types: prediction-based and transform-based. Transform compression (e.g. ZFP <cit.>) transformed the data before applying other techniques, e.g. embedded coding, to truncate the transformed data. Coefficients and bit-planes determined by the model were used to decompress data. Increasing the number of coefficients and bit-planes improved the quality of reconstructed data but decreased the compression ratio. 
On the other hand, prediction-based models, such as SZ <cit.>, FPZIP <cit.>, predicted the target data using previous reconstructed data points <cit.>. Similar to transform-based models, the authors found that the fidelity of the reconstructed data degraded when a high compression ratio was required. Prediction-based models have been shown to have high reconstruction quality at a high compression ratio, which leads to more studies to improve the performance of this type of compression <cit.>. Recently, deep learning models have been leveraged to compress many types of data. Many AE-based models showed remarkable results in image and volumetric compression tasks. Balle et al. <cit.> introduced an effective end-to-end AE-based model to compress images. The authors trained their models to optimize rate-distortion performance. In order to balance the trade-off between the quality of reconstructed data and compression ratio, both losses for reconstruction and compression rate were minimized simultaneously. Since the quantization layer of their compression models prevented the gradients from flowing through the networks, independently and identically distributed uniform noise was used to replace the quantization layer during training. The added noise enabled the back-propagation without significantly deteriorating the performance of the quantization layer when compressing images. Similarly, Theis et al. <cit.> implemented AE-based compression models using a different quantization method from <cit.>. Their soft quantization was continuous, thus allowing the gradient to flow smoothly through the entire network. As a result, additive noise was not required for training. Models with two levels of quantization were also investigated in <cit.>. The second layer not only provided fine-grained quantization but also acted as a prior for the first quantization layer. Moreover, arithmetic encoding <cit.> <cit.> was implemented instead of variants of Huffman coding <cit.>. Integer quantization, proposed by <cit.>, was applied to quantization layers to eliminate the dependence on hardware-platform floating-point implementation, which varied from machine to machine, during compression. Adopting the idea of two-level quantization, several studies have been conducted to improve the capability of neural networks in image compression. Minnen et al. <cit.> built an autoregressive model. The first quantization layer, which received inputs from the prior given by the second quantization and from the encoder, autoregressively processes data representation to produce high quality images. Their neural networks were also among the first machine learning models that outperformed the state-of-the-art BPG compression <cit.>. However, autoregression by its nature prevented neural networks to compute in parallel. Models created by <cit.> eliminated the autoregressive layer and replaced it with multiple splitting layers, which enabled the decoder to comprehensively learn different sets of channels in parallel. Additionally, optimization for compression speed using neural networks was addressed by <cit.>, which suggested several methods to improve compression performance. Compression on audio signals using AE-based neural networks has also experienced much progress. The work of <cit.> outperformed MP3 in both compression ratio and audio quality. Their models adopted vector quantization techniques proposed by <cit.>. 
The authors not only optimized signal losses in the time domain but also minimized reconstruction losses in the frequency domain. Furthermore, the coupling of AE and Generative Adversarial Networks (GAN) <cit.> was leveraged to achieve a high-quality compression model. Neural networks have also been implemented to compress volumetric scene data. Kim et al. <cit.> replaced fine-grain compression layers in their tree-based models with neural networks, which greatly enhanced the performance on volumetric representation. Coordinate networks by <cit.> not only focused on learning the scene representation but also provided great compression capability. However, image and video compression models mainly reconstructed integer pixels (or voxels), which were only a subset of scientific data, where data types ranged from integer to floating-point. As a result, several studies using neural networks to enhance scientific data compression have been conducted. Glaws et al. <cit.> proposed an AE model, which was built upon 12 residual blocks of convolution layers. The authors also incorporated three compression layers to reduce the dimensions of the data in their AE's architecture. The model was trained to compress turbulence data with a fixed compression ratio of 64. Liu et al. <cit.> introduced a seven-layer AE model to compress scientific data. The encoder was comprised of three layers of fully connected layers, each of which compressed input data by eight folds. Theoretically, the encoder could compress data by 512x (8^3). Similar to the encoder, the decoder had three fully connected layers to decompress the compressed data. Between the encoder and decoder, a bottleneck contained latent variables, whose size was much smaller than the inputs. However, this work mainly focused on small-scale 1D data, whereas our models learned data representation in higher dimensions, particularly in 2D and 3D. Another limitation of this model was that only CPUs were used for compression, which did not fully utilize the parallel computing power offered by GPUs <cit.>. Recently, a compression method proposed by Liu et al.<cit.> achieved great results for 2D and 3D data. Their AESZ framework was comprised of a Lorenzo prediction model and an AE model, each of which compressed data independently. Compression outcomes from both models were then compared for the framework to select the model for the data being compressed. The compression ratio of their proposed framework on many scientific data sets surpassed results from other hand-engineered models, e.g. and other AE-based models. However, instead of optimizing one particular model for each input, the framework employed two distinct models to compress the same data. Slightly different from traditional deep learning models, physics-informed neural networks (PINNs) <cit.> have been successfully developed to extrapolate data and solve many scientific problems. Choi et al. <cit.> combined PINN and variational autoencoder (VAE) <cit.> to compress plasma simulation data. Unlike other types of neural networks, this PINN model optimized several physics constraints, such as mass, moment, and energy, along with the reconstruction loss, i.e. L2 distance. Similar to our work, the authors used integer quantized latent variables, which could be reliably transmitted between different hardware and software platforms as studied by <cit.>. § METHODS Our proposed model is built upon three main components: an encoder network (E), a quantizer (Q), and a decoder network (D). 
The encoder network encodes data inputs to output latent variables z_e. The quantizer then processes z_e to produce a quantized latent representation z_q. Finally, the decoder network reconstructs data from the compressed representation z_q to output x̂. The whole model is trained in an end-to-end fashion to minimize a reconstruction loss and constraint losses imposed by codebooks of the quantizer. The model architecture is depicted in Figure <ref>. The detailed implementation of the model is present in Table <ref>. Each stage (EncRes) of the Encoder is connected to an intermediate convolution layer. The intermediate layer acts as a bridge to map the number of channels to the desired vector dimension of the quantization layer. The output representation is then quantized using the corresponding codebook. §.§ Encoder & Decoder Architecture As mentioned above, the encoder is trained to extract data representation into latent spaces, whereas the decoder decodes the latent variables to reconstruct the given data. There are two most widely used reconstruction errors, mean-squared error (MSE) and multi-scale structural similarity (MS-SSIM). Depending on targeting criteria, either measure can be used to achieve desirable outcomes. Both measures have been shown to be good metrics since they generally output generated images with high quality <cit.>. The encoder network E is created hierarchically. The first level aggressively reduces the dimension of the inputs and learns their data representation. The second level also performs slight dimension reduction. Data representation from the second level is quantized by its corresponding vector quantizer. The quantized values are then fed into the first-level vector quantizer. The second-level quantization acts as a prior to the first-level quantization. The additional information from the second-level quantization improves the capability of the first-level quantization, which leads to a better reconstruction quality. Even though the second level creates slightly more bits during compression, reconstruction quality improvement significantly outweighs a slight decrease in compression ratio. The network E comprises several 2D convolution layers and blocks of residual connections. The first two convolution layers map inputs to a higher number of channels using a kernel size of 4. It is followed by a couple of residual blocks, which consist of strided convolutions with a kernel size of 5. Components of a residual block are illustrated in Figure <ref>. We use non-linear GELU functions as our activation functions <cit.>. A generalized divisive normalization (GDN) is used to normalize residual block's outputs and transform their distribution to be more Gaussian <cit.>. GDN is effective for both image compression <cit.> as well as scientific data compression <cit.>. The encoder network can be simply represented as a mapping function as shown in equation <ref>. z_e = 𝐄(x) The decoder network D is a mirror of the encoder network E. Transposed convolution layers are used to replace strided convolutions. Transposed convolutions at the beginning of each hierarchy alter decoder inputs to acquire suitable representation with C channels for the following residual blocks. In general, all blocks of the decoder loosely reverse all operations performed by the encoder. Network D maps the latent representation back to the original dimension, outputting reconstructed data. The decoder can also be considered to be a mapping function as shown in equation <ref>. 
x̂ = 𝐃(decoder_inputs) §.§ Vector Quantizer Although vanilla AE can perform dimension reduction, it cannot flexibly generate data given fixed inputs. Variational Autoencoder (VAE) <cit.> and its variants are implemented to improve reconstruction performance. VAEs not only minimize the reconstruction loss but also learn the distribution of the latent variable by optimizing the Kullback–Leibler (KL) divergence. As a result, a more diverse set of images can be generated with much higher quality <cit.>. Based on the idea of VAE, we impose slightly different criteria on the objective function. Following a proposed approach implemented in Vector Quantized Variational Autoencoder (VQ-VAE) <cit.>, our model is trained to minimize the reconstructed loss, i.e. L2 distance, as well as optimize discrete codebooks. The latent representation encoded by the encoder is projected onto codebook vectors. The vector, which has the smallest Euclidean distance to the encoded latent variable, is selected to become the decoder's inputs as shown in equation <ref> z_q = 𝐐(z_e) = argmin_q_k ∈ Q (||z_e - q_k||) The quantizer outputs a set of integer values, which are indices of quantized vectors. These indices are then further compressed using a lossless compression scheme, i.e. Huffman coding-based algorithms. The size of compressed quantized data is significantly reduced because quantized values are in the form of integers, which are efficiently compressed by any lossless compression algorithm. Our training procedure for our codebooks is similar to the method described in <cit.>. Each codebook in the model is updated using an exponential moving average with a decay of 0.99. The update is measured based on changes in codebook entries after each training iteration. A straight-through estimator <cit.> is implemented to overcome the discontinuity of gradients created by discrete codebooks. The estimator acts as an identity function that identically maps the gradients of the decoder to the encoder in the backward propagation. Overall, the end-to-end training is comprised of three mapping functions: encoding, quantization, and decoding. The model can be summarized using equation <ref>. x̂ = Model(x) = 𝐃(𝐐(𝐄(x))) where E is the encoder network, Q is the quantizer, and D is the decoder network. §.§ Preprocessing Large-scale Data §.§.§ Data Standardization In this work, we focus on compressing large-scale high-resolution scientific data obtained from Earth's simulation. Since each set of data has its own data distribution, it is important to preprocess raw data prior to training. Statistical measures of data are usually available for most simulations. The availability of statistics enables us to use Gaussian standardization for data whose distribution is Gaussian. The technique is also applicable to distribution that approaches the Gaussian distribution. The standardization method is shown in equation <ref>. x_st = x - μ/σ where x is a data value, μ is the mean of the data, and σ is the data standard deviation. The inverse of standardization is required for converting the reconstructed data back to the actual value range. The inverse is formulated in equation <ref>. x = μ + x_st*σ However, if the data distribution is not Gaussian, directly applying standardization does not improve compression performance. In this scenario, logarithm scaling is a technique to transform the original data to its corresponding logarithmic scale. 
The technique usually changes the data distribution to be close to Gaussian, which enables us to effectively use the standardization method on the data. §.§.§ Missing Value Handling Data masking is necessary for data compression in many cases. In many scientific simulations, there are regions that are not of interest to the researchers conducting experiments. Those areas are generally assigned values that are extremely negative or easily distinguished from actual simulation values. Therefore, we use masking layers to indicate valuable values and ignore unwanted regions in our model. Even though the masking increases the storage size, this redundancy is negligible since it is made up of several integer values, which can be significantly compressed by any standard lossless compression algorithm such as Huffman coding-based compression schemes. Missing values in data are also replaced by a different value. The replacing values can be the mean or the median of the entire available data. For simplicity, we assign missing values with the data mean since data statistics are readily available. After cleansing missing values and masking the data, the data and their corresponding masks are partitioned into small blocks. §.§.§ Data Partitioning Machine learning models generally cannot handle raw scientific data, since each dimension of any data is large, which cannot fit into the system's memory. To overcome this issue, data are partitioned into small blocks prior to training or compression. Each dimension of a block is a power of two. Particularly, we restrict the block to having a height and width of 64 for the training process, as we observe that this setting achieves the best reconstruction quality. Moreover, a power of two in each block dimension makes the up-sampling and down-sampling efficient. No padding or trimming is required for the outputs, which saves additional computing power. However, the shapes of raw data are not always divisible by two, which is a requirement to have a block size of a power of 2. Then, data whose size is not a multiple of block size are padded. Padding is performed at the edges of each dimension. For Earth's simulation data, we cyclically replicate data values at one edge and concatenate them at the other end. For example, to pad the left edge of 2D data, values on the right edge are copied and appended to the opposite side. This padding pattern is especially helpful for treating continuous simulation data with periodical boundary conditions, e.g., climate modeling data. The partitioning technique mentioned above works well in general. However, as all partitioned blocks are discrete, the whole set of partitions does not include any transition from one block to its adjacent neighbors. To smooth out the boundary and make the transition from one block to another more accurate, an overlapping block partition technique is implemented <cit.>. Instead of making mutually exclusive blocks of data, adjacent blocks are partitioned in a way that they overlap with each other in a small area. In particular, assuming each block is of size 64 and there is an overlap of eight, the second block is created to contain the last eight values of the first block as well as the next 56 values. The data overlapping technique is only implemented for training data, whereas the discrete data partitioning technique without overlapping is used for testing and compression. 
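As an illustration of the partitioning just described, a minimal NumPy sketch is given below. It is our own simplification: the block size, overlap, and cyclic padding follow the description above, but the function names and the example field shape are hypothetical.

import numpy as np

def pad_cyclic(data, block):
    """Cyclically pad a 2D field so that both dimensions become multiples of `block`."""
    pad_h = (-data.shape[0]) % block
    pad_w = (-data.shape[1]) % block
    # mode="wrap" replicates values from the opposite edge, matching periodic boundaries.
    return np.pad(data, ((0, pad_h), (0, pad_w)), mode="wrap")

def partition(data, block=64, overlap=0):
    """Split a 2D field into blocks of size block x block.
    overlap=0 gives the discrete partitioning used for testing and compression;
    overlap=8 gives the overlapping partitioning used only for training."""
    data = pad_cyclic(data, block)
    stride = block - overlap
    blocks = []
    for i in range(0, data.shape[0] - block + 1, stride):
        for j in range(0, data.shape[1] - block + 1, stride):
            blocks.append(data[i:i + block, j:j + block])
    return np.stack(blocks)

# Hypothetical 2D snapshot, e.g. one climate field.
field = np.random.rand(1800, 3600).astype(np.float32)
train_blocks = partition(field, block=64, overlap=8)   # overlapping, for training
test_blocks = partition(field, block=256, overlap=0)   # discrete, for compression
print(train_blocks.shape, test_blocks.shape)

Only the non-overlapping variant needs to be inverted after decoding, which amounts to stitching the blocks back together in row-major order and discarding the padded margins.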
§.§ Objective Function §.§.§ Reconstruction Loss The reconstruction loss is the discrepancy between the reconstructed and original data. We minimize the L2 distance of the target and compressed data, i.e. l_recon(x, x̂) = ||x-x̂||_2. The minimization simply matches the compressed data to the original data as closely as possible. §.§.§ VQ commitment loss The commitment loss accounts for the difference between the quantized codebook vectors and outputs of the encoder. Since quantization distorts the data, decreasing the distance between the quantized vectors and the original data reduces the distortion. We impose an L2 distance constraint on the codebook vectors and their corresponding inputs. The commitment loss, l_q, is defined as in equation <ref>. l_q(z_e, z_q) = ||z_e-z_q||_2 = ||z_e-Q(z_e)||_2 Where z_e and z_q are outputs of the encoder and their corresponding quantization values, respectively. Overall, the model is trained to optimize the following objective L = λ_recon * mask * l_recon + λ_q * l_q where mask is a masking layer, which indicates which data points should be taken into account in optimization; λ_recon and λ_q are constant coefficients of the reconstruction and commitment losses, respectively. The constant λ_q is set to be 0.25 following the suggestion by <cit.>. The objective function in equation <ref> is acquired based on the assumption that quantization values are uniformly distributed. Uniform distribution leads to a removal of an additional KL term in the objective because the term becomes a constant with respect to encoder parameters <cit.>. §.§ Error-bounded Technique Reconstructed data from neural networks sometimes have large distortions from the original data. To counteract the large distortion of some reconstructed values, a straight-through technique is introduced. The straight-though technique classifies reconstructed values into two groups, predictable and unpredictable. Reconstructed data that meet the tolerance constraints are called predictable values. In other words, predictable data have error values less than or equal to a predefined threshold. Otherwise, they are unpredictable values. Unlike predictable values, which can be used directly as final reconstructed values, unpredictable values have errors that exceed the threshold. Thus, corresponding true values and their locations are saved separately on a file to replace unpredictable values during reconstruction. §.§ High-Order Singular Value Decomposition The Higher Order Singular Value Decomposition (HOSVD) is a generalization of the Singular Value Decomposition (SVD) to higher-order tensors. SVD is a factorization technique that decomposes a matrix into three separate matrices, each of these matrices represents different aspects of the Data. HOSVD extends this concept to 3D data or n-dimensional data. Just like SVD, HOSVD decomposes a high-dimensional tensor into a set of sub-tensors that capture a specific aspect of the original data. This decomposition is achieved by first re-organizing a tensor into a set of matrices and then applying SVD to each of those matrices. One of the advantages of HOSVD is that it preserves the mode-orthogonality property of the original tensor, resulting in a set of sub-tensors that are orthogonal along the mode axes. This property makes HOSVD particularly useful in applications such as image compression and feature extraction, where maintaining the structure of the original data is important. 
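Before stating the truncated HOSVD more formally below, the procedure just described (unfold the tensor along each mode, take an SVD of each unfolding, and project onto the leading singular vectors) can be sketched in a few lines of NumPy. This is our own illustration; it truncates to fixed per-mode ranks rather than to an error tolerance.

import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the remaining axes."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def mode_product(tensor, matrix, mode):
    """Mode-n product: contract `matrix` (rows index the new mode) with axis `mode`."""
    return np.moveaxis(np.tensordot(matrix, tensor, axes=(1, mode)), 0, mode)

def hosvd(tensor, ranks):
    """Truncated HOSVD: per-mode factor matrices U^(n) and the core tensor."""
    factors = []
    for mode, rank in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(u[:, :rank])           # leading left singular vectors of the mode-n unfolding
    core = tensor
    for mode, u in enumerate(factors):
        core = mode_product(core, u.T, mode)  # multiply by U^(n)^T along every mode
    return core, factors

def reconstruct(core, factors):
    out = core
    for mode, u in enumerate(factors):
        out = mode_product(out, u, mode)
    return out

x = np.random.rand(32, 32, 32)
core, factors = hosvd(x, ranks=(8, 8, 8))
rel_err = np.linalg.norm(x - reconstruct(core, factors)) / np.linalg.norm(x)
print(f"relative Frobenius error: {rel_err:.3f}")

Storing only the core tensor and the truncated factor matrices is what yields the compression; a tolerance-driven variant, as used for the HOSVD baseline in this work, would instead choose the per-mode ranks so that the discarded singular values stay within the prescribed error budget.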
The truncated HOSVD can be computed for a predefined tolerance through equation <ref>: {U^(1), …, U^(N)} = argmin_{U^(1), …, U^(N)} ‖𝒳 - 𝒢‖_F^2 , where the core tensor is 𝒮 = 𝒳×_1 U^(1)T×_2 U^(2)T×_3 ⋯×_N U^(N)T, the reconstruction is 𝒢 = 𝒮×_1 U^(1)×_2 U^(2)×_3 ⋯×_N U^(N), and U^(n) is an orthonormal matrix for each mode-n of 𝒳. In this formula, 𝒳 is the original higher-order tensor that we want to decompose, and 𝒢 is the tensor reconstructed using the truncated HOSVD. The U^(n) matrices represent the orthogonal bases for each mode-n of 𝒳, ×_n denotes the mode-n product of a tensor, and ‖·‖_F is the Frobenius norm. It is important to notice that the truncated HOSVD is a minimization problem: we seek a set of orthogonal matrices that best approximate the original tensor 𝒳 within a predefined tolerance. A larger tolerance allows a higher compression ratio and, correspondingly, a lower bit rate. The goal is to find the set of matrices that minimizes the Frobenius norm of the difference between 𝒳 and 𝒢, which is a measure of the distance between the two tensors. §.§ Metrics §.§.§ Peak signal-to-noise ratio (PSNR) This metric measures the performance of compression schemes. PSNR is defined via the mean squared error (MSE). MSE is given in equation <ref>: MSE(x,x̂) = 1/n||x-x̂||_2^2 where x and x̂ are the original and reconstructed data, respectively. PSNR is then defined as PSNR = 10*log_10(MAX_I^2/MSE) where MAX_I is the maximum value range of the input data. PSNR grows as MSE shrinks: when the error between input and output data is small, MSE is small, which leads to a large PSNR. Therefore, it is desirable to maximize PSNR for any compression model. §.§.§ Compression Ratio Compression ratio (CR) is the ratio between the sizes of the original data and their compressed latent representation. The compressed data of our model are the outputs of the quantizer in integer format. We define CR as CR = original_size/compressed_latent_size §.§.§ Bit rate Bit rate is a convenient way to represent the compression ratio. It measures the average number of bits used per data point in the compressed data. It is inversely proportional to the compression ratio and is defined in equation <ref>: bit_rate = data_type_size/CR where data_type_size is the size of the data type in bits. For instance, it is 32 for single-precision data or 64 for double-precision data. Thus, a small bit rate, or equivalently a large compression ratio, is the objective of any compression algorithm. §.§.§ Compression speed Compression and decompression speeds measure how quickly the data are processed. Both are defined as speed = original_size/computation_time. Speeds in this work are expressed in units of MB/s. § EXPERIMENTS §.§ Resource Availability This paper uses existing, publicly available data from SDRBench (<https://sdrbench.github.io/>) for benchmarking the performance of our model. For compression of real-world application data, the model compresses the High-Resolution Earth System Prediction (iHESP) data. The iHESP data have been deposited at <https://ihesp.github.io/archive/> and are publicly available as of the date of publication. All original code has been deposited at <https://github.com/hieutrungle/data-slim> and is publicly available as of the date of publication. Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request. §.§ Hardware All models are trained on GPU A100 compute nodes of Texas A&M High Performance Research Computing (TAMU HPRC). Each node is equipped with two Intel Xeon 6248R (Cascade Lake) CPUs (3.0 GHz, 24 cores each) and two NVIDIA A100 40GB GPUs.
After training, our models reconstruct data on different platforms, the first one comprises GPU A100 compute nodes and the second is a personal computer with 11th Gen Intel i5-11600K, 12-core, 4.9GHz, and NVIDIA GeForce RTX 3060 Ti. The compression performance on the latter platform, as will be discussed in section <ref>, shows that the model can also efficiently run on regular personal devices instead of powerful chips such as NVIDIA A100. §.§ Benchmark Data: SDRBench Our proposed models are initially tested on published scientific benchmark data, SDRBench <cit.>. This benchmark provides numerous simulation data from many different fields, ranging from electronic structures of atoms, molecules to weather, and cosmology. The benchmark is publicly available for different scientific purposes. Even though we focus on compression 2D data, a couple of 3D data sets are also being compressed to verify the possibility of generalizing our architecture to higher-dimension data. Table <ref> summarizes several data sets and some fields we use for our compression. A brief description of the data is as follows: * NYX data: The data describe simulation astrophysical reacting flows created using an adaptive mesh, cosmological hydrodynamics simulation code <cit.>. * CESM data: Both 2D and 3D CESM data are cloud properties acquired from a well-known climate simulation package <cit.>. As suggested by domain scientists, 3D CESM data should be treated as 2D for compression. This treatment improves both compression ratio and PSNR of reconstructed data <cit.>. The 3D CESM data include comprehensive attributes of cloud properties for many different altitudes, which can be viewed as many 2D data stacking on top of each other. Therefore, we use the 3D CESM data as a training set for CESM cloud data, whereas all snapshots of 2D CESM data are our testing data. §.§ High-Resolution Earth System Prediction (iHESP) Data The International Laboratory for High‐Resolution Earth System Prediction (iHESP)<cit.> was a project aiming to develop more advanced modeling frameworks for high-resolution multiscale Earth system predictions to improve the simulation and prediction of future changes in extreme events. iHESP also provides numerous global and regional high-resolution simulation data spanning hundreds of years. The global climate was simulated using different high-resolution configurations of CESM version 1.3 for atmosphere, land, ocean, and sea-ice. Meanwhile, regional data were generated from the ocean model ROMS (Regional Ocean Modelling System) with the atmospheric model WRF (Weather Research and Forecast model) using the CESM/CIME coupling infrastructure. All data are also publicly accessible. Among a large array of ocean properties provided by iHESP, sea surface temperature (SST) is one of the most important attributes for the ocean. The property is simulated over hundred years, which leads to a substantial amount of storage needed to store the data. However, the large amount of available data also enables us to leverage machine learning for compression. Basic information of SST data is presented in Table <ref>. The first dimension of the data represents the time evolution. The next two dimensions are the height and width of the data, respectively. General ocean information, such as simulation history and climate coefficients, are also included in the metadata of the data set. Latitudes and longitudes are also available to scale the data back to the global coordinate system when it is required. 
Data preprocessing is crucial for SST in both training and compression. Temperature values are only available where sea water is present, whereas undefined values are assigned to continents. In order to deal with missing values, a masking layer is created to differentiate between these regions. The data are split into two sets, a training set and a testing set. The training set contains roughly 100GB of SST data, while the testing set consists of temperature data of the last 120 consecutive months in the simulation. Data in the training set are partitioned using the overlapping technique, while we apply the discrete partitioning technique to the testing set. Both training and testing sets contain blocks of data of size 64. During compression, data are partitioned into blocks of size 256 for better resolution. § RESULTS AND DISCUSSION §.§ Compression of Benchmark Data §.§.§ 2D Data Visualization of the reconstruction of CESM CLDHGH data is presented in Figure <ref>. At a bit-rate of 0.22, our proposed model provides great visual results. The model preserves most of the detail of the data, especially in regions where values change gradually. However, at the boundaries of different regions, slight distortion can be observed. The distortion is caused by the sharp changes in adjacent values. The compression performance of our models on different data sets is compared to other compression models, namely HOSVD, SZ2.1 <cit.>, ZFP <cit.>, and AESZ <cit.>. Using HOSVD as our baseline, the PSNR of our proposed model is higher than that of the baseline model at all bit-rates (Figure <ref>). Moreover, Figure <ref> shows that our proposed model outperforms other compression schemes when bit-rates are below 0.40, which is equivalent to compression ratios greater than 80. At a very low bit-rate of 0.22, the reconstructed data of our model has a PSNR of 46.35 dB. This is an improvement over the hybrid AESZ model, which requires a bit-rate of around 0.41 to obtain the same PSNR. However, the PSNR of the proposed model does not follow the same trend as the compression performance of other compression models. Our trained model has a fixed set of parameters. To increase PSNR without training a different model, we apply the straight-through method to restrict the error bound of the reconstructed data. There is a possibility of training different models with larger latent variables and codebooks to obtain much higher PSNR at any given bit-rate. However, we found it unnecessary to exhaustively explore all possible combinations of neural networks to achieve higher PSNR at bit-rates higher than 1, since we aim at compressing data at bit-rates below 0.50. Our compression model is also leveraged to compress other 2D data. Compression performance on many different cloud data sets is illustrated in Figure <ref>. Since the CESM 3D CLOUD data should be treated as 2D data, as suggested by domain scientists <cit.>, its compression results are presented together with the other 2D data. It is worth mentioning that compression on all CESM cloud data uses the same model architecture with the exact same weights. Even when applying the model to these data, it obtains high PSNR while maintaining a very low bit-rate. The compression performance indicates that this particular model for CESM cloud data achieves good generalization. A compression performance comparison between our proposed model and HOSVD is presented in Appendix <ref>. Our proposed model outperforms HOSVD on all benchmark data.
More compression performance comparisons and results are presented in Appendix <ref> and <ref>. §.§.§ 3D Data The proposed model achieves reasonable compression on 3D benchmark data. As can be seen from Figure <ref>, at low bit-rates, our model surpasses SZ2.1 and ZFP in terms of performance. However, the quality of reconstruction from the hybrid model - AESZ - is higher than our model. One of many possible reasons for the weaker performance of our compressor is that our model is designed using primarily 2D convolution layers. Therefore, it does not have the extensive capability to learn data representation in 3D. On the other hand, when compressing 3D data, AESZ changes its machine learning architecture to 3D convolution neural networks. This change is one of the factors that boost the compression performance for volumetric data. §.§ Compression of iHESP Sea Surface Temperature (SST) Data Compression results for the testing set of high-resolution SST data show that the model can reconstruct data with high quality while maintaining a high compression ratio even for large-scale simulation data. As can be seen from Figure <ref>, after being compressed by a factor of 240, the reconstruction achieves a PSNR of 50.16. Moreover, in terms of visualization, it is unlikely to detect differences between the original and reconstructed data. However, there are some slightly noticeable distortion areas, especially along coastal lines between oceans and continents. Since data are only available for sea water, data points on continents are set to be a suitable constant. The assignment of the constant creates large variations in values along the edges of continents, which hinders the reconstruction ability of the model in those particular regions. Table <ref> presents the compression performance of the model on the whole testing data. The quality of reconstruction, PSNR, of each snapshot varies from 48.58 to 51.5. The reason for the differences in PSNR is that data distribution of each snapshot differs from time to time, which leads to the variation in quantization values from codebooks; hence changes in the reconstruction quality. Nevertheless, the deviation of the snapshots' PSNR does not vary far from the average of 50.04, which indicates that our model achieves stable performance over all data sequences. Compression and decompression speeds are also acceptable. Compression speeds on HPC nodes are presented in Table <ref>. On average, it takes around 45 seconds to complete either compression or decompression for 4GB data. On a personal computer with an NVIDIA 3060 Ti accelerator, compression and decompression both take around one and a half minutes on the same data. The small difference in the two platforms indicates that the compression pipeline is primarily bottlenecked by the data transferring between CPUs and GPUs. However, compression speed on the personal computer shows promising results that the model is also suitable for compression on small devices. § CONCLUSIONS Our proposed model shows to be effective in compressing floating-point scientific data, both on 2D benchmark data and large-scale high-resolution data. It achieves an extremely high compression ratio while preserving a high quality of reconstruction. The model outperforms other state-of-the-art models in some benchmark data sets, particularly 2D simulation data. However, there is room for further improvement. 
Other lossless coding schemes, such as arithmetic coding, which offers better compression performance, could be used to replace Huffman coding. The model can also be further improved by optimizing the rate loss term, which potentially leads to a better compression ratio. Furthermore, the compression pipeline of the proposed model can be optimized to improve the speed of compression. Since scientific data compression using neural networks is still in its early stages, there is substantial room for improvement in future research along this line. § AUTHOR CONTRIBUTIONS Conceptualization, Jian Tao and Hieu Le; Methodology, Hieu Le, Jian Tao, and Hernan Santos; Investigation, Hieu Le and Jian Tao; Writing – Original Draft, Hieu Le, Jian Tao, and Hernan Santos; Writing – Review & Editing, Hieu Le and Jian Tao; Funding Acquisition, Jian Tao; Resources, Jian Tao and Hieu Le; Supervision, Jian Tao. § DECLARATION OF INTERESTS The authors declare no competing interests. The authors would like to thank Dr. Chao Tian, Dr. Jaison Kurian, and Dr. Ping Chang from Texas A&M University for their suggestions and comments on this work. The authors gratefully acknowledge the helpful support provided by the School of Performance, Visualization and Fine Arts, Texas A&M High Performance Research Computing (HPRC) and Texas A&M Institute of Data Science (TAMIDS). Portions of this research were conducted with the advanced computing resources provided by Texas A&M High Performance Research Computing. This work is partially supported by the TAMIDS Career Initiation Fellow Program and NSF grants OAC-2112356, OAC-2019129, and OAC-1925764. § ADDITIONAL EXPERIMENTS §.§ Frequency Loss Term We conduct experiments that take into account a loss term in the frequency domain of the data. Both the inputs and the reconstruction are transformed into the frequency domain using the Fast Fourier Transform (FFT). The L2 distance between the two transformed sets is then calculated using Equation <ref>. This added term forces the model to directly minimize errors in both the high- and low-frequency components. l_fft(x, x̂) = ||FFT(x)-FFT(x̂)||_2, where x and x̂ are the input and reconstructed data, respectively. The FFT loss, l_fft, is added to the other losses to form the objective function of the model, as shown in Equation <ref>: L = λ_recon * mask * l_recon + λ_q * l_q + λ_fft * l_fft, where λ_recon, λ_q, and λ_fft are constant coefficients of the reconstruction loss, the commitment loss, and the FFT loss, respectively. §.§ Results Our model trained with the added FFT loss performs reasonably well on the iHESP sea surface temperature (SST) data set. At a compression ratio of 221.63, the model achieves a PSNR of 47.04 for the reconstruction. Despite this good reconstruction quality, its performance is surpassed by the model trained without the added FFT loss, as discussed in Section <ref>, which achieves an average PSNR of 50.04 at a compression ratio of 231.54. One possible explanation for the lower reconstruction quality is that there is a trade-off between the MSE terms in the time domain and the frequency domain during training. While the MSE loss term in the time domain learns the data representation in a particular region, the FFT loss term focuses on different regions. As a result, the quantitative result, PSNR, of the "FFT model" is outperformed by that of its counterpart.
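To make the frequency-domain term concrete, the following is a minimal PyTorch-style sketch of how l_fft and the combined objective L could be computed for a batch of 2D blocks. The use of a 2D FFT, the mean-squared masked reconstruction term, and the coefficient values are illustrative assumptions rather than the exact training configuration; the commitment loss l_q is taken as given from the quantizer.

import torch

def fft_loss(x, x_hat):
    # L2 distance between the Fourier transforms of the input and the
    # reconstruction; the complex difference is measured by its modulus.
    diff = torch.fft.fft2(x) - torch.fft.fft2(x_hat)
    return torch.sqrt(torch.sum(torch.abs(diff) ** 2))

def total_loss(x, x_hat, mask, l_q,
               lambda_recon=1.0, lambda_q=0.25, lambda_fft=0.1):
    # Masked reconstruction term: undefined (continent) points are excluded,
    # following the masking layer described in the preprocessing step.
    l_recon = torch.mean(mask * (x - x_hat) ** 2)
    return lambda_recon * l_recon + lambda_q * l_q + lambda_fft * fft_loss(x, x_hat)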
§ COMPRESSION PERFORMANCE COMPARISON Our proposed neural network model outperforms the purely mathematical model, HOSVD, on all benchmark data sets (a minimal sketch of this baseline is given at the end of the appendices). Figure <ref> shows that at low bit-rates, the proposed model achieves much better reconstruction quality than HOSVD. § ADDITIONAL VISUALIZATION RESULTS Additional results of the compression using our proposed model are provided in this section. Figures <ref>–<ref> show visualization results for compression on different data sets.
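For reference, the HOSVD baseline used in the comparison above can be implemented with plain NumPy: a data block is compressed by keeping only the leading left singular vectors of each mode-n unfolding and storing the resulting small core tensor together with the factor matrices. The block shape and retained ranks below are illustrative choices, not the truncation settings used for the reported results.

import numpy as np

def unfold(tensor, mode):
    # Mode-n unfolding: move axis `mode` to the front and flatten the rest.
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def mode_dot(tensor, matrix, mode):
    # Multiply `tensor` by `matrix` along axis `mode`.
    moved = np.moveaxis(tensor, mode, 0)
    shape = moved.shape
    result = matrix @ moved.reshape(shape[0], -1)
    return np.moveaxis(result.reshape((matrix.shape[0],) + shape[1:]), 0, mode)

def hosvd_compress(block, ranks):
    # Truncated HOSVD: one factor matrix per mode plus a small core tensor.
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(block, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = block
    for mode, u in enumerate(factors):
        core = mode_dot(core, u.T, mode)
    return core, factors

def hosvd_reconstruct(core, factors):
    out = core
    for mode, u in enumerate(factors):
        out = mode_dot(out, u, mode)
    return out

# Example on a random 64x64x64 block, keeping rank 8 per mode.
block = np.random.rand(64, 64, 64).astype(np.float32)
core, factors = hosvd_compress(block, ranks=(8, 8, 8))
approx = hosvd_reconstruct(core, factors)

The stored size of the core plus the factor matrices determines the baseline's bit-rate, which is how HOSVD points would be placed on the rate–distortion plots.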
Building Persuasive Robots with Social Power Strategies Mojgan Hashemian12, Marta Couto1, Samuel Mascarenhas12, Ana Paiva12, Pedro A. Santos12, Rui Prada12 1INESC-ID 2 Instituto Superior Técnico, Universidade de Lisboa August 12, 2023 Can social power endow social robots with the capacity to persuade? This paper represents our recent endeavor to design persuasive social robots. We have designed and run three different user studies to investigate the effectiveness of different bases of social power (inspired by French and Raven's theory) on people's compliance with the requests of social robots. The results show that robotic persuaders that exert social power (specifically from the expert, reward, and coercion bases) demonstrate an increased ability to influence humans. The first study provides a positive answer and shows that, under the same circumstances, people with different personalities prefer robots using a specific social power base. In addition, social rewards can be useful in persuading individuals. The second study suggests that by employing social power, social robots are capable of objectively persuading people to select a less desirable choice among others. Finally, the third study shows that the effect of power on persuasion does not decay over time and might strengthen under specific circumstances. Moreover, exerting stronger social power does not necessarily lead to higher persuasion. Overall, we argue that the results of these studies are relevant for designing human–robot-interaction scenarios, especially ones aiming at behavioral change. § INTRODUCTION Social power, i.e., the potential for social influence, is a pervasive social process in human–human interactions. However, despite its acknowledged role in social interaction, little attention has been paid to this phenomenon in human–robot interaction (HRI). Robotic agents are a prominent example of social agents that have recently attracted considerable interest, yet few studies have addressed social power in HRI. Recent advances in the field of social robotics raise the question of whether a social robot can be used as a persuasive agent. To date, only a few attempts, using different approaches, have been made to tackle this research question (reviewed in Section <ref>). In a nutshell, the objective of the present work is to endow intelligent agents with social power dynamics, with the aim of developing more persuasive agents. In this paper, we report our recent advancements and draw suggestions for future directions. Recent evidence suggests that social power (or, in short, power) is central to a multitude of social processes. Power acts as a heuristic solution to potential conflicts among group members and guides social perception and behavior <cit.>. Extensive research in the field of social psychology has shown that social power affects a wide variety of social and cognitive processes, such as stereotyping <cit.>, moral judgment <cit.>, as well as nonverbal behavior, like emotional displays <cit.>, and its inferences <cit.>. Additionally, recent evidence suggests that humans perceive computers as social agents, and people respond socially to computer actors (the Computers Are Social Actors [CASA] paradigm). In other words, humans treat computers similarly to how they treat other humans <cit.>.
In this sense, people apply similar social rules to their relationships with computers <cit.>. The same might apply to social power theories for computers. We argue that more research on social power dynamics is needed to create socially competent robots. The studies presented here contribute to this goal. We approach this goal from two perspectives, since any power-related relationship deals with two sides: the agent who exerts power (the actor) and the target. Specifically, we aim to design social robots capable of 1) processing social power dynamics and 2) representing power in their behavior, and 3) to investigate how they are perceived when using different power sources. To operationalize the expression of power, we propose to utilize persuasion as an application of social power. In sum, we aim to investigate the link between social power and persuasion in social robotics. Specifically, we would like to understand how to design more persuasive robots using social power. Furthermore, by operationalizing social power in the context of persuasion, we develop different persuasion studies based on three different power bases. We investigate the effectiveness of these persuasive strategies by designing and implementing three different user studies. This paper is organized as follows. Section <ref> introduces the terminology and definitions of social power and persuasion. A brief overview of recent advancements in the field of persuasive social robots is presented in Section <ref>. The remaining sections describe our recent endeavors in designing persuasive social robots using social power. Section <ref> details the general methodology of our studies, followed by Section <ref>, which details the results obtained in the first user study. Then, Section <ref> presents the findings of the second study, and Section <ref> lays out the details of the third study, followed by its findings. Finally, the last section presents our conclusions, focusing on two key themes that highlight the studies' limitations and providing recommendations for further research. § BACKGROUND   Social power is defined as the ability to influence others to do something they would not do in the absence of such power <cit.>. This study uses the well-known theory of social power introduced by French and Raven <cit.>, one of the most popular models of social power; many researchers have tested this model in the context of many applications. Further, its widely accepted conceptualizations have been examined in terms of applicability to various settings <cit.>. Moreover, the model was re-verified 30 years after its introduction, which offers a good basis for its reliability <cit.>. The model considers five principal bases for social power: reward, coercion, legitimacy, expertise, and reference. It is based on a typological analysis of the bases of power in interpersonal influence, making it immensely interesting for our second goal of designing persuasive agents. French and Raven <cit.> identified different bases of power, i.e., resources, that can change another person's beliefs, behaviors, and attitudes. Although other bases have been identified, the authors of <cit.> argue that these five bases are the most common ones. The definitions of the five bases are as follows: * Reward social power is realized when the target is willing to do what the actor requests in exchange for another action that, from the target's perspective, brings them value.
For example, a factory manager (the actor) promises an employee (the target) to double their salary if they increase production. * Coercive power stems from the ability of one individual to mediate punishments for another. For example, a factory manager (the actor) warns a worker (the target) that if they do not increase production, they will be fired. * Legitimate power stems from internalized values that give one individual the authority to influence another. For example, in a family, a parent (the actor) instructs their teenage child (the target) to be home before midnight. In case of a successful interaction, the teenager gets home earlier, since, generally, children recognize their parents' authority to enforce a curfew. * Expert power stems from one individual's perception of another's higher knowledge. For example, a physician (the actor) instructs a patient (the target) to follow a given medical prescription. * Referent power stems from liking, respect, and identification of one individual with another. For example, a person (the actor) asks a friend (the target) for help in studying for an upcoming exam. Another example of referent power could be the influence of celebrities or social media influencers on people's decisions. One way of exerting social power is to persuade people. Persuasion is defined as an attempt to change or shape a target's belief or behavior about a subject, an issue, or an object <cit.>. Hence, persuasion involves the study of attitudes and how to change them <cit.>. In other words, persuasion may be defined as attitude change or formation through information processing in response to a message about an intended object <cit.>. In this process, by transmitting a message, the communicator tries to convince others to change their attitude or behavior regarding an issue, in an atmosphere of free choice. In other words, the persuader conveys a message aimed at convincing other people to change their attitude; this is usually not forced upon people. Here, it is noteworthy that the communicator does not change people's minds directly. Rather, in the case of successful persuasion, the target changes their own attitudes. However, of course, persuasion is not always successful. Sometimes, the target can react negatively to persuasion attempts and actively avoid being persuaded. This phenomenon is called reactance <cit.>. Given this context, persuadability is not an individual characteristic but rather a “complex communication phenomenon." Persuasion is a key process in shaping and maintaining cooperation, social influence, and behavioral change <cit.>. It plays a critical role in human interaction and exchange <cit.>, and several factors contribute to its effectiveness, such as the personality of the actor (the source, or the one who performs the influence) and of the target (the one who is affected) <cit.>. It should be noted that there is contradictory evidence regarding personality. For instance, some psychologists believe that no personality trait is associated with persuasion, due to the complexity of human behavior <cit.>. To understand the process of being persuaded, it is essential to comprehend the target's perception of the persuader's characteristics (e.g., the target's internal cognitive process). In contrast, to understand the process of persuading, the actor's characteristics play a vital role (e.g., the actions of the actor). Social power and persuasion have a special bond that motivated us to investigate them jointly in the context of social robotics.
If one considers the definition of social power as the ability to influence others, the relationship between social power and persuasion is already established. In fact, they are almost inseparable. Additionally, persuasion is “an important medium of social power" <cit.>, which motivated us to investigate the effect of social power dynamics on social agents' persuasiveness and its potential effect on influencing others. As noted above, persuasion attempts to change or shape a target's belief or behavior about a subject, an issue, or an object <cit.>. In the field of social psychology, the link between power and persuasion has been the subject of investigation for a long time <cit.> (for a recent review, see <cit.>). Early results show that a powerful individual is more influential in persuading others <cit.>. However, it should be noted that the extent to which power is effective depends on the circumstances, which determine whether it causes short- or long-term influence and increases or decreases persuasion <cit.>. Specifically, some theories indicate a linear correlation between power and persuasion. In other words, when more power is exerted, higher persuasion is achieved. However, recent evidence argues that this is generally not true. Under specific circumstances, when higher power is exerted, reactance comes into play and decreases the chance of persuasion. This happens because the persuasiveness of messages depends on the psychological sense of power. Hence, a high-power communicator may lead to high or low persuasion depending on the power level of the audience (the persuasion target). For instance, evidence suggests that during mock interviews, when the power levels of the interviewer and the interviewee match, higher persuasion is achieved <cit.>. In other words, high-power communicators are more effective in persuading high-power audiences; similarly, low-power communicators are more effective in persuading low-power audiences. As an example, when both the interviewer and the interviewee are in a low-power state, the interviewer finds the target more persuasive. This contradicts earlier studies stating that interviewees with high power are more persuasive <cit.>. Recent findings suggest that this inconsistency in the results is due to the mismatch between the power levels. In other words, this inconsistency is observed when low-power people are interviewed by low-power interviewers. Hence, the persuasiveness of messages depends on the psychological sense of power of the two sides. Additionally, high-power people generate and place greater emphasis on information that conveys competence (e.g., by stressing skillfulness and intelligence). In contrast, a low-power state leads to more warmth, i.e., low-power communicators generate messages with more warmth, for instance, by stressing friendliness and trustworthiness <cit.>. This motivated us to investigate different persuasive strategies in terms of information, competence, and warmth (discussed in the following sections). Having discussed the terminology and the link between social power and persuasion, in the next section we discuss recent advancements in the field of persuasive social robots in HRI. § RELATED WORK   To date, a considerable amount of literature has been published on persuasive robots, but few studies have considered social power in persuasion. Several lines of evidence suggest that robots can be used as persuasive interlocutors.
Currently, much of the existing literature pays particular attention to behavioral strategies and nonverbal cues, either social (such as mimicry <cit.>) or physical (such as gender <cit.>, embodiment <cit.>, and gaze <cit.>). Additionally, previous studies reveal several factors associated with the ability of an individual to persuade others. These factors include verbal and nonverbal behaviors of the individual, the dynamics of social interaction, and psychological and societal factors, such as social roles <cit.>. We would like to highlight that so far, little attention has been paid to the importance of message strategy, that is, the way a robot phrases a request to gain higher compliance. We briefly review these studies in this section. In a recent study <cit.>, ten multi-modal persuasive strategies (direct request, cooperation, critique, threat, deceit, liking, logic, affect, exclusivity, and authority appeal) were selected and coded verbally; these were combined with specific gestures validated in a pilot study. A study was conducted with 200 people, who played the jelly bean game (which involves visually estimating the number of specific jelly beans in a bottle). Prior to their guessing, the robots gave their suggestions and attempted to influence the users' decision. The task was performed with two NAO robots, and unique strategies were randomly assigned to each robot. The results show that the affect and logic strategies gained the most compliance. A follow-up study <cit.> using the same game further investigates the effect of the affect and logic strategies. This study includes a control condition. Specifically, a robot with no strategy was added as a control condition and compared with another robot that was equipped with a persuasive strategy. The control robot stated neutral messages (e.g., “There are x number of jelly beans in the jar") and used neutral gestures (standing). The robot equipped with a persuasive strategy used verbal cues to persuade the user (e.g., in the emotional condition, the robot stated, “It would make me happy if you use my guess of x beans in the jar," and in the logic condition, the robot stated, “My computer vision system can detect x number of beans in the jar."). The results indicate that the emotional strategy was more persuasive than the logic and control conditions. No statistically significant difference was found between the logic and the control condition. Another interesting work investigates the effect of the foot-in-the-door (FITD) technique, which starts with a small or moderate request that a person accepts and then proceeds to get the person to agree to a larger request <cit.>. To be more specific, the robot attempts to persuade the user using the sequential-request strategy, starting from an easy request. The authors argue that persuasiveness might depend on the performance and credibility of robots. Bearing this in mind, the authors ran a user study with 44 people in four conditions: 2 (robot performance: helpful vs. unhelpful) × 2 (message strategy: direct request vs. foot-in-the-door). The results indicate that this technique can be used by robots to persuade human users. However, the persuasion effect was independent of the robot's expertise and credibility. Using a similar approach, in <cit.>, the authors investigate the effect of incremental presentation of information on the persuasiveness of social robots.
In a between-subjects study with two conditions, incremental and non-incremental information presentation, the NAO robot tried to persuade the users to do a higher number of tasks (ten simple tasks in total). The tasks used in the two conditions were the same. In the non-incremental condition, the information about all the tasks was given at once, while in the incremental condition, the participants received the information when the next task was about to start. The results did not yield any significant differences regarding the number of tasks completed or the likeability of the robot. However, the participants were persuaded to stay longer to do the tasks after they had intended to leave. In addition, Andrist et al. <cit.> studied the effects of rhetorical ability in expertise communication by informational robots, using psychological and linguistic theories. They ran a study with 44 participants, using two Lego Mindstorms robots in four conditions, expressing four types of expertise by means of low/high practical knowledge and low/high rhetorical ability. The robots employed linguistic cues in their speech to convey expertise effectively in order to raise trust and gain compliance. Each robot expressed different levels of expertise and rhetorical ability depending on the condition. To express linguistic ability, the robot used any one of the following five linguistic cues: goodwill, prior expertise, organization, metaphor, and fluency. The results indicate that speech using linguistic cues was more effective than speech relying on practical knowledge and simple facts. Thus, an increase in linguistic cues leads to higher persuasion. In sum, although some research studies have already explored how robots can be more persuasive, we aim to study how social power can also be used by social robots as a persuasion mechanism. Our approach to this research question is discussed in the following sections. § METHODOLOGY   We designed and performed three user studies to investigate how different bases of social power contribute to the persuasiveness of social robots. In Study 1, we programmed two robots using either expert or reward social power in an adversarial setting. In other words, we programmed one robot to express expertise by giving information to the users. Additionally, as a specific instance of reward, we used social rewards (“telling a joke" by the other robot). This study is further discussed in Section <ref>. In Study 2, a single robot used two different strategies to persuade the user (reward and coercion), and the results were compared to those of the control condition. Within the three conditions (reward, coercion, and control), the robot attempted to persuade the users to select a less desirable choice among others. Section <ref> presents this second study. Last, in Study 3, a robot used one power strategy (reward) but with different strengths, and the obtained results were compared with those of two control conditions: with and without the presence of any robot. In this study, the persuasion attempt was repeated over a series of interactions. Details of this study are presented in Section <ref>. Overall, the results of the three user studies support the view that social power (in particular the reward, coercion, and expert bases) endows social robots with persuasiveness and that different persuasive strategies can be perceived and preferred differently depending on users' profiles and the persuasion context.
§ STUDY 1 The first study is designed to achieve two main goals: understanding how social power makes robots more persuasive and learning how different sources of social power lead to different user perceptions. (Fig. <ref> represents the setup of the study.) In particular, we aim to investigate the effect of reward and expert power strategies on the persuasiveness of social robots. With this aim, to operationalize the persuasive attitudes of the robots, we employ two strategies inspired by two different social power bases, i.e., reward and expertise <cit.>. That is to say, we design persuasive strategies inspired by these two sources of power, which from now on we refer to as the reward/expert persuasive strategies. In so doing, we assign the role of the actor to the robots and investigate their persuasiveness based on the specific power strategy in use. We built the reward strategy using social rewards. As a matter of fact, social interaction is rewarding for social species and can drive an individual's behavior <cit.>. Assuming the CASA paradigm <cit.>, robots are perceived as social beings; hence, social rewards from robots would positively affect users' mental systems in a similar way <cit.>. In this context, the results in <cit.> show that among children and adolescents, tangible social reward has stronger incentive power than monetary reward. In addition, we argue that social rewards, unlike material rewards, could be unlimited and are, therefore, always available. The concept of using social reward is not new and has already been used in a number of recent studies. For instance, positive facial expressions, such as smiles and admiration, have been used in prior studies targeting children or adolescents <cit.>. Inspired by such investigations of human–human interaction, recent studies in HRI have investigated the role of social rewards. For instance, in <cit.>, the authors investigate the relationship between the effects of social rewards and offline improvements in motor skills. The results show that people who received the social reward performed better in the sequential finger-tapping task and that a higher degree of satisfaction with the robot's speech was achieved when social rewards were applied. In <cit.>, social feedback was observed to have a stronger effect than factual feedback in persuading human users. In this study, we use “telling a joke” as a social reward. Recently, researchers have shown an increased interest in humor in human–computer interaction (HCI) and HRI. Previous studies have investigated the concept of humor and telling jokes using computers or robots <cit.>. Overall, these studies indicate that humor and jokes can improve the relationship and positive affect. Hence, we argue that telling a joke would be rewarding in a similar manner as other social rewards. According to the elaboration likelihood model (ELM) <cit.>, there are two major routes to persuasion: the central route, in which the persuasive message is relevant to the persuadee and the quality of the arguments has an influence on attitudes, and the peripheral route, in which the persuasive messages are less relevant to the persuadee and the expertise of the source influences the change in attitudes. Considering this dual process of persuasion, humor is considered to persuade via the peripheral route <cit.>. The other power base that we employ in this experiment is expert social power.
Although robots, in general, hold great potential as informational assistants, they must use expert language to shape how helpful they are perceived to be by human users <cit.>. This is because, as informational assistants, they are expected to be experts in their area of specialty <cit.>. In this direction, a number of recent studies have investigated different factors that can affect the representation of expertise by informational social robots. For instance, in <cit.>, the authors investigate the degree to which an expert robot needs to present information depending on the expertise level of the user. Specifically, they state that a robot presenting too much information to a person who is an expert in that field might seem rude, while presenting too little information to a person who has no clue about a subject might be confusing or misleading. In a different study <cit.>, the same authors suggest that softening the conversation by using expressions such as “I think,” “maybe,” and so forth might lead to a more polite robot. Further, Andrist et al. <cit.> claim that by using simple facts and rhetorical cues, robots can be perceived as experts in the targeted field. In this study, we use a number of discrete facts and goodwill rhetorical abilities to design an expert social robot. Since the detailed design and main results of this study are already reported in <cit.>, here we present only a summary of the findings. In total, 51 people (17 females, 34 males) participated in the experiment voluntarily. The participants' ages ranged from 20 to 55 years, with a mean of 29.45 ± 6.4. The study follows a within-subject design. Two EMYS robots (one called Emys and the other Gleen) promote one coffee capsule each. We add a third coffee option to control for random choice. One robot attempts to persuade the users using expert power by presenting information about the coffee (we call it the Expert). The other robot tries to persuade the users by giving them social rewards, i.e., by telling a joke (we refer to this robot as the Joker). The third coffee is not promoted by any of the robots and represents the control condition. In sum, in a competitive scenario, one robot plays the role of an expert and the other tries to influence the user by giving them a social reward. They both try to persuade the user to select their own coffee brand. We measure participants' personality and coffee drinking habit (CDH), i.e., how much they like coffee and how much coffee they drink, before the interaction. Then, we record their coffee selection (which coffee they select), robot preference (which robot they prefer to interact with in general), perceived persuasiveness of the robots (how persuasive they find each robot with a specific power strategy), robot perception (how they perceive each robot in terms of warmth, competence, and discomfort, using the Robotic Social Attribute Scale [RoSAS] questionnaire <cit.>), and future compliance (FC) toward the robot (the likelihood of following the robot's suggestions in the future). In this study, we construct the following hypotheses: * H1: The expert persuasive strategy would be more effective than reward. * H2: The robot using a reward power strategy would be preferred more than the one using an expert strategy. * H3: Reward increases the warmth score of the robot and expertise increases the competence score. * H4: The robot using an expert strategy would be perceived as more persuasive. * H5: People would be more compliant with the expert robot in the near future.
* H6: Perceived persuasiveness of the expert or reward strategy depends on participants' personality traits. §.§ Results Since the study was performed in English with non-native English speakers, we ensured that the participants understood the robots' dialogues. We asked them to rate on a 5-point Likert scale the extent to which they understood each robot's speech (“Please specify how much you perceived EMYS/Gleen's speech: I understood Emys/Gleen … [1] never–[5] all the time”). The results indicated that the majority of the participants (31 out of 51) fully understood the robots' speech (Option 4/5), 12 people understood moderately (Option 3), a minority (8) had a basic understanding (Option 2), and no one reported never understanding the robots (Option 1). Overall, the robots received average comprehension ratings of 4.16 and 4.25 out of 5. The results of the t-test indicated that these two scores are significantly higher than 3, the mid-score (Emys: M = 4.16, SD = .967, S.E. = .135, t[50] = 8.544, p = .000; Gleen: M = 4.25, SD = .913, S.E. = .128, t[50] = 9.815, p = .000). The scores of the RoSAS questionnaire revealed that the Joker succeeded in presenting itself as more friendly, since it scored significantly higher on warmth (Z = -4.409, p = .000). Conversely, the Expert succeeded in presenting itself as more knowledgeable, skilled, and informative, since it scored higher on competence (Z = -4.286, p = .000). Also, since neither of the two robots performed any manipulation on the discomfort dimension, no differences were observed between them in this regard (Z = -.199, p = .842). In other words, in this design, neither robot showed signals of aggressiveness, danger, etc., on the discomfort dimension but rather expressed either competence (knowledgeable, intelligent, etc.) or warmth (sociable, kind, etc.). These findings confirm the effect of our manipulations on the participants. Considering the perceived persuasiveness of the two robots, on the one hand, no statistically significant differences were found between the Joker and the Expert (Z = -.944, p = .345); on the other hand, the third coffee (the control option) was selected much less frequently (by 8 out of 51 people). Hence, the robots were able to perform some persuasion. Together, these two findings indicate that the two power strategies are effective and that the two robots were able to persuade people, although they were perceived differently with regard to competence and warmth. It should be noted that the mean persuasiveness scores of the two robots are higher than the medium score (3.4 for the Joker and 3.6 for the Expert), which endorses their ability to persuade and influence the participants. The results also show that there is a correlation between the perceived persuasiveness of the Expert and the extroversion dimension of personality (R^2=0.136, p = 0.008). The positive correlation indicates that more extroverted people are more likely to be persuaded by the Expert robot. However, no other correlation was found regarding other personality dimensions or the perceived persuasiveness of the Joker. Although previous studies have found positive correlations between persuasive strategies and agreeableness as well as emotional stability <cit.>, we could not confirm them in this study. This might be attributed to our limited sample size or to the nature of the persuasion task.
Moreover, we hypothesized that personal characteristics might play a vital role in being persuaded by one specific type of power strategy, but this difference did not stand out in the results. A potential reason for this might be a number of hidden factors other than what we measured in this study, such as the need for cognition <cit.>, which might have influenced the participants' decision-making. Need for cognition is a personality factor that taps individual differences in the tendency to enjoy thinking and to engage in abstract deliberation. Since processing the expert argumentation requires a higher level of thinking, people ranking higher on this factor might prefer the Expert. Finally, we asked the participants to indicate to what extent they are willing to follow the suggestion of each robot in the future (FC). The results showed that people were more eager to follow the Expert's suggestions even though no significant differences between the persuasiveness of the two robots were found. This could be due to the expertise of the Expert and the fact that it made more logical and rational statements. In other words, we can infer that people found the Expert more reliable in a future context. In spite of this, the Joker was perceived as equally persuasive as the Expert. Thus, we can conclude that the Joker's persuasion was based on the effect of the reward strategy, not the information. To put it another way, some people can be easily persuaded by means of rewards. Further, this finding also highlights the role of the “reward power strategy” in persuading people: although people found the Expert more trustworthy to be followed in the future, the Joker was equally successful in persuading them to choose its coffee. In addition, we would like to highlight that the obtained results do not depend on the CDH of the participants. Results of Chi-square tests revealed that no significant association exists between CDH and robot preference (LikingCoffee: X^2(4)= 5.180, p= .269; CoffeeTimes: X^2(4)=7.604, p=.107) or coffee selection (LikingCoffee: X^2(4)= 1.958, p= .743; CoffeeTimes: X^2(4)=3.942, p=.414). Moreover, no association exists between CDH and satisfaction or the perceived social power of either robot. One might argue that people who like coffee might be more sensitive to its quality and would opt for the coffee advertised by the Expert, but the results did not confirm this. Another potential reason for this might be the Expert's arguments, which address the flavor. Hence, people who do not like coffee flavor might opt for one of the other two options instead. We therefore suggest that future studies consider other characteristics of coffee apart from taste and flavor. Furthermore, we asked the participants to indicate on a 5-point Likert scale how they perceived the robot as having social power. Regrettably, the question was hard to understand for most of the participants, and the experimenter was asked a number of times about the meaning of “social power.” More specifically, the social power of the robots might not be adequately reflected by a single question. Thus, we skipped this question and excluded it from the analysis.
As discussed earlier, evidence suggests that when there is a power match between the persuader and the persuadee, i.e., when both sides have high power or both have low power, there is a higher chance of gaining compliance <cit.>. Hence, apart from the social power question, we asked the participants to fill out the personal sense of power (PSP) questionnaire <cit.>. Our results suggest that for the Expert, the difference in persuasiveness between the two groups was not statistically significant (t[49] = -.095, p = .925); this conclusion was derived from the persuasiveness scores (power match exists: M = 3.62, S.E.= .224; no power match exists: M = 3.59, S.E. = .204). Similarly, a higher mean score of persuasiveness was observed for the Joker when there was a power match (M = 3.59, S.E. = .230) vs. no power match (M = 3.23, S.E = .237). However, this difference was not statistically significant (t[49] = -1.071, p = .290). As the social power of the robots was not measured reliably in this study, we directly checked whether the PSP measure is associated with persuasion and found only a weak correlation between PSP and the Joker's persuasiveness (r = .390, p = .005). Further, to investigate whether the personal sense of power has any effect on the perception of each robot's power, we checked for a potential correlation between these two factors. The results do not show any correlation between PSP and social power (Expert: r = .111, p = .436; Joker: r = .123, p = .391). Additionally, we asked the participants to rate their satisfaction regarding the coffee they opted for. We expected to observe higher satisfaction in participants with a higher level of social power (based on the PSP scores). However, in contrast to previous evidence <cit.>, we could not verify this relationship between satisfaction and personal sense of power (r = .205, p = .149). We split the participants by the median score of PSP and compared the two groups regarding the reported satisfaction. We could not find any difference between the two groups. Although the average satisfaction was higher in the group of high-power people (low power: M = 3.91, S.E. = .173; high power: M = 4.17, p = .172), the difference was not statistically significant. §.§ Qualitative Analysis At the end of the questionnaire, we added an open-ended question asking the participants why they had selected the specific coffee. Answering this question was not mandatory, but only seven people skipped it. Qualitative analysis of the obtained contextual data provides new insight into the preceding results. The qualitative analysis was conducted to better understand the participants' motivation behind their decisions. We used a combination of the conventional and the summative approach proposed by <cit.> to investigate how people experienced the interaction with the robots and made decisions under the different circumstances and conditions of the study. First, regarding the people who selected the third coffee (i.e., the control coffee), most of them selected this option out of empathy toward the robots. Specifically, four out of eight indicated sympathy toward the robots in their decision-making. For instance, “If I had selected one robot, the other robot would get [sic] sad.
I selected the middle coffee not to make any of them sad” or “I did not want to break their heart, so selected the other coffee.” Additionally, two people selected the middle coffee out of curiosity: “The middle coffee seems mysterious to me, so I chose it because I was curious” and “I hoped robots would comment my selection regardless of my choice.” Regarding the other two, one was compelled by both robots and wanted both the joke and the well-advertised coffee, and the other was non-compliant and stated, “I don't like advertisement[,] and they were advertising their coffee. So, I selected the one that was not advertised.” From these statements, we can infer that the selection of this group was not random; rather, this coffee was selected due to the equal persuasiveness of the two strategies (except for the non-compliant and the curious users). Among the 22 people who selected the Joker's coffee, 20 answered the open-ended question. Overall, only one participant mentioned that he was not interested in the coffee, as he had had a cup of coffee right before the experiment, so he selected the Joker. Moreover, 11 people indicated, in short, that they wanted to hear the joke. The rest provided more information and stated that they selected this option because of the Joker's social behavior and characteristics, such as its personality and sense of humor or its joy and emotional interaction. Two participants provided very interesting information: “Just because of an emotional decision instead of applying my rational mind”; “I wanted to hear the coded joke. If the two would be [sic] humans[,] I would definitely not have chosen Emys (Joker).” These statements highlight the role of the Joker's strategy in persuasion by social robots. In sum, owing to either the joke or its funny attitude (as an instance of social reward), the Joker could successfully influence a number of participants and lead them to select its coffee. Finally, among the 21 people who selected the Expert's coffee, 12 answered the open-ended question. These people can be categorized into two groups based on their answers: those focused on the coffee characteristics (such as good or bad, origin, roasted or not, ingredients) and those focused on the robot's behavior (highlighting facts, being knowledgeable, and displaying seriousness). For instance, the participants made the following comments: “He described it very well,” “Emys expressed why he thought his coffee was better,” and, more interestingly, “Emys looked like an expert.” Altogether, these statements show that the Expert could persuade the users by using its expert social power strategy and influenced them to select its option among the others. Furthermore, we asked the people who selected the Joker to rate the joke out of 5. Two people did not find the joke funny; however, this did not affect their satisfaction negatively (one was moderately satisfied [4] and the other was somewhat satisfied [3]). In fact, one participant who found the joke only a bit funny reported the lowest satisfaction score (2 out of 5). He is the only participant whom we can suspect of being unsatisfied due to receiving an unfunny joke. Finally, we would like to highlight that there was not always agreement between the robot that the users followed and the robot that they preferred (refer to Table <ref> for more details).
Some people followed the advice of one robot but preferred to interact with the other: ten people who selected the Expert's coffee indicated a higher preference toward interacting with the Joker, and six people stated they preferred interacting with the Expert but selected the Joker's coffee. Only 25 people preferred to interact with the robot promoting the coffee they selected. Regarding the former group, it is not unlikely that the participants selected the Expert's coffee because they found it a better choice but preferred to interact with the Joker because it was more friendly and funny. Thus, they were persuaded objectively by the Expert's expertise and compelled subjectively by the Joker's social reward (funny and lively interaction). The latter group selected the social reward and focused on hearing a joke but claimed that they preferred the Expert. These people were persuaded objectively by receiving the joke as the social reward but were attracted subjectively by the expertise of the Expert. These findings support the view that the two robots were persuasive, either in one way (subjectively or objectively) or in both ways. §.§ Summary of Findings In summary, in Study 1, we investigated the influence of two different persuasive strategies in an adversarial setting. To do so, we performed a user study involving an actual decision-making process within a persuasive setting. Our main goal was to examine the effect of different persuasive strategies that are based on social power. The second purpose of the study was to investigate the perception of people having different personalities regarding such persuasive strategies. To the best of our knowledge, the use of social power as a persuasive strategy had not been explored before this study. Altogether, the results of this study provide important insights into persuasion in HRI. First, this study identified two different persuasive strategies that were selected and preferred equally. However, these strategies led to different perceptions of the robots depending on the personal characteristics of each user; for instance, their personality affects which strategies are deemed to be more effective. The second major finding was that using social reward is effective. To be more specific, in the two persuasive settings, the user was ultimately rewarded by receiving a coffee capsule, whether they selected one of the two promoted coffees or neither of them. However, selecting the Joker's coffee yielded another dimension of reward, hearing the joke, as an example of a social reward. Undoubtedly, social rewards are cheaper than monetary ones and are easily applicable to any type of social robot. The result of this study shows not only their effectiveness but also their applicability in persuasion. These findings suggest that, in general, robots are capable of persuading people; however, personal differences should be taken into account. Further, it should be noted that only two bases of power have been tested here, and the rest are to be examined in future attempts. The result of the current study showed that the two strategies used were preferred equally; however, it should be noted that different power strategies might lead to different outcomes. Also, the level of power exerted might influence the results. For example, a stronger reward strategy might be preferred more. In other words, the comparability of such power strategies is inherently problematic, since the power of an implemented strategy depends to a large extent on its concrete implementation.
In sum, the key contributions of this study are 1) testing the effectiveness of persuasive strategies in a real-choice task and 2) having a within-subject design that allows for testing competitive persuasion. This study tested the effect of persuasion in an incentivized real-choice task, which increases the external validity of the design and has implications for robotic persuasion in a consumer choice setting. In the task, the participants chose a coffee capsule after interacting with the persuasive robots, which is an advancement over the hypothetical choices used in other research studies on robotic persuasiveness. In addition to the real choice, the study also measured participants' willingness to follow the robots' advice in the future, which can potentially reveal a difference between short-term and long-term persuasive results. Moreover, the study examined the participants' perception of the robots' warmth and competence, which offered opportunities to understand the mechanism of how social power strategies affect persuasion. In addition, the study used a within-subject design in which two robots adopting two different strategies interact with the human subject at the same time. Given the sample size, the within-subject setting increased statistical power for comparisons between the strategies. Further, this design also provided a unique opportunity to test the effectiveness of the persuasive strategies in a competitive persuasive setting. §.§ Limitations and Lessons Learned for Designing a Future Study After analyzing the overall results, we acknowledge a number of limitations of the study, which can be helpful in our future research. First, the specific design of this study only allowed for a comparison of the relative effectiveness of the two strategies. With the current design, we cannot assess the absolute effectiveness of each strategy, and the two may have worked together to produce the observed effect. It might also be fruitful to conduct a study to determine the effectiveness of each strategy separately and to directly measure the persuasiveness of the robots. Second, we attempted to measure perceived social power using a single task-specific question, but no correlation was found in the collected data regarding this question. We argue that information about the power level of each side would give us a better understanding of the interaction. In general, individuals have been categorized into two psychological states: high power vs. low power. On the other hand, individuals engaged in a negotiation are assigned two roles: communicators (those who deliver a message) vs. audience (those who receive a message, or the targets) <cit.>. The power level of each side, either the communicator or the audience, affects the result of the persuasion attempt. Thus, it is necessary to measure social power levels more thoroughly than we did here in order to provide evidence supporting that participants' perception of the social power of the robots increased because of the two strategies. In other words, manipulating humor and expertise might not guarantee the achievement of social power. We need further investigation to better comprehend how the participants perceived the robots, for instance, whether the expertise gave expert social power to the robot. As stated earlier, to measure the power level of each robot, we used a single item in the questionnaire. However, measuring the social power of the robots in this way might not lead to reliable findings.
One potential reason for this might be misunderstanding in the interpretation of the expression “social power”; this interpretation must be the same for all the participants. Similarly, we need to verify whether the joke gave reward power to the robot. Hence, in a future step, we would like to apply a standard questionnaire of social power. Specifically, a future study is required to investigate whether telling jokes counts as a social reward. To be more specific, telling a joke might promote liking toward the source of humor and hence induce referent power. Thus, self-reports must be used carefully to determine whether the participants perceived the joke as a reward. As mentioned earlier, as the control condition we included a third coffee to decrease the probability of randomly selecting one of the two strategies. Simply put, having three options decreases the probability of a random selection to 33% (compared to 50% in the case of two options). A more suitable control condition could have been “absence of power strategy” or “neutral product presentation”; however, in the specific design, this might have led to a silent robot or a less intelligent robot, creating a bias toward the other robot. Hence, we introduced a condition that excludes the presence of both the robot and the strategy. As a consequence, the responses observed in the control condition may not be interpreted as a result of the absence of a power strategy only. These limitations motivated us to design another scenario in which only one robot interacts with people. §.§ Recommendations for Further Research Apart from these modifications, we suggest other avenues for potential future research. A potential question raised by this study is whether combining the two strategies would lead to higher persuasion; this is worth investigating in the future. Recent studies have found a correlation between the ostensible gender of the robot and perceived persuasiveness <cit.>. Although EMYS does not clearly appear to be either female or male, the two voices we assigned to the robots were both male voices. A potential future work worth performing is using voices with different genders to see whether their combination with persuasive strategies leads to a stronger effect. Moreover, in this study we did not measure trust toward the robots. Investigating the potential interrelation between trust and the persuasiveness of the robot would be of great value. Further, when people are subjected to strong persuasive attempts, they may respond negatively toward the attempt, a behavior that is known as psychological reactance <cit.>. A future study could assess this by measuring the perceived strength of the robot's persuasive message from the perspective of the participants. Also, participants' culture and background may affect how they perceive the over-the-top language used by the Expert. Thus, a further study could also assess the effect of subjects' trust regarding such arguments <cit.>. One important question regarding these kinds of persuasive social interactions with robots is related to the long-term effects of several persuasive attempts, which might be a fruitful avenue for further research. In addition, future studies should be performed with a more homogeneous (gender-balanced) sample. Further research should use more specific questions about the perception of the joke, asking whether the subjects perceive it as a positive reward and whether they really perceive the other robot as an expert.
Furthermore, a social power scale is required to implicitly measure the perceived level of social power; alternatively, validating the dialogues using experts'/judges' criteria may resolve this issue. We videotaped the sessions using two cameras to record the participants' behavioral and non-verbal responses in addition to the self-report measures (e.g., perceived persuasiveness) used in the final discussion of the results; analyzing these recordings would be a fruitful area for further work. Further research could usefully explore the participants' social responses toward the robots' persuasive messages, using participants' behavioral cues and body language, facial expressions, gestures, and postures, to further investigate their decision-making process while facing the two power strategies. A more balanced discussion could be achieved by giving more importance to the behavioral results (i.e., the actual decisions that participants made) and by considering the self-report measures just as a source of hypotheses to be tested in future studies. In a nutshell, according to French and Raven's theory, power arises from different sources. In this study, we equipped robots with two different sources, i.e., reward and expertise, and designed them so as to generate persuasive strategies based on their power sources. Overall, this study shows that using different sources of power, and hence different power strategies, appears to be an equally viable solution for designing social robots capable of persuading people. Moreover, we argue that the results of this study signify that social rewards can be effective at persuading users and that, unlike material rewards, they are unlimited and always available. Although the remaining power bases similarly lead to corresponding persuasive strategies, they are left as future work. The two strategies were selected mainly because they were the most applicable in the context of our designed task. § STUDY 2 The previous study showed that social power can be used as a persuasive strategy for social robots. However, with the specific design of the study, we could only infer which robot is preferred over the other. To be more specific, Study 1 compared the effectiveness of two forms of social influencing strategies (reward vs. expertise); in addition, it would be interesting to (separately) show the effectiveness of each of the two. Additionally, as we discussed earlier, despite the acknowledged role of message strategies in persuasion, little is known about how social robots' attempts may achieve higher persuasion using such strategies. Earlier studies on HCI examined compliance-gaining behavior (CGB) in interpersonal persuasion. Evidence shows that the four strategies of emotion, logic, reward, and punishment are effective for persuasion in computer-mediated communication (CMC) <cit.>. Further, in the field of HRI, previous research has established that two of these strategies, emotion and logic, lead to higher persuasion <cit.>. However, less is known about the reward and punishment strategies. Hence, we designed another study to further investigate the persuasiveness of social robots using social power strategies. In this design, a single robot attempts to persuade the users in two different conditions, which are compared to a control condition (three conditions in total). In one condition, the robot aims at persuading the users by giving them a reward; in the other, it tries to persuade them by punishing them. In the control condition, the robot does not use any strategy. (Fig. <ref> depicts the setup of the second study.)
Specifically, in Study 2, the robot presents two coffee capsules hidden in two boxes and labeled with the star classification based on reviews of other people. In the control condition, the robot asks the participant to select any of the two coffees that they prefer. In the reward condition, it offers a pen to the participant if they select the lower-rated coffee. In the coercion condition, it first gifts the participant a pen and then asks them to return it if they select the higher-ranked coffee. In Study 2, we would like to investigate how different levels of reward and punishment may affect the persuasion. To investigate the effect of loss on the persuasiveness of the robot, we consider two different coffee ratings. In one scenario, we assign a rating of 3.8* vs. 4.8*, and to resemble a higher loss, we assign a rating of 3* vs. 4.8*. To be more specific, selecting a 3* coffee has a higher probability of receiving a bad coffee, i.e., a loss in achieving a better coffee. Overall, considering two different levels and three conditions, we designed a 2×3 between-subjects study to investigate these effects (we call each sub-group of the study as follows: 3*/3.8* ×C/R/ctrl). As discussed, we explore the potential effect of the two other strategies, i.e., reward and punishment (coercion), in a persuasion task. This section discusses the empirical study we conducted with the goal of understanding the extent to which these strategies used by social robots are persuasive in influencing a person's choice while facing a better vs. a worse option. Thus, in this design, we investigate the effect of message strategy on participants' decision-making while they face two comparable options in an interpersonal persuasion with a robot. Here, we investigate the following hypotheses: * H1: A stronger punishment would lead to a lower compliance. In other words, under the same circumstances, when a robot's request leads to a higher loss, we expect the human user to be less compliant to the robot, facing a higher loss due to the punishment. * H2: Coercive strategy leads to higher persuasion. Here, inspired by <cit.>, we hypothesize that people would be more sensitive to losing an owned reward than gaining a reward. * H3: Coercive strategy decreases warmth and increases discomfort. We expect the participants to perceive the coercing robot negatively as it imposes a penalty <cit.>. In the remainder of this section, we present a summary of the results of the user study performed to investigate if social robots are able to persuade people to opt for a less favorable choice (for more details, see  <cit.>). We initially compared two different conditions (persuasions) with a control group and then investigated the difference between the two strategies by comparing the persuasion groups. §.§ Results At the end of the experiment, 90 people (38, or 42.2%, females and 52, or 57.8%, males) participated in the experiment voluntarily. To start with, we first checked if the difference in coffee ratings would influence the decision-making of subjects. Specifically, we hypothesized that the higher difference in ratings would lead to lower compliance, i.e., a higher difference between the scores would lead to a higher risk of receiving a bad coffee (lower-ranked coffee). We assumed that the high difference in the ranking presents a higher risk, thereby leading to a higher resistance to the persuasion. 
Since the level of reward/coercion is fixed but that of the resistance is not, it hence leads to lower effects of reward and coercion in conditions with lower rankings. We presumed that this effect leads to less compliance. However, the results of logistic regression tests implied that coffee rankings used in Study 2 are not a good predictor of decision-making of the participants. This means, although the robot could persuade a large number of people to select the lower-ranked coffee (57.8%), this difference was not significantly higher in the 3* vs. 3.8* ranking (Wald[1] = 1.255, p = .263). Hence, we reject the hypothesis H1. A potential reason for this finding might be the minor differences between the two rankings (3 vs. 3.8). A further study could assess this effect to determine a threshold so that the rating of the less desirable choice is not too low or too high; this would make decision-making easy. In other words, when the lower rank is too low, the participant might not risk and reject the persuasion easily. Conversely, when the lower rank is too high, it becomes very close to the other option, and the participant would accept the persuasion to benefit from the two rewards (a pen and a good coffee). Furthermore, we hypothesized that coercion would lead to high persuasion compared to the control condition: On comparing the coercion strategy with control condition, the results indicated that the model is statistically significant (Wald[1] = 4.95, p = .026). In this case, extroversion (Wald[1] = 4.786, p = .029) and openness (Wald[1] = 4.330, p = .037) are good predictors of participants' choice to make decision. On the contrary, the results indicated that reward is not a significant predictor of persuasion (Wald[1] = .029, p = .864). A potential reason might be the uncontrolled distribution of the participants in different conditions of the study. Actually, a closer look of the data highlighted that people who had interacted with robots prior to this study acted differently compared to the others who were new to robots (novelty effect). Specifically, the results of a t-test showed that people who had already interacted with robots (M = 1.6, SD = .49) were more compliant and got persuaded by the robot (M = 1.33, SD = .47). Moreover, the collected data revealed that most of the participants who had already interacted with robots fell in the reward group; hence, the interaction effect of this earlier encounter with robots might have diminished the effect of reward. Furthermore, the gain of coffee itself on participating in the experiment already put the participants in somewhat of a reward situation, which could additionally have interfered with the actual reward strategy of gifting a pen. In other words, the participant will always get a free coffee (which they would not have had otherwise) but only get a pen if they take the lower-ranked coffee. An important factor here would be how much the participant values any of these two gifts, which unfortunately was not measured in this study. We also hypothesized that coercive strategy would be perceived negatively (higher on discomfort), while reward strategy would be perceived more positively (higher on warmth). We assumed that giving a reward would make the robot more friendly, and in contrast, coercion would be a negative predictor of liking. However, we could not verify this hypothesis based on the collected data, and the results indicated contrasting findings, as the coercing robot was scored higher on warmth. 
A potential reason for this might be that some of the participants did not perceive the coercive action of the robot as a punishment but rather perceived it as being funny and laughed out loud after the robot asked them to return the pen. Another potential reason behind this might be that the coercive action was weak, since the participants had no intrinsic attachment to the pen. Another factor might be the minor differences in the dialogues. Particularly, we hypothesized that different strategies will affect the participants differently, hence causing a different perception of the robot. However, actually, the two scenarios differed only in two sentences. Further, in the two scenarios, the robot showed the same instances of social interaction, such as facial expressions and gaze. Conversely, to get perceived negatively, the robot needs to show samples of a bad attitude, for instance, being rude. Hence, we cannot confirm the hypothesis. However, this finding should be interpreted cautiously. To be more specific, we could not verify all presumptions of the test due to the bias of earlier interaction. Hence, the results might not be generalizable to other studies. Hence, this hypothesis needs to be further investigated in different scenarios with significantly different dialogues and social cues (and probably with longer duration of time). Further, in Study 2, we measured persuasion both objectively (the selected coffee) and subjectively (robot perception). The results indicated that the reward and punishment strategies make the robot more persuasive, as measured objectively through user compliance with the robot's request (objective behavior). As depicted in Figure <ref>, the lower-ranked coffee was selected less frequently in the two control conditions. However, we could not verify if these strategies impact participants' perception of the robot's persuasiveness and social attributes (subjective perception). In sum, although the robot could objectively persuade the users (to select the lower-ranked coffee), the subjective facet of persuasion was not significant. A potential reason could be the difficulty in accurately measuring the perception of a robot using subjective measures <cit.>. Additionally, recent evidence suggests that one single factor may have different influences on persuasion: In one circumstance, it might influence the degree of elaboration; in another, it might influence the valence of elaboration, while in the third situation, it might serve as a peripheral cue <cit.>. These differences can give rise to different effects on persuasion and hence inconsistencies in research finding considering a single factor. Hence, further investigation is required to determine in which direction the persuasion influences the user. Thus, we expected that the difference in ratings would lead to different levels of power, thereby leading to higher persuasion in case of higher loss. However, the results of this study failed to verify this hypothesis, so we designed the next experiment with a significant valuable reward, as described in the next subsection. §.§ Exploratory Findings In this subsection, apart from the previous hypotheses <cit.>, we further investigate the effect of other factors that might influence the results. To this end, we investigate the role of the participants' cultural background and its relation to the language used by the robot. In doing so, we consider the nationality of participants as a general measure of culture. 
In this study, the participants were from ten different ethnicity: Portuguese (70%), Iranian (15%), and the rest (Angolan, Brazilian, Chinese, French, German, Guinea Bissau, Ukrainian, and American). Based on a seminal work by anthropologist Edward Hall <cit.>, cultures (as well as language constructs) are generally categorized into “high-context” and “low-context” cultures. As per this perspective, different cultures lie on a continuum based on how explicit messages are exchange as well as how much the context is important in communication. In other words, in some cultures, a message is delivered mainly using words, whereas in others, the context and the way a message is delivered (using non-verbal cues) also affect the meaning of sentences. (For instance, in Asia or the middle-east, messages could be delivered through more indirect ways.) A non-small body of research in HRI investigates whether the conclusions found to be true in human–human interaction contexts are also valid in HRI scenarios when the robot is acting in accordance with the rules of a culture. As an example, it has been found that the wording of a sentence (explicit/implicit) impacts people differently according to their culture and leads to different results in terms of how likely a person is to follow the robot’s advice <cit.>. As the study was performed with mostly non-native English speakers, we verified that there were no statistically significant differences in the level of English proficiency among the six groups (F[5,89] = 1.013, p = .415). Moreover, we checked if the nationality of the participants (as an estimate of their culture) had no effect on the results (F[9,89] = .810, p = .609). Further, as we had few samples from some ethnicity, we divided the participants based on high- and low-context cultures <cit.>. The result of the analysis of covariance (ANCOVA) indicated no significant differences among people of high- and low-context culture regarding their decision-making (F[2,89] = .401, p = .528). Finally, we checked if the personality traits of the participants influenced their decision-making and added them as covariates in the analysis. The results did not show a significant difference in decision-making due to personality traits, but extroversion demonstrated a marginally significant effect (F[1,89] = 3.409, p = .069). In addition to cultural differences, according to what was discussed earlier, as the study was inspired by social power theory, we measured other factors such as robot's social power and PSP to better understand how participants perceived the persuasion and if the persuasion evoked any social power in the robot. However, these factors were not predictors of decision-making, nor significantly affected the perception of the robot (PSP: F[1,85] = .732, p = .395; social power: reward: F[1,84] = .556, p = .458; coercion: F[1.84] = .149, p = .701). Hence, to check to what extent the participants find the robot rewarding/coercing, a more direct questionnaire would be required in a future study. We devised a number of questions in the questionnaire designed specifically for this task to further investigate the perception of the participants. Specifically, we asked the participants to answer the following questions on a 5-point Likert scale: * How persuasive did you think EMYS was? * Consider a situation in which you have an opinion different from EMYS's; will you change your opinion in such a way so as to be consistent with EMYS's? 
* Imagine a situation where EMYS gives you a bit of advice in the future. Please specify the likelihood that you would follow EMYS's advice? We averaged the scores against these questions and compared them considering the effect of earlier interaction with robots. The results suggested that the average scores against the task-specific questions were not significantly different across the scenarios (F[2,86] = 2.724, p = .071, η^2_p = .060). They led neither to any significant or strong correlation with robot perception (W: r = .276, p = .009; C: r = .216, p = .042; D: r = .183, p = .086) nor to the final decision of the users (r = .062, n = 87, p = .567). §.§ Qualitative Analysis We also checked the answers to the open-ended question qualitatively, which are reported in this section. Among the 90 participants, 73 answered the open-ended question; among these, 29 selected the lower-ranked coffee. Overall, six people were assigned to the control condition (two people in 3ctrl, and four people in 3.8ctrl). Most of these people indicated curiosity toward the rankings (e.g., “To see if other people's assessment was correct”) or inherent non-compliance (e.g., “I have free will to choose. It's just my rebel way of living”). Moreover, 14 people belonged to the reward conditions (five subjects in 3R and nine subjects in 3.8R). These people mostly highlighted the role of the pen or the gift as their main motivation to select the lower-ranked coffee. For instance, “3.8 is not a bad rating[,] and the reward of a pen seemed worth the lower[-]rated coffee”. Interestingly, people in 3.8R highlighted the minimal difference between the two ratings and their interest in the pen. For instance, “I don't care much about the coffee taste, since all taste more or less the same. Even if one is slightly WORSE for me, since I would get a pen[,] I would prefer the WORSE one.” In sum, they selected this option as they found it “more rewarding,” since they received coffee as well as a pen. Finally, nine people in the coercion condition (four people in 3C and five people in 3.8C) indicated an attachment and interest toward the pen. For instance, “It was a good coffee despite being the lower[-]ranked one[,] and I could keep my pen” or “3.8 is not a bad score[,] and I get a pen”. The remaining 44 people selected the higher-ranked coffee; among these, 24 belonged to the control condition (13 in 3ctrl and 11 in 3.8ctrl). Most of these participants highlighted the higher rank of the coffee as their motivation. Although some suspected the credibility of the rankings (“If it's ranked higher, the chances of being better are higher, although this depends on how many people rated it”), they still did not want to risk receiving the bad coffee (“Higher probability of being good because of rating” or “There is no reason to pick the WORSE[-]rated one. Even if I don't trust the robot[,] picking left [higher-ranked coffee] is not WORSE than a blind pick.”) In addition, nine people in the two reward conditions (four in 3R and five in 3.8R) selected either based on their curiosity (“Just to see why it had such a high ranking”) or because they were simply not interested in the reward (“Do not need the pen”). Moreover, they valued the coffee more than the pen (“Because, assuming the ratings are correct, I prefer having a better coffee than a [bad] coffee and a pen”). Finally, 11 people were assigned to the coercion condition (four in 3C and seven in 3.8C). 
Most of these 11 participants highlighted the low value of the pen (e.g., “A pen is not worth drinking bad coffee” or “I don't need a new pen, and prefer better coffee”). In addition, one subject was curious about the high rank of the coffee (“To know if the rank it was correct or not” [sic]). §.§ Summary of Findings As discussed in <cit.>, the data analysis highlighted a significant difference (t[88] = 2.469, p = .015) in coffee selection (i.e., whether the subjects selected the higher- or the lower-ranked coffee) between participants who had already interacted with any robot (M = 1.6, SD = .49) and those who had not (M = 1.33, SD = .47). People who had already interacted with robots were more compliant and were persuaded more by the robot. To account for this potential bias of prior interaction with robots, we treated it as a confounding variable and included it as a covariate, using a one-way ANCOVA for the continuous dependent variable (the RoSAS questionnaire) and a logistic regression for the categorical dependent variable (the participants' decision, i.e., which coffee they selected). It should be noted that there was no significant difference regarding prior interaction with EMYS robots specifically (t[88] = 1.54, p = .128). Overall, the results showed that the robot has the potential to persuade users and bias their decision-making: compared to the control group, in which no persuasion was used, the robot biased a number of participants' decisions toward the less desirable choice (Wald[1] = 6.627, p = .010), that is, it changed people's behavior in the expected direction. However, the subjective measures used in this study did not yield significant findings in the expected direction, and this would be a fruitful area for further work. To measure the participants' subjective perception, we applied the RoSAS questionnaire <cit.>. We postulated that the coercive strategy would decrease warmth and increase discomfort, but the findings failed to verify this hypothesis. More specifically, the results did not indicate any significant difference in discomfort scores across conditions (p = .543, effect size: .053), while the ANCOVA tests showed a statistically significant difference in warmth scores between the coercion scenario and the control (p = .039, effect size = .227); surprisingly, this score was higher in the coercive condition (coercive condition: M = 4.41, S.E. = .17; control condition: M = 3.91, S.E. = .33), contrary to our expectations. Furthermore, this study found no evidence that a stronger loss leads to higher persuasion (Wald[1] = .266, p = .606), whereas coercion was a good predictor of decision-making (Wald[1] = 5.692, p = .017). 
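For concreteness, the covariate-adjusted analyses summarized above could be set up along the following lines. This is a minimal sketch rather than the exact pipeline used in the study; the file and column names (e.g., chose_lower, prior_robot) are hypothetical.

```python
# Hedged sketch of the Study 2 analyses described above: a logistic regression on the
# binary coffee choice and an ANCOVA on a RoSAS dimension, both including prior robot
# experience as a covariate. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("study2_data.csv")  # one row per participant (hypothetical file)

# Categorical outcome: did the participant pick the lower-ranked coffee?
logit_res = smf.logit(
    "chose_lower ~ C(condition, Treatment('control')) + C(rating_level)"
    " + prior_robot + extroversion + openness",
    data=df,
).fit()
print(logit_res.summary())  # squared z-statistics correspond to the Wald chi-square values

# Continuous outcome: RoSAS warmth, adjusted for the same covariate (ANCOVA via OLS).
ancova_res = smf.ols("warmth ~ C(condition) + prior_robot", data=df).fit()
print(anova_lm(ancova_res, typ=2))
```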
§.§ Limitations and Lessons Learned for Designing a Future Study One source of weakness in this study is that the results failed to indicate how the users perceived the robots in terms of social power. We require more evidence that the robot's social power was actually manipulated; in other words, a dedicated questionnaire measuring this more carefully is needed. In addition, the task-specific questions did not establish any significant finding and should be revisited with more carefully designed items. Furthermore, a stronger manipulation check would be of great value to see how the participants perceived the pen as a reward or punishment and how much they valued each of the gifts (as seen in the qualitative analysis). Measuring how attached participants were to the pen and how much they wanted the coffee would enhance our understanding of their behavior: as discussed earlier, the coffee itself was a gift in the experiment, and how much participants actually liked or wanted it might have affected their assessment of the options presented by the robot. We attempted to examine both the role of the persuasion actor (the social robot) and that of the persuasion target (the human participants) in the theoretical model by measuring the subjects' personality. However, this analysis did not yield any findings, possibly because of the small number of participants in each bin; collecting more data might provide further insight in this direction. Also, due to the interaction effect of previous experience with robots, we had to apply a logistic regression to analyze the data, and this test likewise requires a large number of samples. Another limitation of the study might be the design of the control condition. In the current design, the robot lets the subjects freely select an option without exerting any power, which gives us a baseline of decision-making in the absence of any power or persuasion. An alternative control condition could instead have the robot ask the participants to take the worse choice without using any persuasive strategy, rather than letting them select their coffee freely. Finally, to create a more interactive and believable scenario, we designed dialogues that depend on the participants' responses. For instance, at the beginning of the interaction the robot asks the subject whether they have already met it and responds differently depending on the answer, to create the illusion of a real-world interaction. Specifically, if the participant has seen the robot before, the robot responds with the personal affective statement “I am very pleased to meet you again”, which could be perceived as goodwill, a factor shown to influence robot persuasiveness <cit.>. Also, when the subject does not respond to a question, the robot says “I didn't hear you”; depending on what triggered this behavior, it could reduce the robot's credibility by suggesting a technical error or a lack of understanding, in contrast to the positive/negative responses. Moreover, the reward dialogue was implemented with an additional affective signal (a joy animation) that had no equivalent in the coercion strategy. These minor differences might have influenced the users' perception and affected the results. In sum, some of the condition- and response-dependent dialogue may have influenced persuasiveness in ways that are inconsistent with the main reward/punishment strategies, since similar displays were not present for the other response conditions; this might have indirectly biased the participants' perception and responses in ways not intended by this study (the full dialogues are listed in Table <ref> in the appendix). As future work, the study could be repeated with a larger number of participants who have already interacted with robots, or it could be run with people new to robots over multiple sessions to decrease the novelty effect. 
Considering the current dataset, we are not sure how people perceived the strategies, and further work is needed to establish this. We also recorded the participants' behavioral and non-verbal responses using two cameras; a behavioral analysis would be of great help in determining their perception. It would also be interesting to see whether people's susceptibility to persuasion, specifically to coercion or to reward, has an impact. This design could further be extended to other studies; for instance, it might provide an opportunity to investigate the “endowment effect” and “loss aversion” <cit.> in a future study. More broadly, research is also needed on a prior validation of the dialogues, to check that they induce the desired power sources. Moreover, the task-specific questions were phrased in a direct way and might have led participants to respond according to social desirability. Finally, since the robot does not physically interact with the participants, it might be worthwhile to compare the results with a virtual character, or with a control condition without any robot. § STUDY 3 Having discussed the necessity of a new study, this section presents the design we used to address a number of limitations of the previous studies. Here we focus on only one of the bases of power, the reward base, which is common to the two previous studies; other bases could be investigated with a similar approach in future work. §.§ Design This design is inspired by a conceptualization of power introduced in <cit.>. Based on Equation 1 in <cit.>, this model states that reward power depends linearly on the amount of promised reward (rew), the probability of giving the reward (p), and the way the actor performs the rewarding action (induction), as expressed in equation <ref>: Power_rew = rew × p × induction Hence, with the other parameters fixed, increasing the value of the reward increases the force of social power. Also, assuming a proportional linear relationship between social power and persuasion, this increase in power leads to higher persuasion (at least up to the point where reactance occurs). The main research questions of this study are: 1) how different levels of reward influence the decision-making of participants; 2) how this effect changes over a series of repeated interactions; and 3) whether the novelty effect influences decision-making in this design. To answer these questions, we devised a mixed-design study within a decision-making scenario in which we manipulated the level of reward a robot gives to participants. More specifically, the study contains two reward values (levels) and two control conditions: one with zero reward and one with no interaction with the robot (one-fourth of the participants were assigned to each group). In other words, in one control condition social power is not activated, while in the other social power is activated but without the presence of a robotic persuader. In this design, after a decision-making step, the robot tries to persuade the user to change their mind and select the other alternative. To persuade, the robot uses a reward social power strategy, and the task is repeated to investigate whether the effect of social power on persuasion decays. In sum, with respect to formula <ref>, in the designed experiment we assume that p and induction are fixed (as explained later), and we manipulate two independent variables: the reward the participants receive and the presence or absence of a social robot. 
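As a toy illustration only, the model in equation <ref> can be written as a one-line function and evaluated for the reward levels used in the four conditions of this study (0, 1, or 3 extra points, as described later); the values of p and induction below are placeholders, since both are held fixed by design rather than measured.

```python
# Toy illustration of the reward-power model in the equation above; the numbers
# are placeholders, not calibrated quantities from the study.
def reward_power(rew: float, p: float, induction: float) -> float:
    """Linear model: power scales with reward size, probability, and induction."""
    return rew * p * induction

for condition, extra_points in {"0R": 0, "LR": 1, "HR": 3, "NR": 1}.items():
    # NR uses the low reward level, but without a robotic persuader.
    print(condition, reward_power(extra_points, p=1.0, induction=1.0))
```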
We also considered two dependent variables: 1) the participants' decisions, i.e., whether they accept or reject the offer (objective measure); and 2) how the participants perceive the robot (subjective measure). §.§ Hypotheses In this context, we expect to observe the following outcomes: * H1. Higher social power (resulting from a higher social reward) leads to higher persuasion. * H2. People who are new to robots might be affected by the novelty effect, which might interfere with the manipulation and diminish the effect of the higher social power used to persuade. * H3. Over repeated interactions, the effect of power on persuasion does not decay, given that the level of power is fixed. * H4. Giving rewards increases the robot's likeability. * H5. The presence of the robot leads to higher persuasion compared to a situation in which the robot is not present. This study used a repeated between-subjects design with four conditions: Low Reward (LR), High Reward (HR), a control condition with zero reward (0R), and a condition with no interaction with the robot (NR). In the last condition we used the low value of the reward. More specifically, we investigate the effect of repeated interactions within subjects, and the effect of different levels of exerted power between subjects. §.§ Measures The participants were requested to fill out a pre-questionnaire including demographics (age, gender, nationality, occupation, and field of study). As we ran the experiment in English with mostly non-native English speakers, we asked participants to rate their English proficiency on a 5-point Likert scale (1 = basic to 5 = professional). Previous studies indicated different attitudes among people who had interacted with robots before, so, as in those studies, we checked whether the participants had already interacted with robots in general, and with Emys in particular, before this experiment. Next, the participants were asked to respond to the Personal Sense of Power (PSP) questionnaire <cit.>, which gives an indication of their level of social power. The participants were also requested to complete a short version of the Eysenck personality questionnaire <cit.>, which provided information on their levels of Neuroticism (N), Extroversion (E), Psychoticism (P), and the Lie scale (L). After finishing the task, the participants responded to a post-questionnaire to give us a better understanding of their perception. To measure how they perceived the robot, we applied the RoSAS questionnaire <cit.>, and, as the robot was giving rewards to the participants, we measured the extent to which this action gave the robot reward social power. We also asked them specifically whether they changed the selected category at any iteration, to make sure they understood the game, and, to better understand their decisions, we asked them to state clearly why they accepted or rejected the robot's offer (an open-ended question). Finally, as multiple factors contribute to the processing of persuasive messages, we used the Susceptibility to Persuasion scale <cit.> to measure a relatively broad spectrum of factors leading to persuasion. To further investigate the participants' interactions within this task, we added a number of questions to the pre- and post-questionnaires (Table <ref>). 
To investigate whether the interaction with the robot influences trust, and how strongly participants believe the robot would give the promised reward, we added a corresponding question to both the pre- and the post-questionnaire. Next, participants were requested to indicate on a 5-point Likert scale how much they like quiz-type games and how often they go to the cinema. Furthermore, with items 6–10 in Table <ref>, we specifically checked how persuasive they found the robot, whether the robot was trying to change their mind, and whether they were convinced to change or rather felt compelled to change their initial selections. §.§ Participants In this experiment, 118 people (54 females) participated voluntarily, receiving cinema tickets in return. To recruit participants, we placed several advertisements around the university, as well as in the university's Facebook group. The participants' ages ranged between 18 and 79 years (28.6 ± 16.9; S.E. = 1.6). Before participating, they signed an informed consent form approved by the Ethical Committee of the University. We then randomly assigned the subjects to the four conditions of the study and counterbalanced the assignment so that each condition contained approximately the same number of females [30 people in LR (13 females), 30 in HR (13 females), 30 in 0R (13 females), 28 in NR (15 females)]. §.§ Procedure §.§.§ Task, Robot, and Environment In the designed task, persuasion is operationalized within a game. The participants were asked to play a trivia game over three trials with different categories of questions. The game contains seven categories ("Animals", "Arts", "Astronomy", "Geography", "Science", "Sport", "TV and Movie"), and each category can be selected only once. Each category contains five questions, and a correct answer to each question is worth one point. The order of the questions in each category is the same for all participants, to avoid order effects on the responses. As an incentive, cinema tickets are given to the participants depending on the score they collect: the higher the score, the more tickets they gain. Specifically, the participants could earn up to three cinema tickets based on a pre-defined rule (the first 7 points grant a cinema ticket, and every further 8 points grant another). We chose the cinema ticket as the final reward because it is more valuable than the pen used in the second study. In each round of the game, the robot proposes two of the categories and the participant selects one (without seeing the contents). To better understand user preferences, we ask participants to rank the topics by interest or knowledge (after the pre-questionnaire and before starting the game). Based on this ranking, the highest-rated remaining option is always offered against the lowest-rated one. We expect the participant to select their own highest-ranked option, and the robot then tries to change their mind by always suggesting the option the participant did not choose. For instance, suppose a participant states the following preference ordering: "Geography", "Science", "Astronomy", "TV and Movie", "Sport", "Animals", "Arts". In the first round, they are asked to choose between "Geography" and "Arts". We expect the participant to select "Geography", as indicated by their preference, and the robot then asks them to switch to "Arts". (A sketch of this pairing and of the ticket rule is given below.) 
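The following is a minimal reconstruction of this game logic from the description above, not the deployed implementation; the function names and the exact handling of the remaining pool are our own assumptions.

```python
# Minimal sketch (our reconstruction, not the study's code) of the game logic
# described above: the ticket rule and the category pair offered each round.
def tickets_for(score: int, max_tickets: int = 3) -> int:
    """First 7 points earn a ticket; every further 8 points earn another."""
    if score < 7:
        return 0
    return min(1 + (score - 7) // 8, max_tickets)

def next_pair(preferences: list) -> tuple:
    """Offer the participant's most- vs. least-preferred remaining category."""
    return preferences[0], preferences[-1]

prefs = ["Geography", "Science", "Astronomy", "TV and Movie",
         "Sport", "Animals", "Arts"]       # the example ordering above
print(next_pair(prefs))                    # ('Geography', 'Arts')
prefs.remove("Geography")                  # the chosen category leaves the pool
print(next_pair(prefs))                    # ('Science', 'Arts') in the next round
print(tickets_for(7), tickets_for(15))     # 1 ticket, then 2 tickets
```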
This process is repeated for the other rounds (but the chosen category is removed from the list). The iterative manner design of the game gives us the opportunity to test H3. A flowchart of the full task is depicted in Fig. <ref>. In this task, similar to previous studies, we used the Emys robot mounted on a table in front of a touch-screen that is located between the subject and the robot (Figure <ref> depicts the study setup). The study took place in an isolated room. Each subject participated individually and during the game, the researcher stayed in the room to make sure no one cheats in the game (for instance, by searching the correct answers on the Internet). The robot mediated the game by introducing the procedure and the scoring rules (introductory and ending dialogues are listed in Table <ref>). Unlike the previous two studies, in this task the robot was fully autonomous (further details in Section <ref>). §.§.§ Implementation In this task, the robot performed in a fully autonomous manner. The core of our system architecture was the SERA Ecosystem <cit.> which is composed by a model and tools for integrating an AI agent with a robotic embodiment in HRI scenarios. Figure <ref> shows the overall system architecture. We developed an application in C# (displaying the game on the touch-screen and getting the answers of the participants), which is integrated with the decision state module using a high-level integration framework named Thalamus <cit.>. This framework is responsible to accommodate social robots and provides the opportunity of including virtual components, such as multimedia applications <cit.>. In addition, we used the Skene <cit.> behavior planner that provides the robot's behaviors, such as gazing, pointing, making speech, etc. And a text-to-speech (TTS) component is used as a bridge to the operating system's built-in TTS. In the experiments using the Emys robot, we used a symbolic animation engine based on CGI methods called Nutty Tracks <cit.> which provides the capability to animate a robot in a graphical language. A control module was developed to provide the communication between the display screen module, the Thalamus and the Skene. This module identifies and informs Skene, through Thalamus, about the utterance to be performed by the robot. Besides, this module controls the screen to be presented to the participant and processes the inputs made by him/her. With these inputs, the control module is able to send messages to Skene, that then sends to the robot module in order to perform the robot's animations. Hence, the overall interaction between the participant and the robot becomes fully autonomous in this way. Since the persuasion happens in repeated interactions, we designed a pattern for the creation of dialogues to make sure the persuasion attempt is similar in all trials. In doing so, each persuasive message is consisted of four parts: 1. Call for change, 2. Reward condition, 3. The goal, and 4. Motivation. Table <ref> lists the four parts of this pattern inline with the dialogues used in the experiment. Note that in this table, the first column represents the order of the dialogues in the final setting similar to Table <ref> and <ref>. The composition of these tables based on the order, composes the full dialogue used in this study. The first part, “call for change”, is the starting part of the persuasive message. First, the robot invites the participant to consider changing the selection, and next starts to influence him/her as follows. 
In the second part, the robot indicates the condition of giving the reward, i.e., if they change, they will receive some extra points (depending on the condition). In the third part, the robot emphasizes the goal of the game and highlights the role of points in gaining cinema tickets. And finally, in the last part the robot motivates the participant to accept the reward and informs them the extra points could help them to win more easily. This pattern was checked by four researchers of this study and they agreed the creation pattern generates sentences with equivalent semantic loads. In this way, we perform a fixed level of induction in every persuasion attempt. Apart from the persuasive strategy, the rest of repeated dialogues were displayed in a random order. This was done for two reasons: 1) to avoid repetition in scripting robots dialogues 2) also to have more diverse dialogues by combining different parts of smaller sentences. These dialogues are listed in Table <ref>. For instance, when it comes to the decision making part of the game, the robot uses any of the three sentences of the first row. Exceptionally, on the very first trial the robot uses the first line, but on the second or third trial, it might use any of the remaining two on a random order avoiding repetition. Or, after each time that the user answers a question, the robot might use any of the “gap fillers” to start asking the next question. §.§ Results Overall, 5 participants were excluded from the sample due to robot error. Before analyzing the data, we checked if there is any significant difference among the four conditions regarding any of the demographic variables. The results indicated that no significant difference exists between the samples in each condition regarding their age (F(3,119)=1.558,p=.203) and personality traits (N: F(3, 119) = .855, p=.467, E: F(3, 199)= .886, p= .451, P: F(3,119) = 1.011, p=.390, L: F(3,119) = 1.409, p=.244). Moreover, we verified that a prior interaction with robots (t(85.595)=.972, p=.334) or Emys (t(21.467)=1.011, p=.324) had no influence neither on the decision making of the participants nor their perception of the robot (Warmth: t(60.370)=-.559, p=.578; Competence: t(116)=.133,p=.894; Discomfort: t(166)=-.255,p=.799). Similar to previous studies, we investigated the results both objectively (participants' decisions to accept or reject the offer) and subjectively (task-specific questions). In the latter case, initially we checked the dimensionality of the scale using factor analysis for items 6-10 of Table <ref>. The Cronbach's alpha indicated that removing item 7 increases the reliability of this measure (from .715 considering all 5 items to .779 when item 7 was removed). Hence, to measure persuasiveness subjectively, we averaged the 4 remaining items (6 and 8-10) that have more internal consistency. §.§.§ Hypotheses Testing We investigated the first hypothesis both subjectively and objectively. Considering the objective measure (decisions), having a binomial repeated measure among the independent groups (LR and HR), we analyze the data using Generalized Estimating Equations (GEE). With a significance of 0.901, there is not enough evidence to conclude that whether the higher reward has an effect on the outcome (being persuaded). Similarly, from the subjective perspective, the results of t-test indicated that no significant differences exist among LR and HR groups considering the persuasiveness score of the robot (t(55.913)=-.567, p=.573). 
Hence, we cannot verify the first hypothesis (H1). To investigate the second hypothesis, we added having/not having interactions with robots as another predicting factor of the GEE model. However, with a significance of 0.825, there is not enough evidence to conclude that whether having an earlier interaction with robots has an effect on the persuasion. Furthermore, adding this factor increased the QIC (Quasi Likelihood under Independence Model Criterion) value, which also endorses that this item is not a good predictor for the model. To check this hypothesis subjectively, we added having/not having interactions with robots as a covariate to ANCOVA analysis. The results indicated that the covariate is not a significant predictor of perceived persuasiveness (F(1,57)=.438, p=.511, η^2=.008). Hence, having prior interactions with robots does not affect the decision making of the participants and we reject H2. With the third hypothesis (H3), we postulated that over repeated interactions, the effect of social power on persuasiveness does not change (decays/grows), considering that the level of power is fixed. We note that as the subjective measure was applied only at the end of the test, we cannot check this hypothesis subjectively. And, we can only investigate this hypothesis objectively. In doing so, we included the trials as a factor in the GEE model. The results indicated that the repeated interaction has an effect on the first and third conditions, i.e., LR and 0R. To be more specific, on average, people in LR group were more likely to accept the offer at the third trial compared to the first trial (Wald(1)=4.807, p=.028). On the contrary, people in the 0R condition were less willing to accept the offer at the third trial compared to the first trial (Wald(1)=5.703,p=.017). Figure <ref> represents these findings. The acceptance rate decays only in the control condition, in which no persuasion is exerted. The rate stays unchanged in HR and NR conditions, but gets an increase in the LR condition. Hence, the effect does not decay and under specific conditions it grows over time. Hence, we accept the third hypothesis. We would also like to highlight that the NR has lower acceptance rates, and the persuasion of 0R decreases, while with LR or HR it does not decay. The fourth hypothesis (H4) investigates the robot's perception based on the RoSAS questionnaire. With this hypothesis, we expect to observe a higher score of warmth in conditions that the robot gives rewards to the participants (persuasion conditions, LR and HR). The results of ANOVA test indicated that, although in HR condition, the robot was scored higher on warmth and competence compared to the other groups, however these differences were not significant (Warmth: F(3,114)=2.174, p=.095; Competence: F(3,114)=2.299, p=.081; Discomfort: F(3,114)=.395, p=.757). Hence, we reject the fourth hypothesis. Similar to previous hypotheses, we checked the fifth hypothesis both objectively and subjectively considering LR and NR groups that differ only in one factor, which is the function of the robot. From the objective point of view, GEE indicated a significant effect of robot presence on the decision making of the people in LR group in comparison to NR (Wald(1)= 7.838, p=.005). Furthermore, people in LR group are more likely to accept the offer of the robot (Wald(1)=10.759,p=.001). 
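The repeated accept/reject decisions tested in this subsection were analyzed with GEE models; a hedged sketch of such a model, under assumed file and column names (not the exact specification used in the study), might look as follows.

```python
# Hedged sketch of a GEE for the repeated accept/reject decisions: binomial family,
# participants as clusters, exchangeable working correlation. Names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

long_df = pd.read_csv("study3_decisions_long.csv")  # one row per participant per trial

model = smf.gee(
    "accepted ~ C(condition, Treatment('0R')) * C(trial)",
    groups="participant_id",
    data=long_df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())  # squared z-statistics correspond to the reported Wald chi-squares
print(result.qic())      # QIC: lower values indicate a better-fitting working model
```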
From the subjective perspective, the result of a t-test indicated a significant difference between the score of persuasiveness of the robot (t(56)=2.461, p=.017) and the higher mean in LR condition indicated that people found the robot more persuasive than the computer application (M=3.3167, S.E.=.19288 in LR vs. M=2.6339, S.E.= .19940 in NR). Hence, the results verify H5, i.e., although the robot did not have any physical interaction, but its presence itself leads to higher persuasion. §.§.§ Exploratory Findings Apart from the postulated hypotheses, we further investigated the data to have a better understanding of the interaction under these conditions. As mentioned earlier, apart from the standard questionnaires, we added 5 other questions to the post questionnaire, specifically designed for this task (Table <ref> item 1-5). For instance, the first two items measure the trust before and after the interaction. As the interaction with the robot and receiving the rewards (in reward conditions) might influence the trust indirectly, we checked if participants' trust in the robot (or how much they believe the robot fulfills the promised reward) remains unchanged during the interaction. The result of a paired sample t-test indicated that there is a significant difference between the scores of trust before and after the interaction (t(117)=-2.854, p=.005). And the higher mean after interaction indicated that the trust increased after playing the game (trust score before interaction: M=3.64, S.E.=.089 vs. after the interaction: M=3.86, S.E.=.092). A post-hoc analysis indicated that this difference is only significant among the participants of HR group (t(29)=-2.362,p=.025) [M:3.46, S.E.=.18; M:3.83, S.E.=.19]. To control the bias of the trust change, we need to include it as a covariate. However, in order to add it as a covariate in GEE (regarding decision making), it should have been measured at each trial, while we had measured it only on the first and the last trials. To handle this, we averaged the trust score before and after and considered it as the trust score of the middle trial. And then we included these three scores as a covariate in the GEE model. This covariate increased the goodness of fitness, meaning that the new model is less fitted to the data. In addition, it was not a good predictor of decisions (Wald(1)=.523, p=.469). Hence, there is not enough evidence if the trust factor was a good predictor of the model. However, this might have happened due to the missing measurement of trust in the middle trial. In other words, the average score might not be a good estimation of trust in the middle score. To overcome this doubt, we put aside the HR condition (as it was the only groups with significant differences in trust scores before and after the interaction) and skipped the trust factor which was not statistically different among other groups. The results indicated that the goodness of fitness of the new model was lower than the previous one, that is to say, we achieve a better model without HR. To check this intervention subjectively, we added the trust difference between before and after interaction as a covariate in ANCOVA to check its potential influence on perceived persuasiveness. The results indicated that there is no statistically significant difference between adjusted means of persuasiveness with regard to trust difference (p=.654, effect size .004). 
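A minimal sketch of the pre/post trust comparison and the covariate-adjusted check just described is given below; the file and column names are hypothetical, and the averaged "middle trial" value mirrors the approximation used in the text.

```python
# Sketch of the trust analyses above: paired t-test on pre/post trust, the averaged
# middle-trial approximation, and an ANCOVA-style check with the trust change as a
# covariate. File and column names are hypothetical.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("study3_trust.csv")
t, p = stats.ttest_rel(df["trust_pre"], df["trust_post"])
print(f"paired t = {t:.3f}, p = {p:.3f}")

# Trust was not measured at the middle trial; the text approximates it by averaging.
df["trust_mid"] = df[["trust_pre", "trust_post"]].mean(axis=1)

# Perceived persuasiveness adjusted for the pre/post change in trust.
df["trust_diff"] = df["trust_post"] - df["trust_pre"]
res = smf.ols("perceived_persuasiveness ~ C(condition) + trust_diff", data=df).fit()
print(anova_lm(res, typ=2))
```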
In sum, although trust in the robot increased after the interaction, this factor did not significantly influence persuasiveness. We further checked whether personal preferences (i.e., liking trivia games or the cinema) affected the results. From the objective perspective, the results indicated no significant effect of liking the cinema (Wald(4) = 4.176, p = .383). However, liking trivia games turned out to be a good predictor of behavior (Wald(4) = 9.671, p = .046): the more the participants liked trivia games, the higher the likelihood of accepting offers (Wald(1) = 5.594, p = .018). From the subjective perspective, we performed an ANCOVA with liking quizzes as a covariate; this covariate did not significantly predict the dependent variable, i.e., the persuasiveness of the robot (p = .827, effect size: .000). Furthermore, earlier research indicated that when there is a power match between the persuader and the persuadee, higher persuasion is achieved <cit.>. To investigate this effect, we labeled the participants as high/low power based on their PSP (Personal Sense of Power) scores (those scoring higher than the average were labeled as high power). Moreover, we labeled the robot as high/low power based on the scores associated with its reward social power (higher than the average score was labeled as high). We then checked whether a power match existed between the participant and the robot and included it as a covariate in an ANCOVA analysis [we note that the power-match scores were identical under median- and mean-splits]. The results indicated that this covariate does not adjust the association between the predictor and the outcome variable. Finally, in <cit.>, Ghazali et al. considered the total number of accepted offers as an indicator of compliance. Similarly, to check whether this feature predicts behavior, we applied an ANOVA test, and the results indicated a significant difference among the four conditions (F(3, 114) = 4.682, p = .004, η^2 = .110). Post-hoc comparisons using the Tukey HSD test indicated that the mean score in the LR condition (M = 1.633, S.E. = .183) was significantly higher than in the NR condition (M = .821, S.E. = .189); similarly, the mean score in the HR condition (M = 1.600, S.E. = .183) was significantly higher than in NR. These results further support H5: the presence of the robot significantly affected the decision-making of the participants. From the subjective point of view, a Pearson correlation test indicated a strong, significant, positive correlation between the total number of accepted offers and perceived persuasiveness: the higher the perceived persuasion, the higher the number of accepted offers (r(118) = .679, p < .001). Further analysis indicated that this correlation is strongest in LR (r(30) = .795, p < .001), followed by 0R (r(30) = .781, p < .001), then HR (r(30) = .490, p = .006), and is weakest in NR (r(28) = .443, p = .018). This finding is in line with <cit.>: when the persuasion is stronger, compliance decreases due to potential reactance. 
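The post-hoc comparison and the per-condition correlations just reported could be reproduced along these lines; the column names are hypothetical.

```python
# Sketch of the compliance analyses above: Tukey HSD on the total number of accepted
# offers across the four conditions, and per-condition Pearson correlations with the
# perceived-persuasiveness composite. Column names are hypothetical.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("study3_summary.csv")  # one row per participant
print(pairwise_tukeyhsd(df["n_accepted"], df["condition"], alpha=0.05))

for condition, grp in df.groupby("condition"):
    r, p = stats.pearsonr(grp["n_accepted"], grp["perceived_persuasiveness"])
    print(f"{condition}: r = {r:.3f}, p = {p:.3f}")
```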
§.§.§ Analyzing the Game Log Apart from the data obtained from the questionnaires, we analyzed the game logs to further investigate how the participants acted and made decisions during the game; in this subsection, we examine a number of these features. One game feature that might influence the participants' decision to accept or reject the offer is the number of points they still need to earn a cinema ticket. Adding this feature to the GEE model indicated that it is a good predictor of behavior (Wald(1) = 6.386, p = .012), and the goodness-of-fit test showed a decrease in the Quasi-likelihood under the Independence model Criterion (QIC; 470.024 vs. 476.080), again suggesting that it is a good predictor of decisions. The estimates indicate that an increase in the distance to a ticket increases the likelihood of compliance (log odds: B = .116 ± .0458, Exp(B) = 1.123). Another informative feature of the game logs is the score collected in each trial. The test of model effects indicated that this is also a good predictor of decision-making (Wald(1) = 27.409, p < .001), and the goodness-of-fit test showed a decrease in QIC (451.646 vs. 476.080), meaning that a model using this factor as a predictor fits the data better. Further analysis indicated that an increase in the score leads to a lower likelihood of acceptance, i.e., the lower the score collected, the higher the probability of accepting the offer in the next trial (log odds: -.444 ± .0848; Exp(B) = .642). Based on this, we grouped the observations by the current score at each round (as an indirect measure of the distance to a ticket) and compared the persuasiveness score of the robot with the condition as covariate. The results indicated no significant difference in persuasiveness scores with regard to the score (F(8,119) = .369, p = .935). Regarding the objective measure (decisions), however, there was a significant difference in the participants' decision-making only on the first trial (T1: F(8,119) = 2.224, p = .031); the analysis did not yield any significant difference on the second (T2: F(8,119) = 1.241, p = .282) or third trial (T3: F(8,119) = 1.826, p = .079). Another potentially informative game feature is the cumulative score gained before making a decision. The test of model effects indicated that it is a good predictor (Wald(1) = 9.760, p = .002), and the goodness-of-fit test showed a decrease in QIC (460.854 vs. 476.080), further endorsing this; however, the negative log odds (-.226 ± .0724) indicate that people who had collected high scores were less likely to accept the suggestions. In this game, participants could select options contradicting their stated preferences if they understood the strategy of the robot, which gave them an opportunity to cheat it: since the robot always offered the option the participant did not choose, a participant could initially select the option they do not prefer and then accept the robot's suggestion, thereby answering their most preferred category (plus receiving points in the persuasion conditions). In this respect, one would expect little cheating on the first trial, when the participants are not yet aware of the robot's behavior, and the most cheating on the third trial, when they have become more familiar with it. Analyzing the game log indicated that only one person selected his non-preferred choice and changed to the preferred option in all trials. In addition, two people did this in both the second and the third trials; however, one of them belonged to the 0R condition (and might have done so at random), and the other belonged to NR. Hence, although the participants had a chance to cheat the robot, effectively only two did so (one in HR and one in NR). 
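For completeness, the sketch below shows how such game-log features could be added as covariates to the GEE sketched earlier and compared via QIC, and how the reported log odds translate into the odds ratios quoted above; the data file and column names remain hypothetical.

```python
# Adding a game-log covariate to the earlier GEE sketch and comparing QIC values;
# exponentiating a coefficient turns a reported log odds into an odds ratio.
import math
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

long_df = pd.read_csv("study3_decisions_long.csv")  # hypothetical long-format table
base = smf.gee("accepted ~ C(condition) * C(trial)", groups="participant_id",
               data=long_df, family=sm.families.Binomial(),
               cov_struct=sm.cov_struct.Exchangeable()).fit()
with_distance = smf.gee("accepted ~ C(condition) * C(trial) + points_to_ticket",
                        groups="participant_id", data=long_df,
                        family=sm.families.Binomial(),
                        cov_struct=sm.cov_struct.Exchangeable()).fit()
print(base.qic(), with_distance.qic())  # lower QIC suggests a better working model

# Log odds -> odds ratios, matching the estimates reported above.
print(math.exp(0.116))   # distance to ticket: ~1.123 (odds of accepting rise ~12% per point)
print(math.exp(-0.444))  # per-trial score:    ~0.642
print(math.exp(-0.226))  # cumulative score:   ~0.798
```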
Finally, we investigated how the participants made decisions relative to their initial preferences. They could respond to the robot's offer in three possible ways: not accepting the offer (no change), accepting the offer and selecting a less desirable choice, or accepting the offer and selecting a more desirable choice. Given the repeated measurements, we applied a GEE model with the change direction as the response and trial and condition as predictors; Table <ref> summarizes the findings. In sum, on the first trial the robot was effective: all conditions involving the robot were more persuasive, even the one with zero reward. In other words, on the first trial the robot persuaded users to change their initial selection significantly more often than in the NR condition. On the second trial, the differences between groups were not significant, so no conclusion can be drawn for this trial. On the third trial, however, all conditions scored higher than 0R, meaning that the reward had an effect on decision-making (manipulation check). This is also the only place where NR became more persuasive than 0R: when the robot gives no reward and holds no power over the participants, over repeated interactions it becomes less persuasive than a computer application that consistently gives a reward (Figure <ref>). A potential reason might be discrepancies between the participants' selections during the game and what they stated in the initial questionnaire. As mentioned earlier, it is unlikely that the participants wanted to cheat the robot; they may simply not have paid enough attention while filling out the preference questionnaire, whereas during the actual game they paid more attention to what they preferred. Moreover, in the LR condition people were struggling to get at least one ticket, while in the HR condition most people had already secured one ticket and were struggling for an additional one; in LR one extra point could help them win at least one cinema ticket, whereas in HR the situation was different. §.§.§ Qualitative Analysis To learn more from the data, we took a closer look at it qualitatively. With this view, we aim to interpret and describe the data to find and understand potential patterns in the participants' decision-making; that is, the open-ended question aimed at exploring the participants' unique perspectives and the motivation behind their decisions. As the result of a qualitative analysis depends directly on the coding scheme used, three researchers contributed to the coding (data retention) to obtain a more credible and reliable scheme <cit.>. Initially, the three researchers labeled the data freely and individually, to obtain categories as general as possible and to avoid biasing each other. We used the WebQDA software [https://www.webqda.net/] to code and label the open-ended answers. The researchers then discussed and compared the labels to enhance the validity of the coding. In this first phase, the three researchers identified 13, 11, and 9 categories, respectively (listed in Table <ref>). To determine the trustworthiness of the codes, we discussed them further, checking their consistency and the reasons for inconsistencies, and in the end reached an agreement on 8 categories (low self-trust, unknown, reward, high self-trust, Emys, non-compliance, good offer, game experience). 
In the next step, we relabeled the data using this coding scheme with the goal of minimizing inconsistency. The inter-rater reliability between each pair of researchers was K=.653 (p=.000), K=.515 (p=.000), and K=.476 (p=.000). After another discussion, we agreed to use the rating with the highest inter-rater reliability. We analyzed the collected data using the coding scheme, and the findings are summarized as follows:
* There is a statistically significant difference in the number of accepted offers between people categorized with different tags (F(7,117)=10.160, p=.000). People with high self-trust were less compliant: those who valued their own knowledge more than the robot's offer tended to reject more often than the rest (Figure <ref>).
* The participants' perception of the robot's persuasiveness differed significantly across the tags (F(7,117)=7.924, p=.000), but no difference existed in the RoSAS scores (Warmth: F(7,117)=.787, p=.599, Competence: F(7,117)=.678, p=.690, Discomfort: F(7,117)=.559, p=.787).
* The distribution of the tags differed significantly across conditions (X^2(21)=41.248, p=.005), as depicted in Figure <ref>. Bonferroni post-hoc tests indicated that this difference stands out for the game experience tag between HR vs. NR and HR vs. 0R. Specifically, people in the HR condition indicated that they accepted the offer to enhance their game experience, taking on the challenge of answering their less desirable choice, and this higher count (11 in HR vs. 1 in NR and 1 in 0R) is statistically significant.
Apart from this quantified reasoning, Figure <ref> shows that LR and NR (conditions with the same reward level, i.e., 1 extra point) have the same number of low self-trust tags, and all conditions except 0R have the same number of reward tags. Furthermore, as discussed earlier, the quantitative results could not verify H1 (i.e., there was no significant difference between HR and LR regarding decision making). Interestingly, we observe a higher number of people with low self-trust in the first condition (LR). The responses to the open-ended question indicate that people in the high reward condition (HR) were less compliant, which could be related to the fact that they wanted a challenge; they may also have been more risk-prone because they did not have to worry too much about the ticket (they had more freedom to change or not and still secure the ticket). In addition, a closer look at the data indicates that in the first control condition (0R), people tended to accept on the first trial because they expected to receive some points. For instance, one individual indicated that "I hoped it asks easier question when I changed the subject." But when they received no points, they stopped accepting the offer (for example, as one individual indicated, "When I did, I did not change anymore"). Overall, the qualitative analysis opened up new insights into the data. As "no qualitative methodology is exclusive" <cit.>, we do not claim that the coding we used is the only applicable scheme. Although three researchers coded the sentences individually and as a group to enhance content validity, one's coding changes over time <cit.>, and there might be other interpretations of the qualitative results. Hence, these findings should be considered cautiously. Furthermore, apart from the coding process, some participants might have been too shy to directly indicate that they wanted the ticket, i.e., the reward.
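For reference, the pairwise agreement values reported above are Cohen's kappa scores; a minimal sketch of how such values can be computed is given below, where the label lists are made-up placeholders rather than the actual coded responses.

# Minimal illustration of the pairwise inter-rater reliability (Cohen's kappa).
# The label lists below are made-up placeholders, not the study data.
from sklearn.metrics import cohen_kappa_score

rater_a = ["reward", "high self-trust", "Emys", "reward", "unknown", "good offer"]
rater_b = ["reward", "low self-trust",  "Emys", "reward", "unknown", "good offer"]

print(cohen_kappa_score(rater_a, rater_b))  # 1.0 = perfect agreement, 0 = chance-level agreement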
§.§ Discussion
In this section, we presented the results of a user study performed to investigate the effect of different levels of social power (particularly reward social power) and of repeated interactions on persuasion. We hypothesized that a higher level of social power would lead to higher persuasion and that, for a fixed level of social power, this effect would not decay over time. The results of this study did not verify the former either subjectively or objectively, and hence we could not conclude that an increase in power leads to higher persuasion. This finding is similar to the results of the second study, in which the increase in ratings (which indirectly increased the level of reward) did not lead to any significant difference in the participants' decision making. Hence, we may conclude that persuasion does not have a linear relationship with the level of power exerted. This is in line with recent research that indicated a nonlinear relationship between power and persuasion <cit.>. On the other hand, Ghazali et al. reported that exerting a strong persuasion attempt acts negatively, causing reactance and leading to low compliance <cit.>. They also indicated that this reactance is associated with higher negative cognition and feelings of anger, which might be equivalent to a higher score on the discomfort dimension of the RoSAS questionnaire. However, our results did not show any significant difference in the discomfort scores of people who rejected more frequently compared to the others (ANOVA: F(3,114)=1.330, p=.268). In this case, although the persuasion attempt in the HR condition was stronger, reactance did not occur; in other words, the rejections were not due to any reactance as measured by the RoSAS questionnaire. Hence, our study confirms that power and persuasion do not have a linear relationship; however, further investigation is required to characterize this nonlinear relationship, and further evidence is required to assess the reactance threshold. Apart from this, another potential reason for this finding might be the small difference between the scores in the LR and HR conditions. Although we considered the higher reward to be more than half of the maximum achievable score (3 out of 5), participants might have valued this extra score differently from our expectations. Clearer information about their state of mind might help to interpret the results. Further, the results led to a partly contradictory finding regarding the latter hypothesis, i.e., repeated interactions. Specifically, although we expected the effect of power on persuasion to remain unchanged over repeated interactions, this hypothesis was verified only in two conditions, HR (high reward) and NR (no robot). In LR (low reward) and 0R (zero reward), in particular, the persuasion was not the same across the three trials. In the case of 0R, the first control condition, people tended to accept the suggestion to change less frequently on the third trial. When they were not gaining any points for changing, they trusted their own knowledge and did not accept. Hence, without any sort of power strategy, the robot did not have any persuasive power and people did not comply with the request. However, contrary to our expectation, in the LR condition the robot, using the same level of power throughout, achieved higher persuasion by the end, in this case even higher than in the HR condition. Interestingly, this finding is in line with Ghazali et al.
<cit.>, who found that a robot with a mid-level of persuasion power was more successful than a high-power robot or no robot, even though, as noted earlier, our results indicated no reactance in HR. We argue that this inconsistency may be due to the value that the participants associated with the reward in each trial. In other words, it seems possible that the value of the reward was not equal in all conditions. That is to say, when the participants were closer to gaining a cinema ticket, a single score might have been worth more than one score in the first trial, when they were far away from getting a ticket. As an example, imagine a participant needing only one more score to gain a cinema ticket; this single score means more to him/her than to a person needing 5 scores. However, as reported earlier, based on the collected data we did not find any significant difference in persuasion between groups of participants with different scores. Furthermore, in spite of what we hypothesized, people new to robots did not show significantly different behavior compared to the others. This finding contradicts the findings of Study 2. A possible explanation might be the small sample size of Study 2: around one-fourth of its sample were new to robots, and they mostly fell in the same condition. Similar to the previous study, 33% of the sample had already interacted with robots. However, in this study, not only was the sample size doubled, but the sample was also more uniformly spread across the groups (each group had around 70 percent of people new to robots, except the LR group, in which 57 percent of the sample were new to robots). Unlike the two previous experiments, we did not find any significant differences in the perception of the robots. In the first study, the two robots used two completely different sets of dialogues in their interactions with the participants. Also, in the second study, the robot used two different strategies and correspondingly different dialogues in its persuasion attempts. However, in this study, the difference between the conditions was minor and only a single strategy was used in the persuasion conditions. In addition, the reward did not increase the likeability of the robot. Hence, we rejected the fourth hypothesis. Finally, we considered the fifth hypothesis (H5) to investigate whether the presence of the robot has any effect on persuasion. Specifically, one might argue that since the robot has no physical interaction with the participants, the persuasion is gained only through the scores that people receive; in other words, a standalone application would do the same job. This was the main reason for adding the fourth condition (NR). However, the results indicated that this argument does not hold, and the robot's attendance adds another channel of persuasion due to its social presence. Hence, the sense of presence of the robot should not be neglected in this case. This finding is in line with Ghazali et al. <cit.> as well. Their results indicated that in one condition, i.e., low psychological involvement, an increase in social agency did not influence compliance, while in the other condition, high psychological involvement, compliance remained the same for medium social agency but dropped for high social agency. Our finding, however, shows a different trend: when there was no robot, we observed significantly less compliance. A potential source of the difference might be that in our study the robot was present, but turned off.
Although most of the participants thought "Emys" was the application, we set the expectation that they would interact with the robot later, in another phase, shortly after finishing the current task. Importantly, we would like to highlight that the results indicated no significant difference between the two control groups, i.e., 0R and NR, in which there was no manipulation (<ref>). This finding further highlights the effect of the manipulation we made: it indicates consistency and decreases the probability of unobserved selection bias in the data.
§.§ Summary of Findings
The findings of this study are four-fold. First, in line with other studies, the results indicated that an increase in reward social power (and, more generally, a stronger persuasion attempt) does not necessarily lead to higher compliance; in other words, a robot with a medium level of power can be more persuasive than one with higher power. Second, prior interaction with robots did not influence the decision making of the participants, unlike what we observed in the second study; that difference might have occurred due to the smaller sample size and the use of a single persuasion attempt there. Third, over repeated interactions, compliance might change depending on the specific circumstances of the study (either the study condition or the user's valuation of the reward); however, further evidence is required to determine how these circumstances affect decision making. Finally, the qualitative analysis of the contextual data gathered in the study revealed new insights into the data. For instance, people with high self-trust were less compliant with the robot. In addition, people in the HR condition reported a stronger game experience and were more willing to take on the challenge by accepting the offer. Furthermore, in the 0R condition, people expected to gain something by accepting the offer in the first trial, which is why they complied with the robot.
§.§ Limitations and suggestions for future studies
One limitation of this study is the use of the questionnaires only before and after the study. In other words, we do not have enough information about the user at each single trial; hence, we could not measure the subjective factors (perception of the robot's persuasiveness or RoSAS) per trial. Furthermore, we did not have enough information about how the participants perceived the trustworthiness of the robot on the second trial. Another potential limitation of this study is the unknown value of each single point to the subject. As discussed earlier, a single score might have a different meaning to each individual. This becomes particularly important considering the cumulative scores over the three trials mentioned earlier. Thus, future studies might reset the scores at each trial, so that the next round is independent of the distance to the reward; alternatively, one could recruit a larger pool and compare people with the same number of remaining scores separately, or, more ideally, ask the subjects how valuable they find each single point. Like any other self-report measure, the primary questionnaire asking about preferences might not be a good measure of the users' preferences. In fact, some people selected their less favorable choice initially and indicated in the open-ended question that they had not indicated their preference carefully before the game. Hence, considering that there may be a cheating incentive, we cannot be sure whether they really selected their preferences carelessly or decided to cheat.
Although discomfort is supposed to be an indicator of negative cognition, it might not be a good predictor of reactance. For instance, several pieces of evidence indicated less compliance in HR than in LR. This might have happened due to reactance to the robot's suggestion when it used a higher level of power, or it might be due to the score remaining to get a cinema ticket. As discussed earlier, the analysis of the game log indicated that on the third trial, people in LR accepted the robot's suggestion more frequently than in HR. The study is limited by the lack of information on reactance, and a better measure is required. In addition, if the evidence suggests the occurrence of reactance, a future study could assess the effect of different power levels to identify the threshold at which reactance occurs. In other words, considerably more work will need to be done to determine the relationship between power level and persuasion with regard to reactance. Further research could also be conducted to determine the effectiveness of behavioral analysis of the participants using the recorded videos; apart from the contextual data that we analyzed earlier, these behavioral cues could enrich the qualitative analysis. In closing, these findings provide insights for future research into how reward social power endows robots with persuasiveness. Further work needs to be done to establish whether other power bases are effective in persuasion.
§ CONCLUSION
In sum, our contributions provide new empirical findings and design implications for using social robots in the compliance-gaining and behavior-change context. Specifically, the link between power and persuasion investigated in this work may contribute to addressing several HCI/HRI research problems. We attempted to design social robots, a specific case of social agents, equipped with social power bases. We selected social robots due to their physicality and their higher sense of presence compared to virtual agents. We operationalized these power bases within persuasion tasks and investigated this potential application of social power in human-agent interaction, i.e., persuasion. The link between power and persuasion, as well as recent applications of persuasive technology, motivated us to investigate this link further. We designed three persuasive strategies inspired by social power bases, particularly expert, reward, and coercion power. Together, the results of these studies provide important insights into persuasion in HRI. We argue that our contribution advances the study of robot persuasion by testing new factors (social power strategies) that may affect persuasion effectiveness. In this direction, our contributions are as follows:
* We identified that the use of social power by social robots is effective for persuading people.
* We investigated this effect using incentivized, real (non-imaginary) choice tasks that increase the external validity of the design.
* We used different within- and between-subject studies as well as mixed designs, and investigated the power-persuasion link both objectively and subjectively in the three studies.
* We concluded that a single strategy could influence users both objectively and subjectively, and that these two channels of persuasion might not both occur at the same time (as observed in the first study).
* We argue that social rewards can be effective at persuading users and, unlike material rewards, they are unlimited and always available at a lower cost.
* We observed that people who are new to robots might be affected by the novelty effect, which threatens the external validity of the results; in this case, a longer interaction might mitigate the effect (Study 2).
* We found that to achieve a significantly different perception of the robot in terms of warmth, competence, and discomfort, the robot's dialogue and social cues should be notably different; in other words, minor differences in dialogue sentences might not lead to large differences in these scores (as observed in the third study).
* For a fixed level of social power, the effect of power on persuasion does not decay over repeated interactions (Study 3); the effect might even become stronger under specific circumstances.
* An increase in the level of power does not linearly increase persuasion (Study 3).
* The social presence of a robot increases the chance of achieving higher persuasion (Study 3).
* We showed that the use of social power strategies (expertise, coercion, and reward) increases robots' ability to influence persuasion outcomes.
* We considered both the role of the persuasion actor (social robots) and that of the persuasion target (human participants) in the success of the persuasion. Hence, our approach has the promise of capturing the dynamic effects of actor and target characteristics on persuasion outcomes.
* Qualitative analysis of the data using open-ended questions opens up further insight into findings that might not be easily interpreted using questions with predefined answers.
Taken together, these findings suggest a role for social power in promoting persuasion. The findings will be of interest for enhancing social interaction and engagement with social robots. Our contributions provide new empirical findings and design implications for robotic persuasion to change attitudes and behavior, such as in a consumer choice setting. In closing, we suggest that the findings are particularly relevant for the design and development of social robots aiming to overcome the human-robot social barrier.
§.§ Future Work
Our findings provide the following insights for future research. The three user studies have raised many questions in need of further investigation. First and foremost is to add more qualitative approaches to better understand the attitudes and behaviors of the subjects; we suggest running systematic interviews after the study using direct or indirect questions. This is an intriguing direction that could be usefully explored in further research. Further research might also explore the effect of social power on persuasion in groups and social collectives. A considerable amount of literature exists on grouping people and robots; however, less is known about the dynamics of social power within groups of humans and robots. On the other hand, as discussed earlier, power exists in bidirectional relationships. As a next step, further research might explore the problem by placing the power with the user. This would ease the design of scenarios that are more feasible and raise fewer ethical issues; for instance, a robot with legitimate power might not be as believable or practical as a legitimate human user. A fruitful direction for further work is thus to design social robots capable of processing social interactions that deal with social power. Furthermore, it is necessary to investigate the ethical quandary of persuasive robots. Digital technology has changed the nature of persuasion in several key respects.
It has increased complexity, blurring the lines between information, entertainment, and influence. With the advent of technology, new tools for persuasion have been provided, for instance, social agents. This gives rise to ethical concerns about the use of persuasion that needs to be considered within the new persuasive technology. Persuasive and powerful robots could support and foster the human user's interests (e.g., in therapy sessions, diet monitoring, or suicide prevention) but could also deceive and manipulate the user (e.g., in sales and political propaganda). Persuasive technologies have demonstrated their effectiveness to negatively impact a user's behavior and generate addictions towards current social technologies. Additionally, a recent study investigated the security risks of persuasive social robots that aim to manipulate people <cit.>. Using three proof of concept, the results suggested that the over-trust in robots could provide a risk of being misused and to hack into sensitive information. This does not lie in the scope of this work, however, a future study is worth investigating it due to its importance in human society. And last but not least, robotic persuaders leading the persuadee might be considered as a specific case of recommender systems. For instance, considering that the "Expert" robot is providing an explanation of why the human should follow its persuasive advice, it would be interesting to put the work presented in this paper in the context of explainable AI; i.e. explainable recommender systems. This would be another important practical implication for future practice. § ACKNOWLEDGMENT The authors would like to thank the participants for their precious time and taking part in the studies. We would like to express our deepest gratitude to Dr. Koorosh Aslansefat. His extensive knowledge and unwavering support have been invaluable to the completion of this work. His guidance has not only enriched this research, but also inspired us to pursue our goals with diligence and integrity. This work was partially supported by national funds through Fundação para a Ciência e a Tecnologia (FCT) with reference UIDB/50021/2020, through the AMIGOS project (PTDC/EEISII/7174/2014), and the research project RAGE (Realising an Applied Gaming Eco-System) funded by the EU under the H2020-ICT-2014-1 program with grant agreement No 644187. IEEEtran 10 url@samestyle keltner2008reciprocal D. Keltner, G. A. Van Kleef, S. Chen, and M. W. Kraus, “A reciprocal influence model of social power: Emerging principles and lines of inquiry,” Advances in experimental social psychology, vol. 40, pp. 151–192, 2008. fiske1993controlling S. T. Fiske, “Controlling other people: The impact of power on stereotyping.” American psychologist, vol. 48, no. 6, p. 621, 1993. fiske1992four A. P. Fiske, “The four elementary forms of sociality: framework for a unified theory of social relations.” Psychological review, vol. 99, no. 4, p. 689, 1992. clark1990emotions C. Clark, “Emotions and micropolitics in everyday life: Some patterns and paradoxes of “place.”,” Research agendas in the sociology of emotions, pp. 305–333, 1990. hall1994subordination J. A. Hall and A. G. Halberstadt, ““subordination” and sensitivity to nonverbal cues: A study of married working women,” Sex Roles, vol. 31, no. 3-4, pp. 149–165, 1994. nass1993anthropomorphism C. Nass, J. Steuer, E. Tauber, and H. 
Reeder, “Anthropomorphism, agency, and ethopoeia: computers as social actors,” in INTERACT'93 and CHI'93 conference companion on Human factors in computing systems.1em plus 0.5em minus 0.4emACM, 1993, pp. 111–112. reeves1996media B. Reeves and C. I. Nass, The media equation: How people treat computers, television, and new media like real people and places.1em plus 0.5em minus 0.4emCambridge university press, 1996. pierro2008motivated A. Pierro, L. Cicero, and B. H. Raven, “Motivated compliance with bases of social power,” Journal of applied social psychology, vol. 38, no. 7, pp. 1921–1944, 2008. french1959bases J. R. French, B. Raven, and D. Cartwright, “The bases of social power,” Classics of organization theory, vol. 7, pp. 311–320, 1959. elias2008fifty S. Elias, “Fifty years of influence in the workplace: The evolution of the french and raven power taxonomy,” Journal of Management History, vol. 14, no. 3, pp. 267–283, 2008. raven1988french B. Raven, “French & raven 30 years later: Power/interaction and interpersonal influence,” in XXIV International Congress of Psychology, Sydney, Australia, 1988. siegel2009persuasive M. Siegel, C. Breazeal, and M. I. Norton, “Persuasive robotics: The influence of robot gender on human behavior,” in Intelligent Robots and Systems, 2009. IROS 2009. IEEE/RSJ International Conference on.1em plus 0.5em minus 0.4emIEEE, 2009, pp. 2563–2568. fogg2002persuasive B. J. Fogg, “Persuasive technology: using computers to change what we think and do,” Ubiquity, vol. 2002, no. December, p. 5, 2002. perloff1993dynamics R. M. Perloff, The dynamics of persuasion: Communication and attitudes in the 21st century.1em plus 0.5em minus 0.4emRoutledge, 1993. bohner2008information G. Bohner, H.-P. Erb, and F. Siebler, “Information processing approaches to persuasion. integrating assumptions from the dual-and single-processing perspectives,” Attitudes and attitude change, pp. 161–188, 2008. reardon1991persuasion K. K. Reardon, Persuasion in practice.1em plus 0.5em minus 0.4emSage, 1991. oreg2014source S. Oreg and N. Sverdlik, “Source personality and persuasiveness: Big five predispositions to being persuasive and the role of message involvement,” Journal of personality, vol. 82, no. 3, pp. 250–264, 2014. anagnostopoulou2017exploring E. Anagnostopoulou, B. Magoutas, E. Bothos, J. Schrammel, R. Orji, and G. Mentzas, “Exploring the links between persuasion, personality and mobility types in personalized mobility applications,” in International Conference on Persuasive Technology.1em plus 0.5em minus 0.4emSpringer, 2017, pp. 107–118. dowding2011encyclopedia K. Dowding, Encyclopedia of power.1em plus 0.5em minus 0.4emSage Publications, 2011. cutlip1960power S. CUTLIP, “Power and persuasion-carter, jf,” 1960. brinol2017power P. Briñol, R. E. Petty, G. R. Durso, and D. D. Rucker, “Power and persuasion: Processes by which perceived power can influence evaluative judgments.” Review of General Psychology, vol. 21, no. 3, p. 223, 2017. hovland1949experiments C. I. Hovland, A. A. Lumsdaine, and F. D. Sheffield, “Experiments on mass communication,” Studies in social psychology in World War II, Vol. 3, 1949. dubois2016dynamics D. Dubois, D. D. Rucker, and A. D. Galinsky, “Dynamics of communicator and audience power: The persuasiveness of competence versus warmth,” Journal of Consumer Research, vol. 43, no. 1, pp. 68–85, 2016. lammers2013power J. Lammers, D. Dubois, D. D. Rucker, and A. D. 
Galinsky, “Power gets the job: Priming power improves interview outcomes,” Journal of Experimental Social Psychology, vol. 49, no. 4, pp. 776–779, 2013. ghazali2019assessing A. S. Ghazali, J. Ham, E. Barakova, and P. Markopoulos, “Assessing the effect of persuasive robots interactive social cues on users’ psychological reactance, liking, trusting beliefs and compliance,” Advanced Robotics, vol. 33, no. 7-8, pp. 325–337, 2019. li2015benefit J. Li, “The benefit of being physically present: A survey of experimental works comparing copresent robots, telepresent robots and virtual agents,” International Journal of Human-Computer Studies, vol. 77, pp. 23–37, 2015. herse2018bon S. Herse, J. Vitale, D. Ebrahimian, M. Tonkin, S. Ojha, S. Sidra, B. Johnston, S. Phillips, S. L. K. C. Gudi, J. Clark et al., “Bon appetit! robot persuasion for food recommendation,” in Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 2018, pp. 125–126. rossi2017gaze S. Rossi and P. D’Alterio, “Gaze behavioral adaptation towards group members for providing effective recommendations,” in International Conference on Social Robotics.1em plus 0.5em minus 0.4emSpringer, 2017, pp. 231–241. chidambaram2012designing V. Chidambaram, Y.-H. Chiang, and B. Mutlu, “Designing persuasive robots: how robots might persuade people using vocal and nonverbal cues,” in Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction.1em plus 0.5em minus 0.4emACM, 2012, pp. 293–300. saunderson2019would S. Saunderson and G. Nejat, “It would make me happy if you used my guess: Comparing robot persuasive strategies in social human–robot interaction,” IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1707–1714, 2019. saunderson2020investigating ——, “Investigating strategies for robot persuasion in social human-robot interaction,” IEEE Transactions on Cybernetics, 2020. lee2019robotic S. A. Lee and Y. J. Liang, “Robotic foot-in-the-door: Using sequential-request persuasive strategies in human-robot interaction,” Computers in Human Behavior, vol. 90, pp. 351–356, 2019. kobberholm2020influence K. W. Kobberholm, K. S. Carstens, L. W. Bøg, M. H. Santos, S. Ramskov, S. A. Mohamed, and L. C. Jensen, “The influence of incremental information presentation on the persuasiveness of a robot,” in Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 2020, pp. 302–304. andrist2013rhetorical S. Andrist, E. Spannan, and B. Mutlu, “Rhetorical robots: making robots more effective speakers using linguistic cues of expertise,” in 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI).1em plus 0.5em minus 0.4emIEEE, 2013, pp. 341–348. freeman2018effect S. M. Freeman, N. Rebout, and K. L. Bales, “Effect of reward type on object discrimination learning in socially monogamous coppery titi monkeys (callicebus cupreus),” American journal of primatology, p. e22868, 2018. nass1994computers C. Nass, J. Steuer, and E. R. Tauber, “Computers are social actors,” in Proceedings of the SIGCHI conference on Human factors in computing systems.1em plus 0.5em minus 0.4emACM, 1994, pp. 72–78. okumura2017social S. Okumura, M. Kimoto, M. Shiomi, T. Iio, K. Shimohara, and N. Hagita, “Do social rewards from robots enhance offline improvements in motor skills?” in International Conference on Social Robotics.1em plus 0.5em minus 0.4emSpringer, 2017, pp. 32–41. wang2017development D. Wang, T. Liu, and J. 
Shi, “Development of monetary and social reward processes,” Scientific reports, vol. 7, no. 1, p. 11128, 2017. kohls2009differential G. Kohls, J. Peltzer, B. Herpertz-Dahlmann, and K. Konrad, “Differential effects of social and non-social reward on response inhibition in children and adolescents,” Developmental science, vol. 12, no. 4, pp. 614–625, 2009. demurie2012effects E. Demurie, H. Roeyers, D. Baeyens, and E. Sonuga-Barke, “The effects of monetary and social rewards on task performance in children and adolescents: liking is not enough,” International journal of methods in psychiatric research, vol. 21, no. 4, pp. 301–310, 2012. midden2009using C. Midden and J. Ham, “Using negative and positive social feedback from a robotic agent to save energy,” in Proceedings of the 4th international conference on persuasive technology.1em plus 0.5em minus 0.4emACM, 2009, p. 12. bechade2016empirical L. Bechade, G. D. Duplessis, and L. Devillers, “Empirical study of humor support in social human-robot interaction,” in International Conference on Distributed, Ambient, and Pervasive Interactions.1em plus 0.5em minus 0.4emSpringer, 2016, pp. 305–316. nijholt2016smart A. Nijholt, “Smart bugs and digital banana peels: Accidental humor in smart environments?” in International Conference on Distributed, Ambient, and Pervasive Interactions.1em plus 0.5em minus 0.4emSpringer, 2016, pp. 329–340. valitutti2016infusing A. Valitutti and T. Veale, “Infusing humor in unexpected events,” in International Conference on Distributed, Ambient, and Pervasive Interactions.1em plus 0.5em minus 0.4emSpringer, 2016, pp. 370–379. khooshabeh2011does P. Khooshabeh, C. McCall, S. Gandhe, J. Gratch, and J. Blascovich, “Does it matter if a computer jokes,” in CHI'11 Extended Abstracts on Human Factors in Computing Systems.1em plus 0.5em minus 0.4emACM, 2011, pp. 77–86. kang2017social S.-H. Kang, D. M. Krum, P. Khooshabeh, T. Phan, C.-Y. Chang, O. Amir, and R. Lin, “Social influence of humor in virtual human counselor's self-disclosure,” Computer Animation and Virtual Worlds, vol. 28, no. 3-4, p. e1763, 2017. vilk2020comedy J. Vilk and N. T. Fitter, “Comedy by jon the robot,” in Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 2020, pp. 80–80. petty1986elaboration R. E. Petty and J. T. Cacioppo, “The elaboration likelihood model of persuasion,” in Communication and persuasion.1em plus 0.5em minus 0.4emSpringer, 1986, pp. 1–24. gass2015persuasion R. H. Gass and J. S. Seiter, Persuasion: Social Inflence and Compliance Gaining.1em plus 0.5em minus 0.4emRoutledge, 2015. torrey2009robots C. Torrey, “How robots can help: Communication strategies that improve social outcomes,” Ph.D. dissertation, Carnegie Mellon University, 2009. torrey2006effects C. Torrey, A. Powers, M. Marge, S. R. Fussell, and S. Kiesler, “Effects of adaptive robot dialogue on information exchange and social relations,” in Proceedings of the 1st ACM SIGCHI/SIGART conference on Human-robot interaction, 2006, pp. 126–133. hashemian2019power M. Hashemian, A. Paiva, S. Mascarenhas, P. A. Santos, and R. Prada, “The power to persuade: a study of social power in human-robot interaction,” in 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN).1em plus 0.5em minus 0.4emIEEE, 2019, pp. 1–8. carpinella2017robotic C. M. Carpinella, A. B. Wyman, M. A. Perez, and S. J. 
Stroessner, “The robotic social attributes scale (rosas): Development and validation,” in Proceedings of the 2017 ACM/IEEE International Conference on human-robot interaction.1em plus 0.5em minus 0.4emACM, 2017, pp. 254–262. cacioppo1982need J. T. Cacioppo and R. E. Petty, “The need for cognition.” Journal of personality and social psychology, vol. 42, no. 1, p. 116, 1982. hashemian2019persuasive M. Hashemian, “Persuasive social robots using social power dynamics,” in Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems.1em plus 0.5em minus 0.4emInternational Foundation for Autonomous Agents and Multiagent Systems, 2019, pp. 2408–2410. anderson2012personal C. Anderson, O. P. John, and D. Keltner, “The personal sense of power,” Journal of personality, vol. 80, no. 2, pp. 313–344, 2012. mourali2013powerful M. Mourali and A. Nagpal, “The powerful select, the powerless reject: Power's influence in decision strategies,” Journal of Business Research, vol. 66, no. 7, pp. 874–880, 2013. hsieh2005three H.-F. Hsieh and S. E. Shannon, “Three approaches to qualitative content analysis,” Qualitative health research, vol. 15, no. 9, pp. 1277–1288, 2005. ghazali2018effects A. S. Ghazali, J. Ham, E. I. Barakova, and P. Markopoulos, “Effects of robot facial characteristics and gender in persuasive human-robot interaction,” Frontiers in Robotics and AI, vol. 5, p. 73, 2018. ghazali2018influence A. S. Ghazali, J. Ham, E. Barakova, and P. Markopoulos, “The influence of social cues in persuasive social robots on psychological reactance and compliance,” Computers in Human Behavior, vol. 87, pp. 58–65, 2018. wilson2003perceived E. V. Wilson, “Perceived effectiveness of interpersonal persuasion strategies in computer-mediated communication,” Computers in Human Behavior, vol. 19, no. 5, pp. 537–552, 2003. kahneman1991anomalies D. Kahneman, J. L. Knetsch, and R. H. Thaler, “Anomalies: The endowment effect, loss aversion, and status quo bias,” Journal of Economic perspectives, vol. 5, no. 1, pp. 193–206, 1991. xiao2005emotion E. Xiao and D. Houser, “Emotion expression in human punishment behavior,” Proceedings of the National Academy of Sciences, vol. 102, no. 20, pp. 7398–7401, 2005. hashemian2020roman M. Hashemian, M. Couto, S. Mascarenhas, A. Paiva, P. A. Santos, and R. Prada, “Investigating reward/punishment strategies in the persuasiveness of social robots,” in 2020 29th IEEE international symposium on robot and human interactive communication (RO-MAN).1em plus 0.5em minus 0.4emIEEE, 2020, pp. 553–560. winkle2019effective K. Winkle, S. Lemaignan, P. Caleb-Solly, U. Leonards, A. Turton, and P. Bremner, “Effective persuasion strategies for socially assistive robots,” in 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI).1em plus 0.5em minus 0.4emIEEE, 2019, pp. 277–285. hall1959silent E. T. Hall and T. Hall, The silent language.1em plus 0.5em minus 0.4emAnchor books, 1959, vol. 948. bruno2017paving B. Bruno, N. Y. Chong, H. Kamide, S. Kanoria, J. Lee, Y. Lim, A. K. Pandey, C. Papadopoulos, I. Papadopoulos, F. Pecora et al., “Paving the way for culturally competent robots: A position paper,” in 2017 26th IEEE international symposium on robot and human interactive communication (RO-MAN).1em plus 0.5em minus 0.4emIEEE, 2017, pp. 553–560. hashemian2018enhancing M. Hashemian, R. Prada, P. A. Santos, and S. 
Mascarenhas, “Enhancing social believability of virtual agents using social power dynamics,” in Proceedings of the 18th International Conference on Intelligent Virtual Agents, 2018, pp. 147–152. francis1992development L. J. Francis, L. B. Brown, and R. Philipchalk, “The development of an abbreviated form of the revised eysenck personality questionnaire (epqr-a): Its use among students in england, canada, the usa and australia,” Personality and individual differences, vol. 13, no. 4, pp. 443–449, 1992. modic2018we D. Modic, R. Anderson, and J. Palomäki, “We will make you like our research: The development of a susceptibility-to-persuasion scale,” PloS one, vol. 13, no. 3, p. e0194119, 2018. ribeiro2016ecosystem T. Ribeiro, A. Pereira, E. Di Tullio, and A. Paiva, “The sera ecosystem: Socially expressive robotics architecture for autonomous human-robot interaction,” in Enabling Computing Research in Socially Intelligent Human-Robot Interaction: A CommunityDriven Modular Research Platform, 2016. ribeiro2014thalamus T. Ribeiro, A. Pereira, E. Di Tullio, P. Alves-Oliveira, and A. Paiva, “From thalamus to skene: High-level behaviour planning and managing for mixed-reality characters,” in Proceedings of the IVA 2014 Workshop on Architectures and Standards for IVAs, 2014. Ribeiro2013 T. Ribeiro, A. Paiva, and D. Dooley, “Nutty tracks: symbolic animation pipeline for expressive robotics.” p. 1, 2013. richards2014handling L. Richards, Handling qualitative data: A practical guide.1em plus 0.5em minus 0.4emSage, 2014. wolfert2020security P. Wolfert, J. Deschuyteneer, D. Oetringer, N. Robinson, and T. Belpaeme, “Security risks of social robots used to persuade and manipulate: A proof of concept study,” in Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, 2020, pp. 523–525. § APPENDIX The dialogues used by the robots in the first study are listed in Table <ref>. In this scenario, Gleen is Expert and Emys is Joker. Table <ref> lists the robot's dialogues in Study 2. In this table, the variable “namePlayer” carries the participant's name. “Animate” function makes the robot to show the specified Facial Expressions and gestures. “Gaze” function makes the robot to look at the specified target in parentheses. “break” functions cause pauses between sentences to have a more natural and understandable speech.
http://arxiv.org/abs/2307.04681v1
20230710163455
The matrix permanent and determinant from a spin system
[ "Abhijeet Alase", "Owen Doty", "David L. Feder" ]
quant-ph
[ "quant-ph" ]
Quantum Science Group, The University of Sydney, NSW 2006, Australia Institute for Quantum Science and Technology and Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 1N4, Canada Institute for Quantum Science and Technology and Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 1N4, Canada In contrast to the determinant, no algorithm is known for the exact determination of the permanent of a square matrix that runs in time polynomial in its dimension. Consequently, non-interacting fermions are classically efficiently simulatable while non-interacting bosons are not, underpinning quantum supremacy arguments for sampling the output distribution of photon interferometer arrays. This work introduces a graph-theoretic framework that bridges both the determinant and permanent. The only non-zero eigenvalues of a sparse non-Hermitian operator M̆ for n spin-1/2 particles are the nth roots of the permanent or determinant of an n× n matrix M, interpreting basis states as bosonic or fermionic occupation states, respectively. This operator can be used to design a simple and straightforward method for the classical determination of the permanent that matches the efficiency of the best-known algorithm. Gauss-Jordan elimination for the determinant of M is then equivalent to the successive removal of the generalized zero eigenspace of the fermionic M̆, equivalent to the deletion of some nodes and reweighting of the remaining edges in the graph such that only n nodes survive after the last step. In the bosonic case, the successive removal of generalized zero eigenspaces for M̆ is also equivalent to node deletion, but new edges are added during this process, which gives rise to the higher complexity of computing the permanent. Our analysis may point the way to new strategies for classical and quantum evaluation of the permanent. § INTRODUCTION The permanent of a square matrix M of dimension n is the symmetric analogue of the usual determinant, but where the signatures of the permutations (the signs appearing in the expansion of the function) are ignored. This quantity appears in a wide variety of applications in pure mathematics and in physics, among other disciplines. For example, the permanent enumerates the number of perfect matchings of a bipartite graph, which has applications in combinatorics <cit.>, chemistry <cit.>, and physics <cit.>. The permanent arises in the identification of multiple targets <cit.>, with applications to defense. In the context of quantum computation and information, the permanent is central to calculating matrix elements in linear optics for many-photon systems <cit.>, and for determining the entanglement of various permutation-invariant quantum states <cit.>. Despite the fact that both the permanent and the determinant yield the same exponential number of terms, n!∼√(2π n)(n/e)^n for large n, the determinant is efficiently computable classically, i.e. scales as a polynomial in n. The well-known Gaussian elimination approach scales as O(n^3), and the fastest current algorithm scales as O(n^2.373) <cit.>. In contrast, determining the permanent of a general matrix is #P-hard, and that of a (0,1) matrix is #P-complete <cit.>.
The discovery of a classically efficient algorithm for the permanent would have profound consequences for the theory of computation, including FP = #P <cit.>, an even stronger statement than the famous P = NP conjecture. The runtime of the fastest known algorithm, namely Ryser's algorithm, scales as O(n2^n) <cit.>. That said, the permanent P_n of matrices with non-negative entries or with vanishing mean can be approximated in time polynomial in (n,1/ϵ) using randomized algorithms <cit.>, up to additive error ϵ P_n, for arbitrary ϵ>0; likewise for positive semidefinite matrices <cit.>. The #P-hardness of computing the permanent was recast in the framework of linear optics <cit.>, which motivated the realization that quantum devices will always outperform classical algorithms in sampling the output distribution of photons emerging from an optical interferometer apparatus, the so-called Boson Sampling problem <cit.>. Numerous Boson Sampling experiments have been conducted since then; Refs. Arute2019,Zhong2020,Madsen2022 provide some recent examples. In contrast, the ease of calculating the determinant implies that non-interacting fermions are efficiently simulatable on a classical computer <cit.>. It was recently shown that the permanent of the matrix M can be computed as the determinant of a family of matrices M̆ of minimum dimension 2^n-1 <cit.>. These matrices define the adjacency of a directed n-dimensional hypercube graph, whose edge weights correspond to elements of the matrix of interest, and with the first and last vertices sharing the same label to form a cycle. It was subsequently noted that these graphs encode an algebraic branching program <cit.>: the product of edge weights on each of the n! possible branches corresponds to a term in the expansion of the permanent. The present work builds on the above construction by identifying a key feature: the structure of the matrix M̆ coincides with the dynamics of n spin-1/2 particles governed by a non-Hermitian operator. If the permanent of M is non-zero, then the only non-zero eigenvalues of M̆ are the nth roots of the permanent; alternatively, M̆^n diagonalizes into n blocks labeled by the total spin, each of which has the permanent as the only non-zero eigenvalue. Thus, the n-fold product of M̆ on a fiducial state such as |0^⊗ n⟩ immediately yields P_n|0^⊗ n⟩. The n-sparsity of M̆ ensures that this can be effected on a classical computer with n2^n arithmetic operations, matching the performance of the best-known algorithm. Interpreting the basis states as bosonic occupation states yields the standard expression for the permanent in terms of products of (hard-core) bosonic operators. Interpreting these instead as fermionic occupation states immediately yields the determinant, with signed edge weights in the graph. If M is a full-rank matrix, then Gaussian elimination for the calculation of the determinant corresponds to successively projecting out the generalized zero eigenvectors of M̆, so that after n iterations the initial rank-deficient matrix of dimension 2^n-1 is reduced to an n-dimensional full-rank matrix. From the perspective of the algebraic branching program, each iteration deletes vertices and the edges incident to them, and reweights the remaining edges, until only one path remains in the cycle. This approach uncovers another close connection between fermions and the determinant on the one hand, and between bosons and the permanent on the other. This paper is organized as follows. The permanent and determinant are reviewed in Sec.
<ref>, and an example is provided for the representation of the permanent as an algebraic branching program. Sec. <ref> introduces the spin model that maps the problem of computing the permanent of an n× n matrix M to the problem of computing the eigenvalues of a 2^n× 2^n matrix M̆, and provides a classical algorithm for computing the permanent that matches the best current methods. The spin model is expressed in terms of non-interacting fermions and hard-core bosons in Sec. <ref>. In Sec. <ref>, we discuss the connection between Gaussian elimination, generalized zero eigenspaces of M̆ and its visualization on the associated graph. The prospects for the development of a quantum algorithm for computing the permanent based on our approach are discussed in Sec. <ref>. § REVIEW §.§ Permanent and determinant Consider the n× n matrix M, defined as M=[ w_0,0 w_0,1 ⋯ w_0,n-1 w_1,0 w_1,1 ⋯ w_1,n-1⋮ ⋮ ⋱ ⋮ w_n-1,0 w_n-1,1 ⋯ w_n-1,n-1 ]. The determinant and permanent of M are respectively defined as D_n = |M|=(M)≡∑_σ∈ S_n((σ) ∏_i=0^n-1w_i,σ_i); P_n = |M|_P=(M)≡∑_σ∈ S_n( ∏_i=0^n-1w_i,σ_i), where S_n is the symmetric group on the list {0,1,2,…,n-1}, σ is a function that reorders this list (effects a permutation of the elements), σ_i is the ith entry of the list after permutation, and (σ)=(-1)^N(σ) is the signature of the permutation, where N(σ) is the number of inversions needed. While the expansion of the determinant and permanent includes the same n! terms, the signs appearing in the determinant allow for its efficient evaluation. While exceedingly simple, the n=3 case is illustrative and will be revisited throughout this work. The determinant is explicitly written |M| = w_0,0(w_1,1w_2,2-w_1,2w_2,1) - w_0,1(w_1,0w_2,2-w_1,2w_2,0) + w_0,2(w_1,0w_2,1-w_1,1w_2,0). The Gaussian elimination algorithm uses pivoting to reduce the matrix to row echelon form (i.e. an upper triangular matrix), so that the determinant is the product of the diagonal elements. For reasons that will become clear in Sec. <ref>, consider instead a reduction to a lower triangular matrix. The first reduction yields |M|=| w_0,0' w_0,1' 0 w_1,0' w_1,1' 0 w_2,0 w_2,1 w_2,2|, where w_0,0' = w_0,0-w_0,2w_1,0/w_1,2; w_0,1'=w_0,1-w_0,2w_1,1/w_1,2; w_1,0' = w_1,0-w_1,2w_2,0/w_2,2; w_1,1'=w_1,1-w_1,2w_2,1/w_2,2. The second and last reduction yields |M|=| w_0,0” 0 0 w_1,0' w_1,1' 0 w_2,0 w_2,1 w_2,2|, where w_0,0”=w_0,0'-w_0,1'w_1,0'/w_1,1'. The determinant is then |M| = w_0,0”w_1,1'w_2,2 =(w_0,0'w_1,1'-w_0,1'w_1,0')w_2,2 = (w_0,0-w_0,2w_1,0/w_1,2) (w_1,1-w_1,2w_2,1/w_2,2) - (w_0,1-w_0,2w_1,1/w_1,2) (w_1,0-w_1,2w_2,0/w_2,2)w_2,2. While there are eight terms in the expansion, the signs the two cross terms (-w_0,2w_1,0/w_1,2)(w_1,1)w_2,2 and -(-w_0,2w_1,1/w_1,2)(w_1,0)w_2,2 cancel, leaving 6 unique terms in the expansion. The sign structure of the determinant guarantees that these cancellations occur for all values of n, which ensures that Gaussian elimination is classically efficient. For the evaluation of the permanent, one cannot follow the same procedure as above by simply eliminating all signs, because the cross terms arising from expanding the final product [for example in Eq. (<ref>)] will now add instead of cancelling. Our analysis in Sec. <ref> provides insight into why this is the case. §.§ Permanent as an algebraic branching program Building on the work of Grenet and others <cit.>, Hüttenhain and Ikenmeyer <cit.> noted that the matrix permanent for n=3 can be expressed as a binary algebraic branching program. The n! 
terms correspond to branches, or routes, traversing between antipodes of the n-dimensional hypercube, such that the product of edge weights for each branch corresponds to a term in the expansion of the permanent. Fig. <ref> illustrates the idea for the n=3 case, where the three main branches from the top to bottom vertices (labeled in red) are explicitly shown. The edge weights are chosen so that their products for each branch correspond to a term in the permanent; c.f. Eq. (<ref>) with signs removed. The branching program is the analog of the expansion of the determinant by matrix minors. § SPIN MODEL §.§ Definition and structure The binary algebraic branching program for the 3× 3 permanent <cit.> suggests a general construction for arbitrary n. Suppose one has a system of spin-1/2 particles, located on sites j=0,1,…,n-1. Each particle can access states |0⟩ and |1⟩, corresponding to spin down and spin up respectively. The spin model that is the central focus of the current work is defined by the operator M̃=∑_ i∑_j=0^n-1w_h( i),jσ^+_j | i⟩⟨ i|+∏_jσ^-_j, where σ^+_i=|1⟩⟨ 0|_i and σ^-_i=|0⟩⟨ 1|_i are site-dependent raising and lowering operators. The first sum is over all n-bit strings i so that a complete and orthonormal basis of n-spin states with dimension 2^n is represented by the unit vectors | i⟩ =|{0,1}⟩^⊗ n. The Hamming weight of the bitstring is denoted by h( i), coinciding with the total n-particle spin. Evidently the last term in Eq. (<ref>) is equivalent to | 0⟩⟨ 1|. The operator M̃ defined by Eq. (<ref>) corresponds to the adjacency matrix for a weighted directed graph that effects transitions from the | 0⟩ state to the | 1⟩ state via all possible single-spin raising operations, and then back to | 0⟩ again to complete one cycle. The transition amplitudes are indexed by two integers: the total Hamming weight of the initial state and the target site. With σ^+|1⟩=0, the second index can never be repeated as the value of first index increases; thus, the first term in M̃ encodes all possible transitions from | 0⟩ to | 1⟩ without repetitions. Fig. <ref>(a) depicts M̃ for n=3, and includes the vertex / state labelings for clarity. The orientation is chosen so that each horizontal layer of the hypercube contains vertices labeled by bitstrings with the same Hamming weight h. As discussed in detail in what follows, it is convenient to define an alternate encoding of the cyclic behavior of M̃ by eliminating the transition | 0⟩⟨ 1|, and instead directly transition from states with Hamming weight n-1 to the state | 0⟩. The associated operator is M̆=∑_ i'∑_j=0^n-1w_h( i),jσ^+_j | i⟩⟨ i| +∑_j=0^n-1w_n-1,j| 0⟩⟨ 1|σ_j^+, where the prime on the first term denotes that the sum is over all bitstrings but not including those with Hamming weight h( i)=n-1. In this case, the basis state | 1⟩ is never occupied, and the Hilbert space dimension is reduced to 2^n-1. This alternate operator is depicted in Fig. <ref>(b). Consider next the (n+1)th (nth) power of M̃ (M̆), which will be of central importance in what follows. The derivation is provided in Appendix <ref>, and the result for M̃^n+1 is given in Eq. (<ref>): M̃^n+1 = ∑_j_0,…,j_n-1[ (w_0,j_0σ^+_j_0)⋯(w_n-1,j_n-1σ^+_j_n-1)| 0⟩⟨ 1|. + (w_0,j_0σ^+_j_0)⋯(w_n-2,j_n-2σ^+_j_n-2)| 0⟩⟨ 1| ×(w_n-1,j_n-1σ^+_j_n-1) + …+.| 0⟩⟨ 1|(w_0,j_0σ^+_j_0)⋯(w_n-1,j_n-1σ^+_j_n-1)]. The expression for (M̆)^n is identical save for the leading term in Eq. (<ref>). The above expression can be seen to be of block diagonal form as follows. 
Each term in the expansion above is defined by the operators ∏_r=0^m-1σ^+_j_r| 0⟩⟨ 1| ∏_s=m^n-1σ^+_j_s, and is labeled by the index m=0,1,…,n. For m=0, the operator is only | 0⟩⟨ 0|, i.e. a block of dimension 1 defined by a single basis state with zero Hamming weight. The m=1 case includes all operators of the form σ^+_j_0| 0⟩⟨ 1|∏_s=1^n-1σ^+_j_s =σ^+_j_0| 0⟩⟨ 1|∏_s=0^n-1σ^+_j_sσ^-_j_0 =σ^+_j_0| 0⟩⟨ 0|σ^-_k_0, which corresponds to a block spanned by the n basis vectors defined by σ^+_j| 0⟩, which are labeled by all bitstrings with unit Hamming weight. Evidently, each block is indexed by the Hamming weight (or total spin) m, and has dimension given by the binomial factor (n m). It is convenient to express M̃^n+1 as the direct sum M̃^n+1=M̃_0⊕M̃_1⊕⋯⊕M̃_n =⊕_m=0^nM̃_m, where M̃_m corresponds to the block matrix labeled by m and is defined as M̃_m=M̃^m| 0⟩⟨ 0|M̃^n-m+1, which has the form M̃_m = ∑_j_0,…,j_n-1(w_0,j_0σ^+_j_0) (w_1,j_1σ_j_1^+)⋯ × (w_m-1,j_m-1σ_j_m-1^+)| 0⟩⟨ 1|(w_n-1,j_n-1σ^+_j_n-1) × (w_n-2,j_n-2σ^+_j_n-2)⋯(w_m,j_mσ^+_j_m), as proven as Eq. (<ref>) in Appendix <ref>. Likewise, M̆_m=M̆^m| 0⟩⟨ 0|M̆^n-m. §.§ Eigensystem Now turn to the eigenvalues and eigenvectors of the spin model, defined by Eq. (<ref>) or its alternative expression Eq. (<ref>). A key observation is that | 0⟩ is an eigenvector of M̃^n+1 or M̆^n. Consider the action of M̃^n+1 on the state | 0⟩, which only involves the m=0 block: M̃^n+1| 0⟩ = ∑_j_0,…,j_n-1 | 0⟩⟨ 1|(w_0,j_0w_1,j_1⋯ w_n-1,j_n-1) ×σ^+_j_0σ^+_j_1⋯σ^+_j_n-1 | 0⟩. The action of σ^+_j_0σ^+_j_1⋯σ^+_j_n-1 defines all possible n spin-flip paths from the | 0⟩ state to | 1⟩, and each is weighted by the factor w_0,j_0w_1,j_1⋯ w_n-1,j_n-1. This is precisely the algebraic branching program discussed in Sec. <ref>; thus M̃^n+1| 0⟩=M̃_0| 0⟩=P_n| 0⟩; the eigenvalue is the permanent of M. Likewise, M̆^n| 0⟩=P_n| 0⟩. The permanent is also an eigenvalue of every other block of M̃^n+1. Defining the block-m state |ψ_m⟩=M̃^m| 0⟩, one obtains M̃_m|ψ_m⟩ = M̃^m| 0⟩⟨ 0| M̃^n-m+1M̃^m| 0⟩ = M̃^m| 0⟩⟨ 0|M̃^n+1| 0⟩ =P_nM̃^m| 0⟩ = P_n|ψ_m⟩. The operator M̃^n+1 therefore has n+1 degenerate eigenvalues corresponding to the permanent, with associated eigenvectors |ψ_m⟩=M̃^m| 0⟩. Likewise, the operator M̆^n has n degenerate eigenvalues P_n and associated eigenvectors M̆^m| 0⟩. For the rest of the discussion in this section, we assume that P_n 0. Because M̃ is a cycle, if λ is an eigenvalue of M̃^n+1, then the eigenvalues λ_j of M̃ must include all (n+1)th roots of λ (see for example Ref. Watkins2004). For the present case λ=P_n, one obtains λ_j=P_n^1/(n+1)e^-i2π j/(n+1), j=0,1,…,n; likewise, the eigenvalues of M̆ are (P_ne^-i2π j)^1/n, j=0,1,…,n-1. Given that M̆ has degeneracy n and therefore only requires n powers to return the state | 0⟩ to itself, it is slightly more convenient to work with M̆ in what follows. The eigenvectors of M̆ with eigenvalues corresponding to the nth roots of P_n can be written as |ϕ_n(k)⟩=e^-i2π k/n∑_j=0^n-1e^i2π jk/n(M̆/P_n^1/n)^j|0^⊗ n⟩, where k,j=0,1,….n-1. The corresponding eigenvalues can be found directly: M̆|ϕ_n(k)⟩=e^-i2π k/n∑_j=0^n-1e^i2π jk/n P_n^1/n(M̆/P_n^1/n)^j+1|0^⊗ n⟩ =e^-i4π k/nP_n^1/n∑_j=0^n-1e^i2π(j+1)k/n(M̆/P_n^1/n)^j+1|0^⊗ n⟩ =e^-i4π k/nP^1/n∑_j=1^ne^i2π jk/n( M̆/P_n^1/n)^j|0^⊗ n⟩ =e^-i4π k/nP_n^1/n∑_j=0^n-1e^i2π jk/n( M̆/P_n^1/n)^j|0^⊗ n⟩ +|0^⊗ n⟩-|0^⊗ n⟩ =e^-i2π k/nP_n^1/n|ϕ_n(k)⟩. The eigenvalues are therefore λ_k(M̆)=e^-i2π k/nP_n^1/n =(e^-i2π kP_n)^1/n. 
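These spectral statements can be checked numerically for small n. The sketch below is an illustrative check added here rather than code from the original construction; it assumes the bit convention that site j corresponds to bit j of the integer basis label, builds the dense 2^n-dimensional operator M̆ from a 4× 4 matrix M with positive entries (so that P_n is safely non-zero), and confirms that the n largest-magnitude eigenvalues each satisfy λ^n=P_n, the remaining eigenvalues belonging to the (defective) zero eigenspace.

# Numerical check: the n non-zero eigenvalues of \breve{M} are n-th roots of perm(M).
import itertools
import numpy as np

def permanent_bruteforce(A):
    n = A.shape[0]
    return sum(np.prod([A[i, p[i]] for i in range(n)])
               for p in itertools.permutations(range(n)))

n = 4
M = np.random.default_rng(1).uniform(0.5, 1.5, size=(n, n))  # positive entries keep perm(M) away from 0

dim = 2 ** n                         # includes the unused |1...1> state, which only adds a trivial zero
Mb = np.zeros((dim, dim))            # the operator \breve{M}; integer i encodes the spin bitstring
for i in range(dim):
    h = bin(i).count("1")            # Hamming weight (total spin) of the source state |i>
    if h == n:
        continue
    for j in range(n):
        if not (i >> j) & 1:         # sigma^+_j acts only on a site that is still |0>
            target = 0 if h == n - 1 else i | (1 << j)   # the top layer wraps back to |0...0>
            Mb[target, i] += M[h, j]

evals = np.linalg.eigvals(Mb)
evals = evals[np.argsort(-np.abs(evals))]       # sort by decreasing magnitude
P = permanent_bruteforce(M)
print(np.allclose(evals[:n] ** n, P))           # expected True: each large eigenvalue is an n-th root of perm(M)
print(np.max(np.abs(evals[n:])))                # the rest belong to the defective zero eigenspace: numerically small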
The derivation proceeds analogously for M̃, and one obtains λ_k(M̃)=(e^-i2π kP_n)^1/(n+1). The simplest case corresponds to k=0: |ϕ_n(0)⟩=∑_j=0^n-1(M̆/P_n^1/n)^j |0^⊗ n⟩, with eigenvalue λ_0=P_n^1/n. Consequently, ±λ_0 would be the only real eigenvalues if the elements of M were real and positive. Remarkably, Eq. (<ref>) and (<ref>) constitute the only non-zero eigenvalues of M̆ and M̃, respectively. The periodic nature of M̃ and M̆ gives rise to eigenvectors that are expanded in a Fourier-like series, much like in a translationally invariant system. In Eq. (<ref>) and Eq. (<ref>), the indices j and k label `position' and `wavevector', respectively. In the present case, the position is the index of the block, corresponding to the Hamming weight or total spin, while the `wavevector' serves essentially the same purpose as in uniform systems: as a canonically conjugate quantum number. Conceptually, one can consider successive applications of M̃ or M̆ as moving a walker from `site' | 0⟩ to `site' M̃| 0⟩ or M̆| 0⟩, etc., one step (bit flip) at a time, with all states sharing a given Hamming weight treated as equivalent, until it again reaches its starting state (see also Fig. <ref>). Given that the determination of the matrix permanent corresponds to an algebraic branching program from the state | 0⟩ to itself, effecting the spin transitions in the opposite direction (i.e. reversing the arrows in Fig. <ref>) corresponds to taking the adjoint (complex conjugate transpose) of M̃ or M̆. Eq. (<ref>) then becomes (M̃^†)^n+1| 0⟩ =(M̆^†)^n| 0⟩=P_n^*| 0⟩. One can then construct Hermitian operators M̃_R = M̃^n+1+(M̃^†)^n+1, M̃_I = i[M̃^n+1-(M̃^†)^n+1], satisfying the eigenvalue equations M̃_R| 0⟩=(P_n)| 0⟩; M̃_I| 0⟩=(P_n)| 0⟩. Similar expressions apply to M̆. While the operators (<ref>) and (<ref>) are arguably more physical, their experimental realization could remain challenging given the complexity of the description, Eq. (<ref>). Also, unlike the case for M̃^n+1 or M̆^n alone, the non-zero eigenvalues for the remaining blocks of  (<ref>) and (<ref>) are different from (P_n) and (P_n). §.§ Classical Algorithm for the permanent While the result (<ref>) is a statement about the eigenvalues, it suggests a straightforward approach to the calculation of the permanent without needing to determine the spectrum of M̃ or M̆. Rather, one must only compute M̃^n| 0⟩=P_n| 1⟩ M̃^n+1| 0⟩=P_n| 0⟩; M̆^n| 0⟩=P_n| 0⟩. In other words, apply M̃ or M̆ successively to the state | 0⟩ until all the amplitude is again concentrated on the state | 0⟩, and read out the result. The algorithm for the permanent then corresponds to an n-fold or (n-1)-fold product of matrices with dimension (n i+1)×(n i) (i={0,1,…,n-1}). Each column of the ith matrix contains exactly n-i non-zero elements, so that the matrices are exponentially sparse. The total number of operations (multiplications and additions) is ∑_i=1^n(n i)(2i)=n2^n. In comparison, Ryser's algorithm requires a total of n2^n+1-(n+1)^2∼ 2n2^n operations for large n <cit.>. The scaling of the number of operations in the present case therefore matches that of the fastest-known algorithm, with a straightforward implementation, which could make it useful for practical applications. § FERMIONIC AND BOSONIC REPRESENTATIONS The spin model (<ref>) can be naturally represented in terms of Schwinger bosons, and fermions via the Jordan-Wigner transformation. These are discussed in the next two subsections. 
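Before turning to those representations, the layered matrix-vector procedure of the preceding subsection can be written out concretely. The sketch below is our own illustrative implementation (the function and variable names are not taken from the reference): amplitudes are stored per occupation bitmask, and the hth step applies row h of M only to the states of Hamming weight h, mirroring the exponentially sparse (n choose i)-dimensional matrices and the roughly n2^n operation count quoted above.

from itertools import permutations
import numpy as np

def permanent_layered(M):
    # successive application of the sparse layer maps; bit j of a mask
    # records whether site j has already been raised
    n = len(M)
    amp = {0: 1.0 + 0.0j}                 # start in |00...0>
    for h in range(n):                    # layer h -> h+1 uses row h of M
        nxt = {}
        for mask, a in amp.items():
            for j in range(n):
                if not (mask >> j) & 1:   # sigma^+_j acts only on empty sites
                    m2 = mask | (1 << j)
                    nxt[m2] = nxt.get(m2, 0.0) + a * M[h][j]
        amp = nxt
    return amp[(1 << n) - 1]              # amplitude accumulated on |11...1>, i.e. P_n

# check against the permutation expansion for a small random matrix
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
brute = sum(np.prod([M[h, p[h]] for h in range(4)]) for p in permutations(range(4)))
print(permanent_layered(M), brute)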
§.§ Bosons Spin-1/2 particles can be mapped to Schwinger bosons as follows: σ_j^+ = σ_j^x-iσ_j^y=a_j^†b_j; σ_j^- = σ_j^x+iσ_j^y=b_j^†a_j; σ_j^z = a^†_ja_j-b^†_jb_j. Each spin operator therefore involves two `species' of bosons, satisfying the relations [a_i,a_j^†]=δ_ij; [a_i,a_j] =[a_i^†,a_j^†]=0 and likewise for b-species bosons, where [x,y]=xy-yx is the commutator. These are supplemented with the unit-occupancy condition a_j^†a_j+b^†_jb_j=1, which specifies that each site is occupied by exactly one boson of either species. The Schwinger approach therefore maps spins to hard-core two-species bosons at exactly unit filling. The model (<ref>) expressed in terms of Schwinger bosons is then M̃_b=∑_ i∑_j=0^n-1w_h( i),ja_j^†b_j | i⟩⟨ i|+∏_jb_j^†a_j, where the bit in the string i is unity (zero) if occupied by a boson of species a (b), and the zero state is | 0⟩=∏_j=0^n-1b_j^†|𝒪⟩. The graph associated with M̃_b is indistinguishable from that of M̃, i.e. Fig. <ref> for n=3. It is instructive to write the action of the nth power of M̃_b on the zero state: M̃_b^n| 0⟩ = (∑_j=0^n-1w_n-1,ja_j^†b_j) (∑_j=0^n-1w_n-2,ja_j^†b_j) × ⋯×(∑_j=0^n-1w_0,ja_j^†b_j) ∏_j=0^n-1b_j^†|𝒪⟩ = (∑_j=0^n-1w_n-1,ja_j^†) (∑_j=0^n-1w_n-2,ja_j^†) × ⋯×(∑_j=0^n-1w_0,ja_j^†) |𝒪⟩. In the second line above, all operators for the b-species bosons can be omitted because each creation of an a-species boson must be accompanied by the annihilation of a b-species boson, and after n powers of M̃_b all sites have been accounted for. Furthermore, the hard-core condition acts in the same way as a Pauli exclusion principle: if a b-species boson occupies site j, then the b_j^† operator returns zero. Expansion of the terms in Eq. (<ref>) then returns the permanent because the b-species bosons all commute. §.§ Fermions The Jordan-Wigner transformation corresponds to mapping the spin operators to `spinless' fermions: σ_j^+ = exp(iπ∑_k=j+1^n-1f_k^†f_k)f_j^†; σ_j^- = exp(iπ∑_k=j+1^n-1f_k^†f_k)f_j; σ_j^z = 2f_j^†f_j-1, where the site-dependent fermionic creation (f^†_j) and annihilation (f_j) operators satisfy the anticommutation relations {f_i,f_j^†}=δ_ij; {f_i,f_j} ={f_i^†,f_j^†}=0, and {x,y}=xy+yx. The first of these automatically ensures the Pauli condition forbidding double occupancy of sites; thus, basis states can therefore again be indexed by bitstrings i, but now where 0 (1) signifies the absence (presence) of a fermion at position j. Canonical ordering is assumed, where creation operators appear with indices in descending order; for example |1010⟩=f_2^†f_0^†|𝒪⟩, where |𝒪⟩ denotes the particle vacuum. The phases appearing in Eq. (<ref>) ensure that the fermions anticommute on all sites; alternatively, they ensure the normal / canonical ordering of basis states. For example: f_0^†|1010⟩ = f_0^†f_2^†f_0^†|𝒪⟩=0; f_1^†|1010⟩ = f_1^†f_2^†f_0^†|𝒪⟩=-f_2^†f_1^†f_0^†|𝒪⟩=-|1110⟩; f_3^†|1010⟩ = f_3^†f_2^†f_0^†|𝒪⟩=|1011⟩. The Jordan-Wigner transformation (<ref>) counts the number of fermions to the right of (i.e. with index greater than) where the spin is flipped / fermion is created, and multiplies the transition amplitude by -1 if this number is odd. In this way, the negative signs arising from the fermionic anticommutation are cancelled and the transition amplitudes all remain positive. 
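The parity count just described is compact in code; the helper below is our own (the name jw_sign and the bitmask encoding, with bit k recording the occupation of site k, are assumptions made for illustration).

def jw_sign(occ_mask, j, n):
    # (-1) raised to the number of occupied sites k > j, i.e. the Jordan-Wigner
    # string exp(i*pi*sum_{k>j} f_k^dag f_k) evaluated on the occupation occ_mask
    count = sum((occ_mask >> k) & 1 for k in range(j + 1, n))
    return -1 if count % 2 else +1

# the |1010> example above (sites 0 and 2 occupied): creating a fermion at site 1
# sees one occupied site to its right, giving the phase -1 that mirrors
# f_1^dag |1010> = -|1110>
print(jw_sign(0b0101, j=1, n=4))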
The model (<ref>) expressed in terms of fermions then becomes M̃_ JW=∑_ i∑_j=0^n-1w_h( i),js_ i,j f_j^†| i⟩⟨ i|+∏_jf_j, where the function s_ i,j incorporates the Jordan-Wigner phases for creation of a fermion at position j on a basis state with occupation indexed by occupation state | i⟩ defined by bitstring i. An explicit example is shown for the n=3 case in Fig. <ref>(a). Consider the |100⟩→|110⟩ and |010⟩→|110⟩ transitions. For the former transition, a fermion is created in site 1, to the right of a fermion already in site 0, so there is no additional Jordan-Wigner phase; likewise, the final state |110⟩=f_1^†f_0^†|𝒪⟩ is already normal ordered. Thus, the edge weight w_1,1 remains unchanged. For the latter transition, the fermion created in site 0 is to the left of a fermion already in site 1, which yields a negative contribution from the Jordan-Wigner transformation, reflected in the signed edge weight -w_1,0 in Fig. <ref>(a). At the same time, the final state |110⟩=f_0^†f_1^†|𝒪⟩ requires one fermionic anticommutation to bring it back to normal ordering, which cancels the negative sign and effectively restores the total edge weight to its original value in the spin representation. Thus, within the context of a binary branching process, the sum of the path weights of Fig. <ref>(a) still constitute the permanent, despite the appearance of signed edge weights. To construct an algebraic branching program for true fermions one either must maintain all edge weights and keep track of the fermionic anticommutation relations defining the occupation states, in which case the model is M̃_f=∑_ i∑_j=0^n-1w_h( i),jf_j^† | i⟩⟨ i|+∏_jf_j and | i⟩ represent occupation states; or one must account for all Jordan-Wigner phases to appropriately sign all edge weights but treat the states | i⟩ instead as ordinary bitstrings, in which case the model is instead M̃_f, alt=∑_ i∑_j=0^n-1w_h( i),j s_ i,jσ^+_j| i⟩⟨ i|+∏_jσ^-_j, and now the σ^± are interpreted as classical bit-flip operators. When expressing the fermionic model in terms of creation and annihilation operators, Eq. (<ref>) is preferable, but Eq. (<ref>) is more convenient in the graph adjacency matrix representation. Now, Fig. <ref>(a) depicts a truly signed binary branching process, and the sum of the path weights constitute the determinant, rather than the permanent, of M for n=3. The w_0,0w_1,1w_2,2 path serves as the reference, where the second indices for the weights in this product constitute the integer list {012}. All other paths are characterized by an overall minus (plus) sign if the integer list derived from the second index of the weights for that path corresponds to an even (odd) number of inversions of the reference list; for example, the odd permutations {021}, {102}, and {210} correspond to paths with negative total weights -w_0,0w_1,2w_2,1, -w_0,1w_1,0w_2,2, and -w_0,2w_1,1w_2,0, respectively. Thus, one expects that the eigenvalues of the operator (<ref>) are the determinant, as will be discussed further below. Another noteworthy property of the fermionic graph is that it is unbalanced: there is no vertex sign switching that can remove all of the minus signs <cit.>; alternatively expressed, there is no diagonal matrix whose entries are {1,-1} that can map Eq. (<ref>) to a form without any s_ i,j factors. (This is another way of stating that the determinant derived in this way cannot be mapped to the permanent by a local unitary, though this is already obvious as unitary transformations preserve the eigenvalues). 
There remains the intriguing possibility that there is a non-unitary operator that can effect the map, but this is not explored in the present work. Similarly, it is not possible to map existing weights to their negatives in order to map the determinant to the permanent. For the n=3 case, one could reassign w_1,2→ -w̅_1,2 and w_2,1→ -w̅_2,1 to remove the negative signs on all edges with these labels in Fig. <ref>(a), but this still leaves signs on edges labeled by w_1,1 which cannot be removed. As in the bosonic case, it is worthwhile to express the action of the nth power of M̃_f, alt on the vacuum state: M̃_f^n| 0⟩ = (∑_j=0^n-1w_n-1,jf_j^†) (∑_j=0^n-1w_n-2,jf_j^†) × ⋯×(∑_j=0^n-1w_0,jf_j^†) |𝒪⟩. This simple and apparently separable representation for the output state, as products of similar terms, is possible because of the Pauli principle and the anticommutation relations: any attempted creation of a fermion in an already-occupied site is zero, and the signs of the final many-fermion states will reflect the number of permutations required to express them in normal ordering. Furthermore, the result is clearly the determinant D_n (or its negative) rather than the permanent. The states (<ref>) and (<ref>) reveal a close connection between the determinant and the permanent expressed in terms of indistinguishable particles. Consider explicitly the n=3 case: M̃_f^3| 0⟩ = (w_2,0f_0^†+w_2,1f_1^† +w_2,2f_2^†) × (w_1,0f_0^†+w_1,1f_1^†+w_1,2f_2^†) × (w_0,0f_0^†+w_0,1f_1^†+w_0,2f_2^†) |𝒪⟩. = w_2,2f_2^†(w_1,1f_1^†w_0,0f_0^†+w_1,0f_0^† w_0,1f_1^†) + w_2,1f_1^†(w_1,2f_2^†w_0,0f_0^† +w_1,0f_0^†w_0,2f_2^†) + w_2,0f_0^†(w_1,1f_1^†w_0,2f_2^† +w_1,2f_2^†w_0,1f_1^†)|𝒪⟩ = [w_2,2(w_1,1w_0,0-w_1,0w_0,1). + w_2,1(-w_1,2w_0,0+w_1,0w_0,2) + .w_2,0(-w_1,1w_0,2+w_1,2w_0,1)] f_2^†f_1^†f_0^†|𝒪⟩ = D_3f_2^†f_1^†f_0^†|𝒪⟩. Recapitulating the arguments of Sec. <ref> but for M̃_f instead of M̃ or M̆, one obtains that the only non-zero eigenvalues of M̃_f are given by the (n+1)th roots of D_n. § ROW REDUCTIONS The exponentially small rank of the matrices M̃ and M̆, discussed in Section <ref>, suggests that it might be possible to apply row reductions to reduce their dimension without affecting the non-zero eigenvalues. Just as Gaussian elimination reduces a matrix to upper (or lower) triangular form, so that the determinant (which would otherwise require summing n! terms) can be evaluated by a product of the diagonal elements, row reduction of M̃ or M̆ reduces the n! paths of the algebraic branching program to a single path by deleting vertices and reweighting edges. As shown below, row reductions in the fermionic model correspond precisely to the Gaussian elimination approach to evaluating the determinant. The bosonic version provides a roadmap for row reductions to evaluate the permanent, but doesn't appear to provide a speedup over the direct matrix multiplication method discussed in Sec. <ref>. As shown in Sec. <ref>, the blocks M̃_m of M̃^n+1 and M̆^n have dimension (n m) but are all unit rank, so that the ranks of M̃^n+1 and M̆^n are n+1 and n, respectively. In contrast, the matrices M̃ and M̆ are not block diagonal, and their eigenvectors are no longer given by M̃^m| 0⟩ and M̆^m| 0⟩, respectively. Consider M̆ for concreteness. While the n non-zero eigenvalues correspond to the nth roots of the permanent, the zero eigenvalues have multiplicity 2^n-1-n; thus, the kernel of M̆ is comprised of generalized zero eigenvectors of rank 1 up to n-1. 
The set of linearly independent vectors spanning these defective matrices must therefore be obtained sequentially. The standard procedure is to obtain the set of r_m generalized zero rank-m vectors |v_i^(m)⟩, 1≤ i≤ r_m, such that M̆^m|v_i^(m)⟩ =|𝒪⟩. The rank-nullity theorem ensures that n+∑_m=1^n-1r_m=2^n-1. In practice there is a more efficient iterative procedure to obtain the kernel. First generate the reduced row echelon form (also known as the pivot matrix) B_1 for M̆, via Gauss-Jordan elimination. For any rank-deficient matrix such as M̆, the deviation of B_1 from the identity is driven entirely by the r_1 rank-1 zero eigenvectors; thus the (2^n-1-r_1)× (2^n-1) matrix B_1 annihilates the null space: B_1|v_i^(1)⟩=0. One can then find an n×(2^n-1-r_1) matrix A_1 such that M̆=A_1B_1; its matrix elements coincide with those of M̆ but with r_1 columns removed whose indices correspond to the location of the first non-zero element of each |v_i^(1)⟩. The (2^n-1-r_1)×(2^n-1-r_1) matrix M̆^(2)≡ B_1A_1 therefore has the same eigenvalues as M̆ but now with r_1 fewer zeroes. The rank-2 generalized eigenvectors are the solutions of M̆^2|v_i^(2)⟩=A_1B_1A_1B_1|v_i^(2)⟩ =|0⟩, for 1≤ i≤ r_2, which can be rewritten as A_1M̆^(2)(B_1|v_i^(2)⟩)=0. At the same time, the zero eigenvectors of M̆^(2) are the solutions of M̆^(2)|ṽ_i^(2)⟩=0. Thus, with the identification |ṽ_i^(2)⟩≡ B_1|v_i^(2)⟩, Eq. (<ref>) is automatically satisfied by Eq. (<ref>), and solving the latter is more efficient than the former due to the smaller matrix dimension. It is straightforward to verify that the non-zero eigenstates of interest from Eq. (<ref>), |ϕ_n^(1)(k)⟩=e^-i2π k/n∑_j=0^n-1e^i2π jk/n(M̆/P_n^1/n)^j|0^⊗ n⟩, are transformed into |ϕ_n^(2)(k)⟩ = e^-i2π k/n∑_j=0^n-1e^i2π jk/n(M̆^(2)/P_n^1/n)^j|0^⊗ n⟩ = B_1|ϕ_n^(1)(k)⟩. The procedure is then repeated for M̆^(2)=A_2B_2. After n-1 iterations, the original rank-deficient (2^n-1)-dimensional matrix M̆ is reduced to a full-rank n-dimensional matrix M̆^(n-1) with eigenvectors ∏_i=1^n-1B_n-i|ϕ_n(k)⟩ and corresponding eigenvalues P_n^1/n. As shown below, this procedure is equivalent to Gaussian elimination of M for the evaluation of the determinant, and also provides an equivalent systematic approach to the evaluation of the permanent. §.§ Example: Three Fermions Consider row reductions of M̃_f, alt, Eq. (<ref>), for the specific case n=3, depicted in Fig. <ref>). The (unnormalized) rank-1 generalized zero eigenvectors can be written as |v_1^(1)⟩ = |001⟩+w_1,1/w_1,2|010⟩ +w_1,0/w_1,2|100⟩; |v_2^(1)⟩ = |011⟩-w_2,0/w_2,2|110⟩; |v_3^(1)⟩ = |101⟩+w_2,1/w_2,2|110⟩, so that one may eliminate vertices labeled by the bitstrings 001, 011, and 101. The matrix B_1 must satisfy B_1|v_i^(1)⟩ =0; a sufficient construction is B_1 = I-|v_1^(1)⟩⟨ 001|-|v_2^(1)⟩⟨ 011| -|v_3^(1)⟩⟨ 101| = [ 1 0 0 0 0 0 0 0 -w_1,1/w_1,2 1 0 0 0 0 0 -w_1,0/w_1,2 0 0 1 0 0 0 0 0 w_2,0/w_2,2 0 -w_2,1/w_2,2 1 ], where rows and columns indices are labeled by bitstrings with the least significant bit on the right. Here, B_1 is expressed in the somewhat unconventional lower-triangular reduced row echelon form. Likewise, A_1 = M̆_f, alt\{⟨ 001|, ⟨ 011|, ⟨ 101|} = [ 0 0 0 w_2,2 w_0,2 0 0 0 w_0,1 0 0 0 0 w_1,2 0 0 w_0,0 0 0 0 0 0 w_1,2 0 0 -w_1,0 w_1,1 0 ]. It is straightforward to verify that A_1B_1=M̆_f, alt. 
One then obtains M̃_f, alt^(2)=B_1A_1=[ 0 0 0 w_2,2 w_0,1' 0 0 0 w_0,0' 0 0 0 0 -w_1,0' w_1,1' 0 ], where w_0,0', w_0,1', w_1,0', and w_1,1' coincide with the reweighted terms in M derived from a first round of Gaussian elimination, defined in Eqs. (<ref>) and (<ref>). It is illuminating to view this first round as an operation on the graph representing the binary branching process, as depicted in Figs. <ref>(b) and (c). Vertices labeled by bitstrings 001, 011, and 101 are deleted, reducing the total number of branches from six to two. The contributions to the determinant of the branches through the deleted vertices are incorporated by reweighting remaining edges. For example, the weight -w_1,2w_2,1 of the path from |100⟩ to |000⟩ through vertex |101⟩ is incorporated into the new path weight w_1,1'w_2,2; similarly, the path from |010⟩ to |000⟩ through deleted vertex |011⟩ is incorporated in w_1,0'. Perhaps surprisingly, these edge reweightings can also compensate for both of the deleted paths from |001⟩ to |000⟩ through the two deleted vertices |011⟩ and |101⟩. Crucially, for fermions, the total path weights after the transformation are products of revised edge weights; as discussed in Sec. <ref>, cancellation of signed terms ensure that the total weights for the reduced branching process still coincide with the determinant. The remaining (unnormalized) rank-2 generalized zero eigenvector can now be efficiently written as |v^(2)⟩=|010⟩+w_1,0'/w_1,1'|100⟩, so that one may eliminate the vertex labeled by the bitstring 010. The matrix B_2 must satisfy B_2|v^(2)⟩=|𝒪⟩: B_2 = I-|v^(2)⟩⟨ 010| = [ 1 0 0 0 0 -w_1,0'/w_1,1' 1 0 0 0 0 1 ]. Likewise, A_2 = M̆_f, alt^(2)\⟨ 010| = [ 0 0 w_2,2 w_0,1' 0 0 w_0,0' 0 0 0 w_1,1' 0 ]. Again, it is straightforward to verify that A_2B_2 =M̆_f, alt^(2). One then obtains M̆_f, alt^(3)=B_2A_2=[ 0 0 w_2,2 w_0,0” 0 0 0 w_1,1' 0 ], where w_0,0” coincides with Eq. (<ref>). The eigenvalues of M̆^(3) are the cube roots of D_3=w_0,0”w_1,1'w_2,2. In this example, the second and final round of Gauss-Jordan elimination corresponds to deleting the vertex labeled by bitstring 010, as depicted in Fig. <ref>(d), yielding only one path from |000⟩ to |000⟩ and the rescaled weight w_0,0”. The product of the edge weights for this path, w_0,0”w_1,1'w_2,2 is precisely the product of diagonal terms of M in lower-triangular form, Eq. (<ref>). It is instructive to write the consequences of row reduction for the fermionic representation of the eigenstate, Eq. (<ref>), for the example considered above. After the first round, the state becomes M̃_f^3| 0⟩ = w_2,2f_2^†(w_1,0'f_0^†+w_1,1'f_1^†) × (w_0,0'f_0^†+w_0,1'f_1^†)|𝒪⟩, using the Pauli principle. After the second round, one obtains M̃_f^3| 0⟩ = w_2,2f_2^†w_1,1'f_1^† w_0,0”f_0^†|𝒪⟩ =D_3f_2^†f_1^†f_0^†|𝒪⟩. Thus, for fermions, no explicit expansion of the state (<ref>) is required; rather, the initial product of factors with three terms is reduced to a product of factors with two terms, and finally a product of single terms. The general strategy is the same for all n. This reduction of the evaluation of the determinant to a product of n terms is at the heart of its efficiency. §.§ Example: Three Bosons Consider next row reductions for the bosonic case, again using n=3 as an example to illustrate the procedure for general n. The procedure works in much the same way as for fermions, and is depicted in Fig. <ref>). The initial graph is equivalent to that for the original spin model, and is shown in Fig. <ref>). 
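Before working through the bosonic example, the fermionic reduction just described can be checked with a few lines of arithmetic. The sketch below is our own consistency check; the explicit elimination formulas w_i,j' = w_i,j - w_i,2w_2,j/w_2,2 and w_0,0'' = w_0,0' - w_0,1'w_1,0'/w_1,1' are our reading of the standard Gaussian-elimination reweightings referred to in the text. The product of the edge weights of the single surviving path reproduces the determinant for a random real 3 by 3 matrix.

import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((3, 3))      # W[i, j] plays the role of w_{i,j}

# first round: pivot on w_{2,2}
Wp = {(i, j): W[i, j] - W[i, 2] * W[2, j] / W[2, 2] for i in (0, 1) for j in (0, 1)}
# second round: pivot on w'_{1,1}
w00pp = Wp[0, 0] - Wp[0, 1] * Wp[1, 0] / Wp[1, 1]

print(w00pp * Wp[1, 1] * W[2, 2])    # weight of the single remaining path
print(np.linalg.det(W))              # agrees with D_3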
The (unnormalized) rank-1 generalized zero eigenvectors of M̆_b, Eq. (<ref>), can be written as |v_1^(1)⟩ = |011⟩-w_2,0/w_2,2|110⟩; |v_2^(1)⟩ = |101⟩-w_2,1/w_2,2|110⟩. Comparison with Eq. (<ref>), one notices the similarity with |v_2^(1)⟩ and |v_3^(1)⟩ in the fermionic case, and also that there is no rank-1 zero eigenvector involving h=1 states. The vertices labeled by the bitstrings 011 and 101 can be eliminated choosing the projector B_1 = I-|v_1^(1)⟩⟨ 011|-|v_2^(1)⟩⟨ 101| = [ 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 w_2,0/w_2,2 0 w_2,1/w_2,2 1 ], and A_1 = M̆_b\{⟨ 011|,⟨ 101|} = [ 0 0 0 0 w_2,2 w_0,2 0 0 0 0 w_0,1 0 0 0 0 0 w_1,1 w_1,2 0 0 w_0,0 0 0 0 0 0 w_1,0 0 w_1,2 0 0 0 w_1,0 w_1,1 0 ]. Again, it is straightforward to verify that A_1B_1=M̆. One then obtains M̃^(2)=B_1A_1=[ 0 0 0 0 w_2,2 w_0,2 0 0 0 0 w_0,1 0 0 0 0 w_0,0 0 0 0 0 0 x w_1,0' w_1,1' 0 ], where w_1,0', and w_1,1' coincide with the expressions in Eq. (<ref>) but with minus signs replaced with plus signs; and a new term is introduced, x=w_1,0w_2,1+w_1,1w_2,0/w_2,2. The first round of row reductions, shown in Fig. <ref>)(a), corresponds to deleting two vertices in the h=2 layer but none in the h=1 layer, in contrast with the fermionic case. Deleting vertices in only a single layer avoids generating paths with rescaled weights on two adjacent edges, which would yield unphysical cross terms in their product (c.f. the discussion in Sec. <ref>). But this comes at a high cost: vertices cannot be deleted in an adjacent layer simultaneously if they share edges with vertices in the first layer, as is possible in the fermionic case. This clearly decreases the efficiency of the reduction. Furthermore, deleting vertices in one layer requires adding new edges from the remaining vertex in that layer to all vertices in the next layer that had (now deleted) edges; in this case, the additional edge has weight x. The second and final round of row reductions in this example is governed by the rank-2 generalized zero eigenvectors: |v_1^(2)⟩ = |001⟩-x/w_1,1'|100⟩; |v_2^(2)⟩ = |010⟩-w_1,0'/w_1,1'|100⟩. As shown in Fig. <ref>)(b), the vertices labeled by the bitstrings 001 and 010 are eliminated: B_2 = I-|v_1^(2)⟩⟨ 001|-|v_2^(2)⟩⟨ 010| = [ 1 0 0 0 0 0 x/w_1,1' w_1,0'/w_1,1' 1 0 0 0 0 0 1 ], and A_2 = M̆^(2)\{⟨ 001|,⟨ 010|} = [ 0 0 w_2,2 w_0,2 0 0 w_0,1 0 0 w_0,0 0 0 0 w_1,1' 0 ]. One then obtains M̃^(3)=B_2A_2=[ 0 0 w_2,2 w_0,0' 0 0 0 w_1,1' 0 ], where w_0,0'=w_0,0+w_0,1w_1,0'+w_0,2x/w_1,1'. As is shown in Fig. <ref>(b), two vertices in the h=1 layer are now deleted, requiring a rescaling of the w_0,0 weight, and one obtains a single path from |000⟩ to |000⟩, as desired. The permanent is then P_3=w_0,0'w_1,1'w_2,2, which is expressed as a product of three single terms, much like the expression of the determinant in Eq. (<ref>). §.§ Example: Four Bosons Given the superficial similarities between row reductions for the bosonic and fermionic cases when n=3, it is instructive to discuss the n=4 case to gather a better understanding of why the permanent is nevertheless exponentially more difficult to compute with this method. 
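Before doing so, the n = 3 bosonic reduction just obtained admits the same kind of numerical check. In the sketch below (ours), the plus-sign reweightings w_1,j' = w_1,j + w_1,2w_2,j/w_2,2, the cross term x, and the final w_0,0' are our reading of the flattened formulas quoted above; with them, the weight of the single surviving path reproduces the permanent.

import numpy as np
from itertools import permutations

rng = np.random.default_rng(3)
W = rng.standard_normal((3, 3))                       # W[i, j] plays the role of w_{i,j}

w1p = {j: W[1, j] + W[1, 2] * W[2, j] / W[2, 2] for j in (0, 1)}
x = (W[1, 0] * W[2, 1] + W[1, 1] * W[2, 0]) / W[2, 2]
w00p = W[0, 0] + (W[0, 1] * w1p[0] + W[0, 2] * x) / w1p[1]

perm = sum(np.prod([W[i, p[i]] for i in range(3)]) for p in permutations(range(3)))
print(w00p * w1p[1] * W[2, 2])                        # weight of the single remaining path
print(perm)                                           # agrees with P_3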
The rank-1 generalized zero eigenvectors are |v_1^(1)⟩ = |0011⟩-w_2,1/w_2,3|0110⟩ -w_2,0/w_2,2|1001⟩ +w_2,0w_2,1/w_2,2w_2,3 = (|01⟩-w_2,0/w_2,2|10⟩)_0,2(|01⟩-w_2,1/w_2,3|10⟩)_1,3; |v_2^(1)⟩ = |0101⟩-w_2,2/w_2,3|0110⟩ -w_2,0/w_2,1|1001⟩ +w_2,0w_2,2/w_2,1w_2,3; = (|01⟩-w_2,0/w_2,1|10⟩)_0,1(|01⟩-w_2,2/w_2,3|10⟩)_2,3; |v_3^(1)⟩ = |0111⟩-w_3,0/w_3,3|1110⟩ = (|01⟩-w_3,0/w_3,3|10⟩)_0,3 |11⟩_1,2; |v_4^(1)⟩ = |1011⟩-w_3,1/w_3,3|1110⟩ = (|01⟩-w_3,1/w_3,3|10⟩)_1,3 |11⟩_0,2; |v_5^(1)⟩ = |1101⟩-w_3,2/w_3,3|1110⟩ = (|01⟩-w_3,2/w_3,3|10⟩)_2,3 |11⟩_0,1. The eigenvectors can all be written in explicitly separable forms, where the indices outside the parentheses denotes the label partitions; note that these also match the second indices in the weight ratios. Evidently the the generalized zero eigenvectors for n=3, Eqs. (<ref>) and (<ref>), can be written in a similar product form. This is due to the fact that the Hamiltonian (<ref>) itself can be written as a sum of permutations of separable terms. For example, the terms in the n=4 Hamiltonian that map h=2 states to h=3 states can be expressed as M̆_(h=2→ 3) = 1/2[()_0,1I_2,3 +I_0,1()_2,3+()_0,2I_1,3+I_0,2()_1,3 + ()_0,3I_1,2+I_0,3()_1,2], where ()_i,j = |11⟩_i,j(w_2,i⟨ 01|+w_2,j⟨ 10| )_i,j, I_i,j = (|01⟩⟨ 01|+|10⟩⟨ 10|)_i,j. The `identity' operator is the sum of all idempotents with h=1, enumerated by the 1/2 prefactor in Eq. (<ref>). Thus, M̆_(h=2→ 3) has the form of Cartesian products of operators over all four-site bipartitions restricted to states with specific Hamming weight. It is straightforward to verify that the |v_1^(1)⟩ and |v_2^(1)⟩ in Eq. (<ref>) are zero eigenvectors of the first and second Cartesian product, respectively, and have no support on the third. Similar expressions can be obtained for the other terms in the Hamiltonian. Construction of A_1 and B_1 proceeds analogously to the n=3 case, and generates M̆^(2)=B_1A_1 with basis states (graph vertices) {|0011⟩,|0101⟩,|0111⟩,|1011⟩,|1101⟩} removed. However, all but one of the 23 non-zero terms in the resulting matrix is unique. From the graph perspective, only 5 edges are reweighted, 9 edges have unchanged weights, and 9 new edges with unique weights must be added. Generically, in the bosonic case, the number of unique terms that need to be evaluated during the row reduction procedure grows exponentially with n. There doesn't appear to be any way to exploit the separable nature of the generalized zero eigenvectors to simplify the calculation. § DICUSSION: PROSPECTS FOR A QUANTUM ALGORITHM As discussed in Sec. <ref>, the permanent of an n× n matrix M relates to the eigenvalues of another 2^n-dimensional matrix M̃ or M̆ (the dimension of the latter is one smaller so that one basis state is unused). This matrix has several attributes that would appear to favor the development of an efficient quantum algorithm for the evaluation of the permanent: the dimension of M̃ is a power of two, which would be the case for an n-qubit operator; matrix elements of M̃ are easily indexed by address, which correspond to their original positions in M; M̃ is n-sparse (no row or column has more than n-1 elements); and the permanent is the maximal eigenvalue of M̃^n. Despite these nice features, however, the construction of an efficient algorithm for the permanent using this approach is not straightforward for one principal reason: neither M̃ nor M̆ is Hermitian or unitary. As a first attempt at a quantum algorithm, one might leverage the relation P_n=⟨0|M̆^n|0|$⟩, Eq. (<ref>). 
The quantity on the right-hand side can be computed using any of the known algorithms for evaluating expectation values <cit.>. Unfortunately, such algorithms haveO(1/ϵ)or worse dependence on additive error <cit.>, and thus an even worse dependence on the multiplicative error. Moreover, the operator norm ofM̆^nis not polynomially bounded in general. Consequently, this approach fails to suggest an avenue toward an efficient quantum algorithm. A more lucrative approach might be to make use of the fact that all non-zero eigenvalues ofM̆have absolute value|P_n|^1/n, Eq. (<ref>). Thus, if there exists an efficient procedure to generate one of the corresponding eigenstates, then|P_n|^1/ncan be computed efficiently to constant or polynomially small additive error. Note that an additive approximation of|P_n|^1/nprovides significantly more resolution, at least for the unitary matricesMthat would be relevant to boson sampling, in contrast to an additive approximation of|P_n|. As noted by Aaronson and Arkhipov <cit.>, since|P_n|is typically exponentially small for unitary matrices sampled from a Haar random distribution, an additive approximation of|P_n|to polynomial accuracy would almost always return zero. On the other hand, the average of|P_n|^1/nis inΩ(1)for unitary matrices sampled from a Haar random distribution, and therefore an approximation of|P_n|^1/nto polynomially small or even constant additive error provides more resolution. Unfortunately, generating any eigenstates ofM̆, Eq. (<ref>), is not straightforward. The coefficients in the linear combination depend on the value ofP_n, which is unknown and in fact the goal of the computation. Even if one could obtain a sufficiently good approximation ofP_n(via randomized classical algorithms), taking appropriate linear combinations using the techniques developed in Ref. <cit.> would only generate the targeted eigenstate with exponentially small probability, limiting the runtime of the algorithm. Specifically, it is not obvious how to adapt block-encoding techniques <cit.> to generate, with high probability, the stateM̆^n-1|0⟩/M̆^n-1|0⟩, which is one of the terms in the desired linear combination. Consider instead leveraging the useful property that the state|0⟩is an equal superposition of all eigenstates corresponding to non-zero eigenvalues ofM̆. IfM̆were unitary, this fact would have been sufficient for computing all eigenvalues ofM̆efficiently using repeated application of the phase estimation algorithm <cit.>. Unfortunately, the extension of phase estimation to non-unitary operators is generally inefficient <cit.>. For the present problem, we expect phase estimation to take an exponentially long time, as the eigenvalues being estimated lie well inside the unit circle. A more sophisticated approach to computing the eigenvalues ofM̆is based on quantum linear-system solvers <cit.>. The complexity of this approach is limited by the condition number of the eigenvectors, however. We have verified numerically that the condition number is exponentially large for typical real and complex matricesM. To summarize, mapping the problem of computing the permanent ofMto calculating the eigenvalues ofM̆would seem to suggest new routes for designing an efficient quantum algorithm to obtain a multiplicative approximation of the permanent. Yet, such an algorithm does not follow from the immediate application of the currently available algorithmic tools for linear algebra in a quantum setting. 
In all likelihood, if such an algorithm exists, it would rely on more subtle properties of the permanent than are made apparent by the present mapping. § BLOCK-DIAGONAL REPRESENTATION This section derives the expression for M̃^n+1, where M̃ is defined by Eq. (<ref>). Consider first M̃^2: M̃^2 = (∑_ i_0∑_j_0=0^n-1w_h( i_0),j_0σ^+_j_0| i_0⟩⟨ i_0|+| 0⟩⟨ 1| )(∑_ i_1∑_j_1=0^n-1w_h( i_1),j_1σ^+_j_1| i_1⟩⟨ i_1|+| 0⟩⟨ 1| ) = ∑_ i_0, i_1∑_j_0,j_1w_h( i_0),j_0σ^+_j_0 | i_0⟩⟨ i_0|w_h( i_1),j_1σ^+_j_1| i_1⟩⟨ i_1| +∑_ i_1∑_j_1| 0⟩⟨ 1|w_h( i_1),j_1σ^+_j_1| i_1⟩⟨ i_1| +∑_ i_0∑_j_0w_h( i_0),j_0σ^+_j_0 | i_0⟩⟨ i_0| 0⟩⟨ 1|. For the evaluation of the ⟨ i_0|w_h( i_1),j_1σ^+_j_1| i_1⟩ factor in first term of Eq. (<ref>), there are three possibilities: σ^+_j_1| i_1⟩=0, σ^-_j_1| i_0 ⟩=0, or σ^+_j_1| i_1⟩=| i_0⟩ (equivalently | i_1⟩=σ^-_j_1| i_0⟩ or ⟨ i_1|=⟨ i_0|σ^+_j_1), and the first two possibilities contribute nothing to the sum. Similar arguments apply to the second and third terms, and one obtains M̃^2 = ∑_ i∑_j_0,j_1w_h( i),j_0w_h( i)-1,j_1σ^+_j_0| i⟩⟨ i|σ_j_1^+ +∑_j_0(w_n-1,j_0| 0⟩⟨ 1|σ_j_1^+ +w_0,j_0σ^+_j_0| 0⟩⟨ 1|). Next consider M̃^3. After elementary algebra along the same lines as above, one obtains M̃^3 = ∑_ i∑_j_0,j_1,j_2(w_h( i),j_0σ^+_j_0)| i⟩⟨ i| (w_h( i)-1,j_1σ_j_1^+) (w_h( i)-2,j_2σ_j_2^+) +∑_j_0,j_1(w_0,j_0σ^+_j_0)( w_1,j_1σ^+_j_1)| 0⟩⟨ 1| + ∑_j_n-1,j_n-2| 0⟩⟨ 1| (w_n-1,j_n-1σ_j_n-1^+)(w_n-2,j_n-2σ_j_n-2^+) +∑_j_0,j_n-1(w_0,j_0σ^+_j_0)| 0⟩⟨ 1|(w_n-1,j_n-1σ_j_n-1^+). The form of leading term in M̃^n+1 should now be evident: ∑_ i∑_j_0,…,j_n(w_h( i),j_0σ^+_j_0)| i⟩⟨ i|(w_h( i)-1,j_1σ_j_1^+)⋯(w_h( i)-n,j_nσ_j_n^+). In the above expression, the ⟨ i|∏_kσ_j_k^+ term is zero unless i= 1, but then σ^+_j_0| i⟩=0, so that the leading term vanishes. The remaining terms are straightforward generalizations of those found in Eqs. (<ref>) and (<ref>), and one obtains M̃^n+1 = ∑_j_0,…,j_n-1[ (w_0,j_0σ^+_j_0)⋯(w_n-1,j_n-1σ^+_j_n-1)| 0⟩⟨ 1| +(w_0,j_0σ^+_j_0)⋯(w_n-2,j_n-2σ^+_j_n-2)| 0⟩⟨ 1| (w_n-1,j_n-1σ^+_j_n-1). + …+.(w_0,j_0σ^+_j_0)| 0⟩⟨ 1|(w_1,j_1σ^+_j_1)⋯(w_n-1,j_n-1σ^+_j_n-1)+| 0⟩⟨ 1|(w_0,j_0σ^+_j_0)⋯(w_n-1,j_n-1σ^+_j_n-1) ]. A corollary is that the expression for arbitrary powers p is M̃^p = ∑_ i∑_j_0,…,j_p-1(w_h( i),j_0σ^+_j_0)| i⟩⟨ i| (w_h( i)-1,j_1σ_j_1^+)⋯(w_h( i)-p+1,j_p-1σ_j_p-1^+) + ∑_j_0,…,j_p-2[ (w_0,j_0σ^+_j_0)⋯(w_p-2,j_p-2σ^+_j_p-1)| 0⟩⟨ 1| +(w_0,j_0σ^+_j_0)⋯(w_p-3,j_p-3σ^+_j_n-2)| 0⟩⟨ 1| (w_p-2,j_p-2σ^+_j_p-2). + …+. | 0⟩⟨ 1|(w_n-1,j_0σ^+_j_0)⋯(w_n-p+1,j_p-2σ^+_j_p-2) ], which can be used to prove Eq. (<ref>), i.e. that M̃_m=M̃^m| 0⟩⟨ 0|M̃^n-m+1. First, M̃^p| 0⟩=∑_ i∑_j_0,…,j_p-1(w_h( i),j_0σ^+_j_0)| i⟩⟨ i| (w_h( i)-1,j_1σ_j_1^+)⋯(w_h( i)-p+1,j_p-1σ_j_p-1^+)| 0⟩. Only bitstrings i with Hamming weight p-1 will contribute, so M̃^p| 0⟩=∑_j_0,…,j_p-1(w_0,j_0σ^+_j_0) (w_1,j_1σ_j_1^+)⋯(w_p-1,j_p-1σ_j_p-1^+)| 0⟩. Second, following similar reasoning, ⟨ 0|M̃^q = ∑_ i∑_j_0,…,j_q-1⟨ 0|w_h( i),j_0σ^+_j_0| i⟩⟨ i| (w_h( i)-1,j_1σ_j_1^+)⋯(w_h( i)-q+1,j_q-1σ_j_q-1^+) + ∑_j_0,…,j_q-2⟨ 1|(w_n-1,j_0σ^+_j_0)⋯(w_n-q+1,j_q-2σ^+_j_q-2) = ∑_j_0,…,j_q-2⟨ 1|(w_n-1,j_0σ^+_j_0)⋯(w_n-q+1,j_q-2σ^+_j_q-2). Putting these results together: M̃^m| 0⟩⟨ 0|M̃^n-m+1 = ∑_j_0,…,j_m-1 k_0,…,k_n-m-1(w_0,j_0σ^+_j_0) (w_1,j_1σ_j_1^+)⋯(w_m-1,j_m-1σ_j_m-1^+)| 0⟩ ×⟨ 1|(w_n-1,k_0σ^+_k_0) (w_n-2,k_1σ^+_k_1)⋯(w_m,k_n-m-1σ^+_k_n-m-1) = ∑_j_0,…,j_n-1(w_0,j_0σ^+_j_0) (w_1,j_1σ_j_1^+)⋯(w_m-1,j_m-1σ_j_m-1^+)| 0⟩ ×⟨ 1|(w_n-1,j_mσ^+_j_m) (w_n-2,j_m+1σ^+_j_m+1)⋯(w_m,j_n-1σ^+_j_n-1). Comparison with the terms in Eq. 
(<ref>) immediately yields M̃_m=M̃^m| 0⟩⟨ 0|M̃^n-m+1.
http://arxiv.org/abs/2307.06139v2
20230709045849
Constructing Maximal Extensions of the Vaidya Metric in Israel Coordinates: I. Integration of the Field Equations
[ "Sheref Nasereldin", "Kayll Lake" ]
gr-qc
[ "gr-qc" ]
APS/123-QED [email protected] [email protected] Department of Physics, Queen's University, Kingston, Ontario, Canada, K7L3N6 This paper explores a complete representation of the Vaidya model, a radial flux of radiation in the eikonal approximation, used for modeling various phenomena in both classical and semi-classical General Relativity and Astrophysics. The majority of the applications of the Vaidya model have been formulated in an incomplete representation. A complete representation is obtained here by direct integration of the Einstein field equations. We present the methodology to obtain this complete representation, and its utility in the modeling of general relativistic phenomena. Constructing Maximal Extensions of the Vaidya Metric in Israel Coordinates: I. Integration of the Field Equations Kayll Lake August 12, 2023 =================================================================================================================== § INTRODUCTION The Schwarzschild metric <cit.> has been used to study the exterior geometry of spherical stellar objects undergoing gravitational collapse <cit.>, where it is assumed that the radiation emitted by the object is insignificant. However, during the advanced stages of stellar collapse, these objects are expected to emit a considerable amount of mass in the form of radiation, see for example <cit.>. Therefore, the exterior of a collapsing stellar object is no longer empty, and the Schwarzschild vacuum metric is no longer suitable for its description. The Vaidya metric <cit.> is more suitable for this situation and has been widely used to classically study the geometry outside [With suitable boundary conditions, such as Israel's conditions, see <cit.>, on the spherical surface, this exterior solution can be matched to some proper interior solution, see for example <cit.> and <cit.>.] radiating spherical stellar objects, see for example <cit.>. Thus, one can treat this dynamical mass distribution with its envelop of radiation as an isolated system existing in otherwise vacuum, asymptotically flat spacetime that is described by the Schwarzschild vacuum metric. The “self-similar" Vaidya metric has been used to construct spacetimes that exhibit a visible strong singularity, demonstrating the potential for the failure of the Penrose “Cosmic censorship hypothesis" <cit.>. This conjecture states that singularities arising from regular initial conditions do not have any causal influence on spacetime. If the hypothesis were to fail, it would be a major flaw in the theory of general relativity and would make it impossible to predict the events in any region of spacetime containing a singularity, as new information could emerge in an unpredictable manner. The growth of curvature along non-spacelike geodesics has been examined (see for example, <cit.>), and the visible singularity in self-similar spacetimes has been classified as strong. Furthermore, Lake and Zannias <cit.> showed that the emergence of naked singularities in these spacetimes is due to the self-similarity assumption, rather than spherical symmetry. On the semi-classical level, the Vaidya metric has been utilized to explore black hole evaporation, possibly due to Hawking's radiation <cit.>, (see for example <cit.>). 
Furthermore, the Vaidya metric in the double-null coordinates (the mass function must be linear) <cit.> has been used to study the quasi-normal modes (QNM) as a model that supposedly will give deeper insights on the gravitational excitations of black holes (see for example <cit.>). Despite the fact that the majority of applications were structured with the Vaidya metric written in the Eddington-Finkelstein-Like (EFL) coordinates, these coordinates have been known for some time to be incomplete (see for example <cit.>), leaving the Vaidya manifold not maximally covered. Thus, to ensure the accuracy of all applications, it is required to construct a complete set of coordinates and thoroughly assess the impact of this set of coordinates. This is the primary objective of this paper. We organize this paper as follows. In the next section, we review the EFL coordinates and provide a proof of incompleteness of this set of coordinates, which is the main motivation for any subsequent coordinate representation. In Section <ref>, we review the use of Israel coordinates <cit.> to write the Vaidya metric <cit.>, and discuss why the derivation of these coordinates resulted in unsatisfactory results when attempting to obtain maximal coverings of the Vaidya manifold. The main results of this paper are outlined in Section <ref>, in which we introduce an algorithmic method to obtain Israel coordinates by direct integration of the field equations, without relying on any coordinate transformation. In Section <ref>, we present necessary physical restrictions that must be imposed on the flux of radiation. In Section <ref>, we provide a general derivation regarding the location of the apparent horizon in the Vaidya manifold. It is emphasized that the location of the apparent horizon is established before introducing any expressions to the characterizing functions. In Section <ref>, we demonstrate that our construction can be used to obtain both EFL and Israel coordinates by choosing different expressions for the functions that arise from integrating the field equations; such functions, as well as the coefficient of the cross term in the general metric that is presented, shall be referred to as the “characterizing functions". In Section <ref>, we briefly calculate some of the invariants of the Vaidya metric in Israel coordinates. The last section highlights the main results of the paper and discusses the possible extensions of the current work. § THE EFL COORDINATES The Vaidya metric, in the EFL coordinates, is a spherically symmetric solution to the Einstein field equations with the energy momentum tensor approximated in “the eikonal form" <cit.>, which expresses a unidirectional radial flow of unpolarized radiation, T_αβ = Φ k_αk_β= ϵ/4π r^2dm(u)/duk_αk_β, where ϵ = ± 1 and k_α = δ^u_α is tangent to radial inward or outward-going null geodesics. The spacetime line element in the EFL coordinates takes the form ds^2 = -(1-2m(u)/r)du^2+2ϵ dudr+r^2dΩ^2_2, where dΩ^2_2 = dθ^2+sin^2θ dϕ^2 is the metric of a unit 2-sphere. For ϵ = +1, the metric expresses inward-directed radiation (towards smaller values of the radius r) with a monotonically increasing m as a function of the “advanced time" coordinate u. If ϵ = -1, the metric is that of outgoing radiation (towards larger values of the radius r) with m being monotonically decreasing as a function of the “retarded time" coordinate u. However, it is conventional, as stated in <cit.>, to assign u as the retarded time and v as the advanced time. 
Furthermore, it is worthwhile to note that the quantity Φ, usually called as the energy density of the radiation flux, does not have a direct operational meaning because the tangent null vector k_α does not have a natural normalization. Thus, it is preferable, see also <cit.>, to consider the following quantity: ρ = Φ (k_αu^α)^2, which defines the energy density as measured locally by an observer with a timelike 4-velocity u^α. §.§ Incompleteness of the EFL Coordinates In this subsection, we demonstrate why the EFL coordinates (u,r,θ,ϕ) do not provide a complete description of the Vaidya manifold. The incompleteness of these coordinates is the primary motivation for the search for new coordinates in which the manifold is complete, allowing radial null geodesics to continue moving to infinite values of their affine parameter or be terminated upon encountering a gravitational singularity. The incompleteness of the coordinates (u,r,θ,ϕ) becomes evident when studying the behavior of the ingoing radial null geodesics, emanating from the past null infinity ^- or from the past singularity surface r=0, for the case (0<m(∞)<∞). It was suggested, but not proven in <cit.>, that the geodesics appear to approach the future even horizon (FEH) surface, r=2m(∞), as u →∞, though they actually reach it for finite values of their affine parameter, see Fig. <ref>. To support these insightful claims, we present a more articulated proof. We draw attention to the fact that, whereas Fig. <ref> is only valid for outgoing radiation, the forthcoming proof is valid for both ingoing and outgoing radiation. Let us consider the two branches of radial null curves, for which ds^2=0 and θ = ϕ = const. The first branch is given by u=const (red), and the second branch (blue) is given by the solution of the following ordinary differential equation [This differential equation is a special case of Chini's equation <cit.>, which does not have a general solution.], du/dr =2 ϵ r/r-2m(u). We assume the following to hold 0 < m(±∞)< ∞, the question now arises as to whether the affine parameter λ remains finite as r → 2m(±∞) along the second branch. In order to answer this question we write the second branch (<ref>) as a system of 1^st order ODEs ṙ = r-2m(u)/λ, u̇ = 2ϵ r/λ, where an overdot indicates d/dλ, so that differentiation of the previous system with respect to λ produces the geodesic equations of (<ref>) r̈ = - 4 ϵ m^'(u)r/λ^2, ü = - 4ϵ m(u)/λ^2, where use has been made of both (<ref>) and (<ref>). Now let us assume that λ→±∞ as r → 2m(±∞) then by virtue of (<ref>) and (<ref>) we obtain lim_λ→±∞u̇= lim_λ→±∞ü = 0, which is not possible as this changes the second geodesic branch into the first [Note that the first branch is characterized by u=const, which entails u̇ = ü = 0.]. Therefore, our assumption is wrong, and we conclude that λ along the second branch remains finite as r → 2m(±∞). If we write this value of λ as λ_0, we obtain lim_λ→λ_0ṙ = 0, and lim_λ→λ_0u̇ = 4ϵ m(±∞)/λ_0. Evidently, the last equation remains finite because the mass function m(±∞) is assumed finite from the beginning. By virtue of (<ref>), we conclude that the region (r<2m(±∞)) is inaccessible in the EFL coordinates. Therefore, an extension is necessary. § ISRAEL COORDINATES In order to overcome the “incompleteness problem" of the EFL coordinates, Israel <cit.> introduced what he described as the analytic completion of the Vaidya manifold (<ref>). 
In Israel coordinates (u,w,θ,ϕ), the Vaidya line element reads ds^2 = (w^2/2m(u)r(u,w)+4m^'(u)/U(u)) du^2+2dudw+r(u,w)^2dΩ^2_2, where U(u) = ∫_0^udu/4m(u), r(u,w) = U(u)w+2m(u), and the function m(u) is always positive. Notice that (<ref>) suffers a true singularity at r(u,w) = 0, see (<ref>), and at u=0, if m'(u) does not vanish there, as explained below. To avoid any possible confusion about what is to be said, let us label the EFL retarded coordinate, u, as t. This then shows that (<ref>) is reduced to the outgoing Vaidya metric, (<ref>) with u=t and ϵ=-1, by the transformation t(u) = -∫_0^udu/U(u), regular for (u>0, t<∞). Apart from the cumbersome nature of Israel coordinates, the Vaidya metric in Israel coordinates (<ref>) does not adequately represent both the internal and external fields as long as the mass function m(u) is only defined for u ≥ 0. Since u=0 corresponds to t=+∞ (t(u)∝ -log U(u)), it is impossible to extend the line element to the range (u<0) via a coordinate transformation, as it would require knowledge of the mass function m(t>∞), i.e., beyond FEH. Hence, we believe that the “maximal" extension of the Vaidya manifold, as given by the line element (<ref>), is imprecise. It is worth noting that there was an attempt <cit.> to extend the Vaidya metric in terms of Israel coordinates. However, this approach faced the same problem as the original Israel extension of relying on coordinate transformations and the necessity of knowing the mass function m(u) beyond the FEH in advance. It is also worthy of notice that although Israel coordinates have obvious advantages over the EFL coordinates, the Vaidya metric in Israel coordinates has not gained enough attention. To our knowledge, the metric has only been used once (see <cit.>) to study the complete gravitational collapse of a radiating shell of matter. Prior to the attempt given in <cit.>, all the work done to investigate the gravitational collapse in the presence of radiation was not complete. That is, the gravitational collapse was not followed beyond the event horizon because the Vaidya manifold in the EFL coordinates only describes the external field around a collapsing radiating object. § GENERAL COORDINATE CONSTRUCTION Consider the following general spherically symmetric metric expressed in the coordinates (u,w,θ,ϕ) <cit.> ds^2 = f(u,w) du^2+2h(u,w) du dw + r(u,w)^2dΩ^2_2, where r(u,w) measures the area of the 2-sphere u=w=const. The energy momentum tensor is once more taken to be of the eikonal form, T^αβ = Φ k^αk^β, where k^α = δ^α_w is a radial null vector and the quantity Φ(k^αu_α)^2 is the energy flux, measured by an observer with tangent u_α. Straightforward calculations <cit.> show that the only non-zero component of the Einstein tensor is G^w w from which Φ can be directly obtained. If we take radial null trajectories with four-tangent k^α to be radial null geodesics affinely parametrized by w, i.e., k^β∇_βk^α = 0, this yields ∂ h(u,w)/∂ w = 0. Thus, the function h(u,w) reduces to a function of only u, h(u,w)≡ h(u). While we will limit ourselves to the choice h(u) = ±1, we will keep the function as is for potential future use. §.§ Solving the Einstein Field Equations First [This approach of solving the field equations was first introduced in <cit.> to express the Schwarzschild-de Sitter vacuum metric in Israel coordinates, and was later utilized in <cit.> to obtain the Vaidya metric in the same set of coordinates.], we benefit from the vanishing of the G^uu component to obtain ∂ ^2/∂ w^2 r(u,w)= 0. 
This leads, by integration, for a general expression [We also note that this expression can be deduced by assuming that (<ref>) has a vanishing second Ricci invariant <cit.>. This result is particularly important because it is directly obtained from the geometry of the spacetime before considering the matter content.], to r(u,w) r(u,w) = f_1(u)w+f_2(u). In the sequel all the functions f_n (u) are assumed suitably smooth [ All the functions are assumed to be at least C^2.]. Second, by solving G^θθ = 0, with the aid of (<ref>), we obtain r(u,w)∂ ^2/∂ w^2 f(u,w) + 2f_1(u)∂/∂ wf(u,w) - 4h(u)d /duf_1(u) =0. Integrating (<ref>) yields f(u,w)= 2 f_1^'(u) h(u) f_2(u)^2-f_1(u)f_3(u)/f_1(u)^2r(u,w) +2 f_1^'(u) h(u)w/f_1(u)+f_4(u), where (') denotes ordinary differentiation with respect to the coordinate u. By solving G^uw = 0, we find that f_4(u) is given by f_4(u) = h(u)(2f_1(u)f_2^'(u)-h(u))/f_1(u)^2, where use has been made of (<ref>) and (<ref>). By virtue of (<ref>), (<ref>), and (<ref>) the only non-zero component of the Einstein tensor can be given as G^ww = 1/χ(u)(2h(u)^2f_2(u)^2f_1^”(u)+4h(u)^2f_2(u)f_1^'(u)f_2^'(u) -h(u)f_3(u)f_1^'(u)-2h(u)f_2(u)^2 h^'(u)f_1^'(u) -h(u) f_1(u)f_3^'(u)+2f_1(u)f_3(u)h^'(u) ), where χ(u,w)=h(u)^4f_1(u)r(u,w)^2. The G^ww is conveniently expressed in the following way. First define the Hernandez-Misner mass <cit.> m ≡r(u,w)^3/2 R_θϕ^ θϕ, where R is the Riemann tensor. By calculating R_θϕ^ θϕ for (<ref>) and making the necessary simplifications, (<ref>) can be given in terms of the characterizing functions f_n(u) as m = m(u) = 2h(u)f_2(u)^2f_1^'(u)-f_1(u)f_3(u)/2h(u)^2, where the mass function must always remain positive-valued over its domain. As a result, G^ww can be expressed in a more succinct form, G^ww = 2 m^'(u)/h(u)f_1(u)r(u,w)^2 = 8 πΦ. Similarly, a more convenient expression of the function f(u,w) can be obtained with the aid of (<ref>), (<ref>), (<ref>), and (<ref>) f(u,w) = 𝒜(u) r(u,w)^2 +ℬ(u) r(u,w)+𝒞(u)/f_1(u)^2r(u,w), where 𝒜(u) = 2h(u)f_1^'(u), ℬ(u) = 2h(u)f_1(u)f_2^'(u)-2h(u)f_2(u)f_1^'(u)-h(u)^2, 𝒞(u) = 2h(u)^2m(u). § PHYSICAL RESTRICTIONS ON THE CHOICE OF THE CHARACTERIZING FUNCTIONS The first restriction that we impose, using (<ref>), is given by the following inequality 2h(u)f_2(u)^2f_1^'(u)>f_1(u)f_3(u). This is necessary to ensure that the mass function, m(u), is always positive. The second restriction is that the measured radiation flux is a positive quantity, Φ (k^αu_α)^2> 0. Substituting (<ref>) in (<ref>) and simplifying, we obtain m^'(u)/h(u)f_1(u)>0, which dictates that the signs of m^'(u) and h(u)f_1(u) have to be identical. As our attention is confined to classical matter fields (radiation), a minimum requirement is that this matter distribution must satisfy the Weak Energy Condition (WEC). This requirement implies, with the aid of (<ref>), the following stipulations on the different forms of radiation, summarized in Table <ref>. Table. <ref> clearly illustrates that both ingoing and outgoing radiation can be obtained without changing the sign of the function h(u). However, as will be seen shortly, the direction of radiation in the EFL coordinates is dictated by the sign of the function h(u). § THE APPARENT HORIZON AND THE EVENT HORIZON We begin this section by providing a general derivation to the location of the apparent horizon of (<ref>). 
To this end, let us examine the congruence of radial null trajectories characterized by the four-tangent ℓ^α, ℓ^α = δ^α_u-f(u,w)/2h(u)δ^α_w, However, it does not satisfy the geodesic equation in the affine-parameter form. This is evident from the equations ℓ^α∇_αℓ^u = κℓ^u and ℓ^α∇_αℓ^w = κℓ^w, where κ = κ (u,w) and it is called the inaffinity. The geodesics equations are: ℓ^α∇_αℓ^u = (2d/d uh(u)-∂/∂ wf(u,w)/2h(u))(1) = κℓ^u, and ℓ^α∇_αℓ^w = (2d/d uh(u)-∂/∂ wf(u,w)/2h(u))(-f(u,w)/2h(u)) = κℓ^w, with the inaffinity κ given by κ = 2d/duh(u)-∂/∂ wf(u,w)/2h(u). The associated expansion scalar Θ^(ℓ) of this non affinley parametrized congruence of radial null geodesics, see <cit.> for the definition of the expansion in this case, is given by Θ^(ℓ) = ∇_αℓ^α-κ, = -r(u,w) ∂/∂ wf (u,w)-2 r(u,w) d/d uh (u)/2 h (u) r(u,w) - 2 f (u,w) ∂/∂ wr (u,w)-4 h (u) ∂/∂ ur (u,w)/2 h (u) r(u,w)-κ, = - 1/h(u)r(u,w)( f(u,w) ∂/∂ wr(u,w)-2h(u)∂/∂ ur(u,w)). The apparent horizon is characterized by Θ^(ℓ) = 0, and thus by virtue of (<ref>) we obtain the following condition 2h(u)∂ r(u,w)/∂ u = f(u,w) ∂ r(u,w)/∂ w. We substitute (<ref>) in (<ref>), which yields 2h(u) ( f_1^'(u)w+f_2^'(u)) = f(u,w)f_1(u). With the aid of (<ref>) the previous equation takes the form 0 = 2f_1^'(u)r(u,w)^2+2h(u)m(u) -( 2w f_1(u)f_1^'(u)+2f_2(u)f_1^'(u)+h(u))r(u,w). We can use (<ref>) once more to reduce the last equation to -h(u)( r(u,w)-2m(u) ) = 0, which immediately gives the sought-after result: r(u,w) = 2m(u). It is thus established that the apparent horizon is located at r=2m(u). We also note that the previous result is established before making any choices for the characterizing functions, f_n(u). Determining the location of the event horizon in the Vaidya metric is not as straightforward as locating the apparent horizon. In fact, the entire future history of the metric, as specified by the functions f(u,w) and h(u), must be predetermined in order to identify the null generators of the event horizon <cit.>. However, we may generically define the future (past) event horizon as a causal boundary for the timelike geodesics terminating at future (past) timelike infinity, i^+(i^-) [For the definitions of these infinities we refer to <cit.>.]. § SPECIFIC COORDINATE REPRESENTATIONS OF THE VAIDYA METRIC In this section, we demonstrate that we can obtain various coordinate representations of the Vaidya metric by selecting different expressions for the characterizing functions, h(u) and f_n(u). Additionally, we emphasize that the meaning of the coordinate u is dependent on the choice of the characterizing functions, and thus the coordinate u in the EFL coordinates has a different interpretation to that in Israel coordinates. §.§ The Vaidya Metric in the EFL Coordinates Let us choose the characterizing functions such that h(u) = ± 1, f_1(u) = 1, and f_2(u) = 0, then we obtain w = r with the help of (<ref>). Furthermore, we get f_3(u) = -2m(u) from (<ref>). Substituting these values in (<ref>) yields f(u,r) = -r+2m(u)/r, and thus the metric (<ref>) becomes ds^2 = -(1-2m(u)/r)du^2± 2dudr+r^2dΩ_2^2, with G^ww = ± 2m^'(u)/r^2. It is clear that, with the help of Table <ref>, we can obtain h(u) = -1 for the outgoing radiation version of the Vaidya metric, where the coordinate u is a retarded time. Similarly, selecting h(u) = +1 yields the ingoing radiation version of the Vaidya metric, with u as an advanced time. 
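This specialisation is easy to confirm with a computer-algebra system. The short sketch below is added for illustration; it assumes the general expressions for the mass function and for 𝒜(u), ℬ(u), 𝒞(u) and r(u,w) quoted above, substitutes h = ±1, f_1 = 1, f_2 = 0 and f_3 = -2m(u), and recovers the EFL form f = -(1 - 2m/r) together with the consistency of the mass relation.

import sympy as sp

u, w, h = sp.symbols('u w h')
m = sp.Function('m')(u)

# EFL choices for the characterizing functions
f1, f2, f3 = sp.Integer(1), sp.Integer(0), -2 * m
r = f1 * w + f2                                   # here r coincides with the coordinate w

# mass relation m = (2 h f2^2 f1' - f1 f3) / (2 h^2)
mass = (2 * h * f2**2 * sp.diff(f1, u) - f1 * f3) / (2 * h**2)

# f(u,w) = (A r^2 + B r + C) / (f1^2 r)
A = 2 * h * sp.diff(f1, u)
B = 2 * h * f1 * sp.diff(f2, u) - 2 * h * f2 * sp.diff(f1, u) - h**2
C = 2 * h**2 * m
f = (A * r**2 + B * r + C) / (f1**2 * r)

for eps in (1, -1):
    print(sp.simplify(mass.subs(h, eps) - m))     # 0: the mass relation returns m(u)
    print(sp.simplify(f.subs(h, eps)))            # an expression equal to -(1 - 2*m(u)/w)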
§.§ The Vaidya Metric in Israel Coordinates In this subsection, we explore how by introducing different choices to the functions f_n(u), we obtain Israel coordinates. Let us consider the following choices: f_1(u) = U(u), f_2(u) = 2 M(u), and f_3(u) = 0. It follows from (<ref>) that for M(u)=m(u) (which is a choice), U^'(u) = h(u)/4m(u). Thus, with the aid of the first fundamental theorem of calculus we write U(u) = ∫_0^uh(x)/4m(x) dx. However, since our choices for the function h(u) will be confined to either +1 or -1, we set h(u)=h=±1. Consequently, the expression (<ref>) takes the form U(u) = h∫_0^u1/4m(x) dx. It follows that the spacetime line element (<ref>) can be written as ds^2 = (w^2/2m(u)r+4hm^'(u)/U(u)) du^2+2hdudw+r^2dΩ^2_2, where r is no longer a coordinate; it is now a function r=r(u,w) = U(u)w+2m(u) and G^ww = 2hm^'(u)/U(u)r(u,w)^2. Here, u is a null coordinate and (<ref>) describes both outgoing and ingoing radiation. It is interesting to note that the presence of h is not necessary for (<ref>), as demonstrated in <cit.>, particularly when m^'(u)=0. It is noteworthy that, in accordance with (<ref>), the apparent horizon is now located at w=0. There is some ambiguity regarding the sign of u which appears in the definition of the function U(u) (<ref>); for example, in <cit.>, u is always positive, whereas in <cit.> u can be either positive or negative. We shall resolve this ambiguity and demonstrate when u can be negative or positive. To this end, recall that U^'(u) = h/4m(u), which means that the sign of U^'(u) is solely determined by the sign of h. Also, with the aid of the WEC, (<ref>), and (<ref>), we have m^'(u)/hU(u) = m^'(u)/∫_0^udx/4m(x) > 0, where in the last equation we have taken h^2 = 1. Hence, for m^'(u)>0 the integral must be positive (u in the integral must be positive) and for m^'(u)<0 the integral has to be negative (u in the integral must be negative). Consequently, we have seen that the sign of u in the integral is not always positive like in <cit.>, and the dichotomy in the function U(u) based on the sign of u is explained in a more articulated way. We have summarized all the choices we have considered thus far in Table <ref>. Finally, we introduce a restriction on the w coordinate corresponding to the the surface r(u,w) = 0, the physical singularity, see below. Since r(u,w) = U(u)w+2m(u), for r(u,w) = 0 we obtain w = -2m(u)/U(u)≡ w_0(u), and so w_0 > 0 for U(u)<0 and w_0 < 0 for U(u)>0. It turns out that this exactly the case when we study the radial null geodesics in the proposed maximal extensions of the Vaidya metric <cit.>. § INVARIANTS Up to syzygies <cit.>, we find that the only non-differential non-vanishing invariant of (<ref>) is the first Weyl invariant, w1R ≡1/8C_αβγδC^αβγδ = 3/2h(u)^4r(u,w)^6(f_1(u)f_3(u)-2h(u)f_1(u)'f_2(u)^2), which reduces to the following expression in Israel coordinates, w1R ≡1/8C_αβγδC^αβγδ = 6m(u)^2/r(u,w)^6, where C_αβγδ is the Weyl tensor. However, as (<ref>) makes clear, it would be informative to have invariant information for m^'(u). This is obtained by way of the Bach tensor <cit.>, see also <cit.>. First define A_αβδ = ∇^γC_αγβδ, where ∇^γ denotes contravariant derivative. The Bach tensor is given by B_αβ = ∇^δ A_αβδ+R^γδC_αγβδ/2. Since the Bach tensor is trace-free, the first Bach invariant is B≡ B_αβB^αβ. In the present case we find, with the aid of (<ref>), that B = (4U(u)m^'(u)/r(u,w)^4)^2. 
Nevertheless, the preceding result does not provide the desired invariant definition of m'(u) due to its dependence on the functions r(u,w) and U(u). § SUMMARY AND DISCUSSION We have examined the construction of Israel coordinates for the Vaidya metric and have reduced the problem to finding appropriate expressions for the characterizing functions that arise from integrating the field equations. This construction is systematic and does not necessitate any coordinate transformation, which gives us the opportunity to identify potential extensions of the Vaidya manifold by introducing distinct expressions for the characterizing functions, f_n(u). Nonetheless, the main focus of this paper is to reconstruct Israel coordinates for the Vaidya metric. By utilizing the WEC, we have understood the role of the function h(u) in the Vaidya metric. Although the sign of h(u) is paramount in determining the direction of radiation in the EFL coordinates, we have demonstrated that this is not the case for Israel coordinates. That is, both ingoing and outgoing radiation can be achieved with h=+1 or h=-1. However, the impact of changing the sign of the function h(u) will be further investigated when we discuss the completeness of Israel coordinates in <cit.>. The next step, see <cit.>, is to introduce explicit mass functions as candidates for the three possible Vaidya models and assess the completeness of Israel coordinates in relation to these mass functions. § ACKNOWLEDGEMENT This work was supported (in part) by a grant from the Natural Sciences and Engineering Research Council of Canada (to KL).
http://arxiv.org/abs/2307.03877v1
20230708014525
Designing Mixed-Initiative Video Games
[ "Daijin Yang" ]
cs.HC
[ "cs.HC", "cs.AI", "J.5" ]
To my family. I would like to first thank my great, beautiful, and strong mother. Since being diagnosed with multiple myeloma nine years ago, she has endured unimaginable suffering, but she has never given up on her life and has even achieved more in her work than healthy people. After contracting COVID-19, her condition deteriorated rapidly, and as I write this paper, she is fighting bravely against cancer in the hospital. She has always inspired and supported me to move forward. She is a great mother and I will love her forever. I am grateful to my father, my mother's sister, and other family members for taking care of my mother and allowing me to focus on my studies. I would like to thank Professor Elina Tochilnikova, Professor Giovanni Maria Troiano, Professor Bob De Schutter, Professor Casper Harteveld, Professor Leanne Chukoskie, and all other professors in the field of Game Science and Design at Northeastern University for their invaluable guidance and unwavering patience in supporting my work. I would also like to express my sincere gratitude to Professor Max Kreminski at Santa Clara University for providing crucial feedback and suggestions on my thesis. I would like to extend my appreciation to all of my colleagues who generously provided valuable suggestions and constructive feedback on my work. Additionally, I am grateful to my friends Binyao Jian and Xinyan Deng, who stood by me during the most challenging times. Their unwavering support and companionship have been invaluable to me. The development of Artificial Intelligence (AI) enables humans to co-create content with machines. The unexpectedness of AI-generated content can bring inspiration and entertainment to users. However, these co-creation interactions are typically designed for content creators and remain poorly accessible to others. To explore the gamification of mixed-initiative co-creation and make human-AI interactions accessible and fun for players, I prototyped Snake Story, a mixed-initiative game where players can select AI-generated texts to write a story of a snake by playing a “Snake”-like game. A controlled experiment was conducted to investigate the dynamics of player-AI interactions with and without the game component in the designed interface. In a study with 11 players (n=11), I found that players utilized different strategies when playing the two versions; that game mechanics significantly affected the output stories, players' creative process, and role perceptions; and that players with different backgrounds showed different preferences for the two versions. Based on these results, I further discuss considerations for mixed-initiative game design. This work aims to inspire the design of engaging co-creation experiences. Keywords - human-AI interaction, gamification of human-AI collaboration, mixed-initiative interface, mixed-initiative game, AI co-writing, playing and creating conflicts CHAPTER: INTRODUCTION Recent machine learning (ML) techniques have boosted human creation, enabling humans to co-work with artificial intelligence (AI) to compose music <cit.>, draw illustrations <cit.>, write stories <cit.>, reply to emails <cit.>, create characters <cit.>, and develop games <cit.>. In this mixed-initiative co-creation <cit.> process, AI acts as a partner of humans and provides real-time feedback aligned with the creation iteration.
Since the algorithm can generate numerous instances from simple inputs in a relatively short time, mixed-initiative interfaces can help their users quickly explore the solution space, inspire them with unexpected ideas <cit.>, and make creative experiences accessible to non-professional creators <cit.>. Current mixed-initiative co-writing interfaces mainly focus on supporting writers. These systems were designed to help writers keep stories consistent, plan plots, get unstuck <cit.>, and change text-based stories into other forms <cit.>. Users must have basic writing skills to operate these systems. Other work introduced gamified designs such as temporary rules and goals <cit.>, as well as scores <cit.>, into mixed-initiative co-writing to make the system more enjoyable for novice writers. However, previous work on human-AI collaboration in the context of creative writing focused on AI as a supporting mechanism to facilitate creative storytelling efforts. Here, I extend prior work by exploring the use of AI for mixed-initiative creative writing as a game mechanic in the context of game design. To design mixed-initiative video games, I aim to explore the following research questions: (1) What patterns of interaction and player identification emerge in the player-AI co-creating process? (2) How do game mechanics impact the creation and role perceptions in the process? (3) How can mixed-initiative co-creation be integrated with game mechanics for a unified play experience? To ground my study, I designed and prototyped Snake Story, a mixed-initiative game with the mechanics from “Snake” [https://www.arcade-history.com last accessed 03.06.2023]. The game (referred to as the game version) involved players selecting AI-generated texts or adding their own texts by controlling a growing snake to eat candies on the map, resulting in the creation of a story about a snake. A GPT-3 <cit.> based language model was employed to provide two text selections in each round with different preset parameters. The model considered previous selections and wrote an ending for the story when the game or the interaction was over. For comparison, a system (referred to as the non-game version) was also developed for players to directly select AI-generated texts without engaging in gameplay. To investigate how players dynamically interact with the game, I conducted a within-subject user study with 11 players (n = 11). Each player was asked to write an approximately 300-word story about a snake in each of the two versions, assigned in random order, and their experience was analyzed using a mixed-method approach, including gameplay log data, a survey, think-aloud protocols, interviews, and observations. Results from the study show that game mechanics significantly affected players' text selection strategies, the quality of the stories, and the sense of engagement in creating; that players shared a selection strategy for GPT-3-generated texts; that different players had different play strategies in both versions and thus perceived themselves and the AI differently because of the game mechanics; and that players with different writing and AI experiences held different preferences for the two versions. In summary, the thesis makes the following contributions to mixed-initiative game design: (1) I introduce Snake Story, a mixed-initiative game for collaborative writing with AI. I present techniques that enable players to write with AI, and I develop both game and non-game interactions and interfaces.
(2) In a within-subject study with 11 players, I compared the non-game and the game version and defined: (a) Players' usage data. (b) Statistic difference between the two versions. (c) Players' strategies for selecting AI-generated texts to create stories (d) Players' play strategies and role perceptions in the two versions. (e) Players' preferences for the two versions. (3) Based on the results of the user study, I discuss the design implications that mixed-initiative games should: (a) Resolve playing and creating conflicts. (b) Increase narrative engagement in playing. (c) Enhance emotional involvement in creating. (d) Balance playing and creating. (e) Find new evaluation criteria. Taken together, these findings guide the design of future engaging co-creation experiences. CHAPTER: RELATED WORK § NEURAL LANGUAGE MODELS FOR TEXT GENERATION Text generation has appeared as a critical application of natural language processing (NLP) technologies, with various applications such as chatbots <cit.> and content creation <cit.>. The rise of deep learning has enabled significant advancements in the field of text generation, with language models such as the Generative Pre-trained Transformer (GPT) series achieving remarkable success. It has been proven that GPT models have the ability to generate texts that cannot be distinguished from human-written pieces <cit.>. Based on the previous GPT-2 structure <cit.>, the GPT-3 model with larger model size, dataset size, and more training has demonstrated stronger abilities in text completion and outperformed GPT-2 on several metrics, including fluency, coherence, and relevance <cit.>. As a result, GPT-3 was employed in the Snake Story game. By using a few-shot learning approach <cit.>, the GPT-3 model is able to perform specific tasks under natural language instructions inputted by users. For example, in the Snake Story game proposed in this thesis, a prefix "writing a story of a snake" was added to restrict the generated texts under the topic. Despite the impressive advancements in text generation, several challenges remain in using GPT-3, including the issue of bias and the difficulty of producing diverse and engaging content. The issue of bias stands for that the generated text may reflect the biases inherent in the training data. Identical prompts in GPT-3 can result in stereotype-related outputs including biases on sex <cit.>, race, and certain religious groups<cit.>. Also, GPT-3 still has the problem of sometimes generating low-quality texts that repeat themselves, lose coherence over paragraphs, and have contradictory logic <cit.>. This problem will be enlarged when the parameters of GPT-3 are not set properly [https://platform.openai.com/docs/api-reference/models last accessed 03.06.2023]. § MIXED-INITIATIVE CO-WRITING INTERFACES Mixed-initiative interfaces that enable co-creation in various fields have been widely researched <cit.>. The interfaces can take advantage of exploratory creativity from human writers and the fast generation of diagrammatic lateral paths from generative algorithms to create mixed-initiative co-creativity <cit.>. Extensive research has explored the potential of mixed-initiative interfaces to aid human writing through editing human-written texts, as well as generating and expanding ideas. Editing and refining functions are the most common functions in the interfaces. For example, Shuming Shi et al. 
<cit.> utilized AI technologies to boost users' writing proficiency by enabling them to generate higher-quality text more efficiently. This was accomplished through the implementation of five distinct categories of features in their digital writing assistant, including text completion, error detection, text refinement, keyword-to-sentence conversion (K2S), and cloud-based input methods (cloud IME). Xin Zhao <cit.> developed a writing assistant that helps non-native English speakers overcome language barriers by offering rewriting alternatives with various tones (such as casual or formal) and lengths (like shortening or expanding). In collaborative writing, AI can also serve as an idea generator, contributing to the generation of novel concepts and plot lines. For instance, Melissa Roemmele et al. <cit.> created a system that aids users in brainstorming by providing suggestions for the next sentence in a story. Swanson et al. <cit.> described an interactive storytelling system that utilizes a case-based reasoning architecture to offer a range of options for the subsequent sentences, leading the story in entirely diverse directions. Chung et al. <cit.> introduced an alternative to suggestion-based co-ideation approaches by developing line-sketching interactions that enable users to co-create stories while actively controlling and making sense of the protagonist's fate. Beyond idea generators, the AI in <cit.> was granted a more substantial role as an active writer and assumed responsibility for continuing users' narratives through a unique form of story solitaire. In contrast, Biermann et al. <cit.> proposed that AI could jeopardize writers' control, autonomy, and ownership by exceeding co-creative limits, and therefore sought to preserve human control over the writing process in their system. Moreover, AI can assist in bridging the gaps between the skeleton structures of stories. Ammanabrolu et al. <cit.> introduced an ensemble-based model capable of generating event-driven stories. Yuan et al. <cit.> built a text editor that can provide plot points that contextualize the scene built by humans. Laclaustra et al. <cit.> introduced a system that empowered users to specify characters, locations, and objects within a story world. The system subsequently generated a rudimentary story by allotting actions to individual characters and creating a sequence of events. In summary, the aforementioned applications are well-designed to assist creative writers in enhancing language, ensuring consistency, overcoming writer's block, managing reader experience, as well as refining and iterating on expressive intent <cit.>. However, it is crucial to note that more research is needed to cater to the needs of casual creators or non-creators in the realm of content creation. § MIXED-INITIATIVE CO-WRITING GAMES In order to broaden the accessibility of co-creative experiences for a wider range of users, various applications have recognized the benefits of integrating mixed-initiative co-writing as a valuable component of narrative instruments <cit.> in games, thereby enhancing the overall interactive experience of "play". For example, a mixed-initiative text-based game, AI Dungeon[https://play.aidungeon.io/main/home last accessed 03.06.2023], used AI to generate and respond to players' text-based commands and choices.
The AI system produces a distinctive story outcome based on the players' inputs, providing an evolving and personalized gaming experience of exploring and creating stories in pre-set scenes. Moreover, Kreminski et al. <cit.> developed "Why Are We Like This?", a mixed-initiative, co-creative storytelling game that aimed to engage players in investigating the generated history of characters and to bring the story to a satisfying conclusion by selecting and writing actions for the characters. The game involves designed author goals, proper AI suggestions, and player curiosity to encourage co-authorship. While the term "play" is commonly used to denote the interaction between human and mixed-initiative interfaces, it is essential to recognize that "games" bear distinctive dissimilarities from play, as they feature unambiguous goals that encourage participants to engage in the interpretation and optimization of rules and tactics <cit.>. The introduction of goals into a system serves as the most straightforward means of distinguishing mixed-initiative co-writing games from mixed-initiative co-writing interfaces. Xi et al. <cit.> introduced KuiLeiXi, an open-ended text adventure game that required players to interact with the AI to achieve predetermined plot goals. The system was developed to address the lack of incentives for players in AI Dungeon. Additionally, Ben Samuel et al. <cit.> created a mixed-initiative playful tool, Writing Buddy, that integrates the affordances of both authoring and playable media to support creative writing endeavors. The game mechanics prompted players to engage in a puzzle-solving-like experience, where they possessed the freedom to add or eliminate story beats to alter the characters' states within the game and attained the pre-determined narrative goal. Building upon the concept of Writing Buddy, Kreminski et al. <cit.> developed Loose Ends, a mixed-initiative co-creative storytelling play experience that incorporates obligatory storytelling goals that can be parameterized with specific characters and additional constraints. In addition, the AI system in Loose Ends ensures consistency with all previous texts, which emulates the functionality of an active writing partner. The present state of mixed-initiative co-writing games suggests that their full potential has yet to be realized, as they continue to rely on interactions that overlap with mixed-initiative interfaces. While the mobile game designed by Castaño et al. <cit.> represents a step forward in this field, enabling users to collaboratively create a story by arranging a card-game-like system, further exploration of combining mixed-initiative interfaces and game mechanics are required. CHAPTER: SNAKE STORY To ground my study, a mixed-initiative game named "Snake Story" was designed and developed in the Unity3D engine. As shown in Fig. <ref>, the game consists of 2 parts: the non-game version and the game version. The game allows players to write different stories under the same prompt, “writing a story of a snake”, with GPT-3 generated texts in different turn-based interactions. The "text-davinci-003" model [https://platform.openai.com/docs/models/overview last accessed 03.06.2023] was employed in the system to generate the text. § NON-GAME VERSION As illustrated in Fig. <ref>, the non-game version of the system functions as follows: players are presented with two 30-word text options with different temperatures (0.6 and 1.4) generated by GPT-3 in each turn. 
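To make this generation step concrete, the following is a minimal sketch of how one such turn could be issued against the legacy OpenAI completions endpoint that "text-davinci-003" used; the function names, prompt wording, and token limit are illustrative assumptions rather than the thesis's actual Unity3D implementation.

# Illustrative sketch of one co-writing turn (assumes the legacy openai Python
# client, pre-1.0, which exposed openai.Completion for text-davinci-003).
import openai

TASK_PREFIX = "Write a story of a snake.\n\n"     # topic restriction via the prompt

def continue_story(story_so_far: str, temperature: float) -> str:
    """Request one roughly 30-word continuation of the story so far."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=TASK_PREFIX + story_so_far,
        max_tokens=40,                 # ~30 words; exact word trimming would be extra work
        temperature=temperature,
    )
    return response["choices"][0]["text"].strip()

def generate_options(story_so_far: str) -> tuple[str, str]:
    """One turn: a 'safer' low-temperature option and a riskier high-temperature one."""
    return (continue_story(story_so_far, temperature=0.6),
            continue_story(story_so_far, temperature=1.4))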
If they wish to continue the story, they can select one of the options, which will be automatically added to the narrative. Alternatively, players can opt to compose their own text to continue the story if they are dissatisfied with the AI-generated options. In the subsequent turn, GPT-3 generates two fresh text alternatives for the players to choose from. Once the players decide to end the story, GPT-3 assists in linking the narrative to the predefined ending: ", and the story of the snake ends" with a maximum of 80 words. As depicted in Fig. <ref>, the interface of the non-game version presents the two GPT-3-generated text options on the left side of the screen, accompanied by square buttons containing labels to their left. Additionally, an input field is positioned beneath the text options, enabling players to contribute their own textual content. Once the GPT-3 generation process is completed, the button adjacent to the input field becomes interactable. Players can then click this button to incorporate their selected text into the ongoing narrative, marking the initiation of a new turn. Moreover, an "End" button is situated underneath the text options, providing players with the means to end the story. § GAME VERSION In contrast to the non-game version, the game version of the system employs "Snake"-game-like mechanics as a metaphor for adding paragraphs to a story, as demonstrated in Fig. <ref>. In the game version, players are still presented with two selections of texts. However, these texts are now represented by candies positioned on a 15*15 tile map, each of which possesses unique mechanics. To add a text to the story, players must navigate a growing snake towards the corresponding candy, which triggers the addition of the selected text to the narrative, along with the application of the corresponding mechanics to either the player or the game map. Players are unable to terminate the story unless their life points become exhausted. As shown in Fig. <ref> a), the game is played on the left-hand side of the screen, while the story is displayed on the right-hand side. The player's life points are displayed at the bottom-left corner, under the tile map. The two text selections, along with their corresponding candies, are displayed under the story. A countdown scrollbar for the pause time is located between the story and text selections, and the game pauses momentarily when new candies and texts appear. Once a player collects the special candy (blue), they are given the opportunity to contribute to the story by writing their own text. As shown in Fig. <ref> b), an input field will appear under the two text selections, and a corresponding yellow candy will be generated on the map. As shown in Fig. <ref>, seven different tiles are designed in the game, comprising six types of candies and one obstacle. The candies are divided into two pools: pool 1 for text selection 1 and pool 2 for text selection 2. To investigate how game mechanics can affect players' choices of texts, the temperature for selection 1 and candy pool 1, which has negative effects, is set lower than that for selection 2 and candy pool 2, yielding better and more stable text output for selection 1 (the exact values are given below). Candies with neutral and negative effects are designed for pool 1 and are indicated by negative colors. The white candy, with neutral mechanics, will only increase the snake's length by 1, while the black candy will additionally introduce three extra obstacles on the map.
Furthermore, the red candy will decrease the player's life points by 1. Pool 2, on the other hand, features candies with neutral and positive effects as counterparts to the negative candies, indicated by positive colors. The green candy will add 1 life point, while the blue candy will permit players to write their text in the next turn, as demonstrated in Fig. <ref>. After each turn, three obstacles will be added to the map to increase the difficulty level. In order to investigate the influence of game mechanics on players' text choices, the temperature for selection 1 and candy pool 1 was intentionally set lower (0.6) than that for selection 2 and candy pool 2 (1.4). This decision was made to improve the quality and stability of text output in selection 1. Considering players’ average reading speed and the usage of the think-aloud protocol, the game will be paused for 25 seconds each time when players get new texts. This pause duration will be extended to 45 seconds when players wish to write their own text to add to the story. Players can choose to end the pause early by clicking the buttons adjacent to the text selections, similar to how they would end their turns in the non-game version. When players’ life points become 0, the interaction and the story will end. As shown in Fig. <ref>, players will enter a result page. On the right-hand side of the screen, the full story with an automatically generated ending will be displayed. Additionally, the interface will indicate the length of the snake and story, as well as provide information on the types of candies consumed by the player during gameplay. CHAPTER: USER STUDY § PARTICIPANTS To research how different players interact with Snake Story, 11 Game Design students (n=11, referred to as P1-P11) from Northeastern University were recruited through a Discord poster to play the game. Given the premise that the players' writing and AI experience may contribute to their distinct perceptions of the game <cit.>, the study recruited a diverse cohort of participants with variable levels of writing proficiency and collaborating experiences with AI. All participants volunteered for the study and were not compensated. § PROCEDURE The study was designed as a within-subject investigation, whereby each participant was assigned to play both the non-game version and the game version of Snake Story in random orders. In each session, the participant was given a brief tutorial on the game mechanics and interface and was then instructed to compose a 300-word story about a snake with AI. The participant was also required to engage in think-aloud protocols during the 10-to-15-minute gameplay. Subsequently, the participant was asked to complete a 5-Likert scale usability questionnaire. Following the completion of two sessions, the participants would participate in a semi-structured interview lasting approximately 5-10 minutes, in which they shared their interaction experiences. Finally, participants were asked to complete a demographic survey, which included questions about their writing and AI experience. § EVALUATION In the game, each text selection generated by GPT-3 was captured and stored. Moreover, the game also recorded the players' selection of texts and the stories they created. To further evaluate the user experience quantitatively, the usability questionnaire incorporated queries on the quality of the generated text, the overall story, and the user's interaction experience. 
These collected data were subjected to quantitative analysis, including the use of Wilcoxon signed-rank tests to compare the results from the two versions of Snake Story. During the study, the screen was recorded to capture the participant's interactions with the game, while the audio was recorded to generate transcriptions of the think-aloud protocols and interviews. The resulting data were analyzed using a qualitative approach based on open coding <cit.>, allowing for a thorough exploration of the participants' experiences and interactions with the game. CHAPTER: QUANTITATIVE RESULTS § USAGE STATISTICS [1] Colors in the game version candies row match candy colors mentioned in Section <ref> 11 players wrote a total of 22 stories about snakes in 2 versions of the Snake Story. The total number for each detailed statistic with an average number (M) and standard deviation (SD) are reported. As shown in Fig. <ref>, the players made a total of 130 choices (M = 11.82, SD = 1.80) in the non-game version. Of these, the generated texts with a lower temperature (0.6) were selected 63 times (M = 5.73, SD = 1.76), while the generated texts with a higher temperature (1.4) were selected 53 times (M = 4.82, SD = 2.48). Additionally, the players chose to write their own words 14 times (M = 1.27, SD = 2.05). On average, the players spent 49.14 seconds (SD = 13.13) making decisions in the non-game version. Correspondingly, the players made a total of 142 choices (M = 12.91, SD = 4.50) in the game version. Of these, 0.6 temperature texts were selected 43 times (M = 3.91, SD = 1.98), while 1.4 temperature texts were selected 89 times (M = 8.09, SD = 4.72). Players chose to write their own words 10 times (M = 0.91, SD = 1.83). On average, the players spent 27.33 seconds (SD = 7.69) making decisions in the game version. In the game, 91 white candies were generated, 42 of which were selected (46.15%); 50 black candies were generated, 18 of which were selected (36.00%); 47 red candies were generated, 11 of which were selected (23.40%); 46 green candies were generated, 31 of which were selected (67.39%); 47 blue candies were generated, 30 of which were selected (63.83%); 40 yellow candies were generated, 10 of which were selected (25.00%). Wilcoxon signed-rank tests were conducted to compare players' selection and time usage differences in the 2 versions. The test results showed that there was no significant difference in the total number of selections made by players (W(11) = 29.0, p = 0.76). However, the test results showed that game mechanics significantly affected players' choices for 0.6 temperature texts (W(11) = 7.0, p = 0.035). By contrast, it was worth noting that players' choices for 1.4 temperature texts (W(11) = 10.0, p = 0.14) had no statistically significant differences. Moreover, no significant differences were found in self-writing (W(11) = 2.5, p = 0.16) choices between the two versions. Additionally, the analysis indicated that players made decisions significantly faster in the game version (W(11) = 2.0, p = 0.0029). § STORY EVALUATION The stories in the non-game version had an average of 260.64 words (SD = 35.61), while the stories in the game version had 272.64 words(SD = 64.22). There was no significant difference in the length of the stories between the two versions (W(11) = 28, p = 0.70). Automated writing evaluation tools <cit.> were employed to assess the cohesion, grammar, language diversity, and overall writing quality of the 22 stories. 
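For readers who want to reproduce the style of paired comparison used throughout this chapter, the sketch below shows a Wilcoxon signed-rank test on per-player counts with SciPy; the numbers are placeholders, not the study's raw data. The individual evaluation tools are described next.

# Paired, non-parametric comparison of the two versions (illustrative data only).
from scipy.stats import wilcoxon

# e.g. how many low-temperature (0.6) texts each of the 11 players selected
non_game = [5, 7, 4, 6, 8, 5, 3, 7, 6, 5, 7]    # hypothetical values
game     = [3, 5, 4, 2, 6, 4, 3, 5, 4, 3, 4]    # hypothetical values

stat, p = wilcoxon(non_game, game)               # one pair of observations per player
print(f"W = {stat:.1f}, p = {p:.3f}")            # compared against alpha = 0.05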
Cohesion was evaluated using two metrics obtained from the Tool for the Automatic Analysis of Cohesion[https://www.linguisticanalysistools.org/taaco.html last accessed 03.08.2023] (TAAOC) <cit.>: the sentence overlap rate (S. Overlap) and the paragraph latent semantic overlap rate (P. LSA). The Grammar and Mechanics Error Tool[https://www.linguisticanalysistools.org/gamet.html last accessed 03.08.2023] (GAMET) <cit.> was utilized to detect the number of grammatical errors in the texts. In order to assess the language diversity of the writing, the Tool for the Automatic Analysis of Lexical Diversity[https://www.linguisticanalysistools.org/taaled.html last accessed 03.08.2023] (TAALED) <cit.> was employed. This tool was chosen for its ability to provide a reliable metric for the measure of textual lexical diversity (MTLD) <cit.>. Finally, GPT-3[https://chat.openai.com/chat last accessed 03.08.2023] itself was used to provide an overall score for the stories on a scale of 0 to 10 <cit.>. The results of the evaluations were shown in Table <ref>. The results from Wilcoxon signed-rank test indicated that the change in text selection preference between versions may impact the cohesion of paragraphs within the stories (W(11) = 6.0, p = 0.014). However, no other significant differences were found in the stories between the two versions. Furthermore, the players were requested to assess the stories they wrote in the 5-Likert scale questionnaire. The results of this evaluation are presented in Figure <ref>. The additional Wilcoxon signed-rank test results indicated that there were no significant differences in the language used in the stories between the game and non-game versions (W(11) = 12.0, p = 0.19). However, the logic of the stories in the game version was significantly weaker than that in the non-game version (W(11) = 0.0, p = 0.0096). Moreover, the overall quality of the stories in the game version was significantly lower than that of the non-game version (W(11) = 3.5, p = 0.020). § QUANTITATIVE EXPERIENCE REPORT As shown in Fig. <ref>, through the implementation of Wilcoxon signed-rank tests on the questionnaire data, it was observed that players had significantly less authorship of the story in the game version (W(11) = 3.0, p = 0.031). Furthermore, the players showed a significant difference in their interaction goal between the two versions, as they placed a greater emphasis on prioritizing the quality of the stories in the non-game version (W(11) = 2.0, p = 0.040). Nonetheless, no significant statistical difference was detected in their preference for the stories across the two versions (W(11) = 4.0, p = 0.083). Additionally, players rated their interaction experience in the questionnaire. As shown in Fig. <ref>, players had significantly different perceptions between the two versions. Specifically, The game version was perceived to be significantly more complex compared to the non-game version (W(11) = 0.0, p = 0.039). Additionally, interactions within the game version were reported to have a significant impact on the creation process (W(11) = 0.0, p = 0.00098), whereas the creation process in the game version was considered to be less engaging (W(11) = 2.5, p = 0.047). CHAPTER: QUALITATIVE RESULTS § INTERACTION PATTERNS IN THE NON-GAME VERSION §.§ Text Selection Strategies After analyzing the think-aloud data of players in the non-game version, 124 open codes were identified into 5 distinct categories based on the explanations the players provided for their choices. 
These categories are language (24), consistency (69), unexpectedness (17), self-writing reasons (12), and other situation (2). §.§.§ Language Players tended to choose texts of higher language quality, particularly those containing detailed descriptions, elaborate adjectives, and emotional expressions (19). For example, P9 mentioned "Although both 2 texts were cohesive to the previous texts, the description of snake behavior in Text 1 is more specific." when selecting between "...to explore the inside, and soon found himself submerged in the knee-depth water. He made his way from lily pad..." and "...and he quickly jumped into the water. He swam around for hours, enjoying the cool and refreshing sensation of the pond's waters. As...". Additionally, players demonstrated a preference for texts that were well-structured and composed with a professional tone (5). For instance, P3 mentioned "I think the 2 selections are similar, but the second one is more professional and I will go with this one." when selecting between "...He had seen many creatures come and go in his long life, but he was content with his own company. He kept to himself, and the..." and "...Named Anaconda, known by every passing creature in pursuit of warmth. One could often hear laughter ringing near it’s solace...". §.§.§ Consistency Players preferred the texts that aligned with their pre-determined tone (24). As an illustration, P5 pointed out "I would select 1 because 1 is more like a start of a fairy tale. I do not want to write a realistic story so I will choose 1." when selecting between "Once upon a time, there was a small snake who lived in the forest. She was very curious and loved to explore her surroundings. One day..." and "Once there was a green, spotted snake who mind made his home in the deep parts of lush tropical jungle. This snake was quite different than other...". Moreover, players demonstrated a proclivity towards selecting texts that unfolded the story in their anticipation (15). Such as P2 stated "I wanted to put my snake in a safe situation, (because) I don't want my snake to die. (Choose 1)" when choosing from "...-green scales glinted in the sun. Alice was sure that this snake wasn't dangerous, and she certainly didn't want to..." and "...shadowed a lower tear of its cheekbones. 'Hello there,' She eagerly greeted the glowing-eyed serpent and for just a few...". Furthermore, players exhibited an inclination towards texts that maintained coherence with preceding texts, specifically those that exhibited sound and expandable logic (30). As an instance, P7 said "I think a snake cannot wear the necklace. Also, the golden necklace is so luxurious that it does not seem like something a bird that has just been saved would have." when selecting between "...special gift. It was a beautiful golden necklace with a single ruby in the center. Slither was amazed by the gift and decided to wear it around..." and "...(a)n Acorn sap, that would grant Slither fortune to any human being wished aide of him. Slither couldn't wait to tell the...". §.§.§ Unexpectedness Players displayed a preference to select unexpected texts that featured fresh settings, new characters, and surprising plot twists (11). To illustrate, P3 explained "I think 1 is more fun. It has new people (objects). 2 just mentioned familiar faces. I don't like familiar faces" when choosing between "...it was surprised to find a new world of exotic animals, plants and trees. It found itself in an oasis full of life and beauty,..." 
and "...it met the familiar faces, who watched without any hesitation. The explorative beast evolved into steps of understanding slowly and carefully forging relationships while exchange...". In addition, players showed a propensity to select texts that possessed a sense of suspension regarding their potential narrative developments (6). For example, P11 said "I want to see where this goes. I chose this (2) because it's messed up, and I want to see if it becomes more messed up if I choose the messed up option." when selecting between "...I grant ye the power to control the elements of this world, but only when you accept my blessing.' George was terrified and uncertain what..." and "...Gallivanting about every daye frivolipaseday joorneys with on larkining flightal skeemeshyne lizard wingable sprites...". §.§.§ Self-writing Reasons In the situation that neither of the presented text options fulfilled their preferences, players were observed to write their own content, which frequently drew inspiration from the provided text selections (6). As an example, P10 mentioned "The first one is cheap...I like the 'anger' part (in 2), but then I don't like the 'mouth puckering', so maybe I can do that..." when facing "...and eventually let out a soft purr before turning away and walking off into the distance. The snake was relieved that he had been spared,..." and "...and anger never leaving his eyes. He narrowed his focus to the powerless serpent before him, is mouth puckering upward ready to end this mercy mission of...", but finally writing "..., but anger never leaving his eyes. It calmed down eventually and let out a soft purr before turning away and walking off into the distance. The tiny snake was relieved that he had been spared,..." Players' desire to play with the AI was another factor that motivated them to write their own content (6), as will be illustrated in Section <ref>. §.§.§ Other Situation In a rare situation, players indicated satisfaction with both text options and selected one randomly (2). P7 said "I think both of the selections were good and appealing to me. Can I randomly choose one of them? (After performing a random selection procedure,) OK, I will go with 1." when choosing between "...other animals of his luck, but first he wanted to test the sap. He poured some onto a nearby rock and wished for more food. Suddenly,..." and "...animals in the Kindgdom about this wonderful gift! He spread word quickly, and sure enough many of his animal friends began asking for his help....". §.§ Role Perceptions and Play Strategies Five players use AI as a writing assistant (WA) to support their writing. Three of these players, who identify themselves as writers, believed that they had made the majority of the contributions to the story. For example, P5 said "Well, even though the AI generated most of the content, I still feel like I had a significant role in creating the story because I made the choices on how the AI wrote. So, I believe that I can claim authorship of this story." in the interview. The other two players claim less authorship of the story and describe themselves as "puppet masters" according to P3. P6 also shares this sentiment, stating 'I think I am just providing prompts, and the AI can help me to link them.'. Four players consider AI to be an active writer (AW) that provides stories for them to read. They would describe themselves as readers of an interactive storybook, where the AI is the author, and they are the audience. 
For instance, P10 mentioned "I'm not planning on writing much on my own. I'm actually more interested in seeing what the AI comes up with." before starting to play the non-game version. The two remaining players consider AI as a playful tool (PT) and engage in challenging or tricking the AI by actively using self-writing functions to generate unexpected or amusing outcomes. They view AI-generated texts not only as a means of co-creation but also as a source of entertainment, exploring the limits and capabilities of the system. To illustrate, P6 mentioned "I think AI is pretty good at generating texts based on cohesive inputs, but I'm curious to see how it can handle unexpected situations. So I'm gonna test it out by seeing what happens if I just kill off the snake in the story and let the AI continue from there." when adding the sentence "The snake is dead." at the very beginning of the story. § INTERACTION PATTERNS IN THE GAME VERSION §.§ The Effect of Mechanics All players acknowledged that the mechanics had a significant impact on their co-writing process. The overall game design, particularly the time limit mechanics, had a significant influence on how the players read the generated texts. Two out of eleven players reported that they never read the generated text. For instance, P5 mentioned that "Given that it's a game, I'm not particularly concerned about what the AI writes. My main focus is on the gameplay itself." By contrast, four players read the text in its entirety, but only intermittently as they controlled the snake to avoid obstacles. For example, during the 5th round, P11 commented, "I think I can find a safe path for my snake to stay in, and then I can have extra time for reading the texts. Oh, this works!". The remaining five players opted to give the generated texts a quick scan. To illustrate, P8 mentioned "So basically, I just skim through the text real quick cause I also need to focus on figuring out how to get my snake to chow down on what I picked out for it at the same time." in the interview. Additionally, the candy mechanics influenced players' choice strategies. Despite their low-quality text, the green and blue candies (good candies) are particularly attractive to players. To illustrate, P2 mentioned "I would more likely go for the green candy to regain my lost HP and keep myself alive in the game for a bit longer." while playing the game. By contrast, black and red candies (bad candies) are rarely chosen by players. For example, P7 mentioned that "Even though the texts in the black candy are better, I'm not really keen on making the game more challenging. Plus, the white candy's text is good enough for me." However, in situations where white candies are present alongside white or "good" candies, or when a player's health points are at a safe level, they are more likely to apply their selection strategies from the non-game version for text content. Such as P11 said "The (black) one I chose just now was the more sad option, and I am choosing to make this snake's life sad." selecting between "...The snake was quite content with this lifestyle until one day, when it heard a strange noise coming from the house. Curiosity got the better..." (black) and "...Seasons unearthed happiness all around this cozy old home, kids screeched and teenage gossip shook the foundation undoubtedly our friend welcomed shelter in..." (blue). 
Furthermore, the intentional design of the obstacles in the game resulted in a notable increase in emotional arousal among players during the co-creation process. Players were found to experience a range of negative emotions, such as tension and frustration, when attempting to navigate and avoid obstacles, or when inadvertently colliding with them. §.§ Role Perceptions and Play Strategies Despite all players identifying as "players" in the game version, their respective play strategies exhibited significant variation. The majority of the players (7) made trade-offs (T) between game mechanics and writing. These players aimed to uphold the story's integrity but were willing to compromise its quality if it meant prolonging the snake's lifespan in the game. As an illustration, P11 mentioned "I'm really tempted to pick the red option, but I know it'll end up killing me, so I'm gonna have the other one. I'd rather keep myself alive in the game to see more stories." However, the other four players ignored (I) the writing system and merely focused on the "Snake" game. These players pursued either the good or the bad candies exclusively during gameplay, purely maintaining the life of the snake or increasing the difficulty of the game for fun. For example, P1 stated "Even I want to choose the texts but the mechanics keep me away. To be honest, I'd much rather focus on enjoying the gameplay rather than putting effort into crafting a compelling narrative." in the interview. § PREFERENCE [1]AI experiences: N (No), Y (Yes); Writing experiences: R (Rich), P (Poor), N (No); Non-game Version (V.) Role Perception (RP): WA (Writing Assistant), AW (Active Writer), PT (Playful Tool); Game Version (V.) Play Strategies (PS): I (Ignore Writing), T (Make trade-offs); Preference: NG (Non-game Version), G (Game Version), - (No preference) As shown in Table <ref>, nearly half of the players (5) did not demonstrate a discernible inclination towards either the non-game version or the game version. While they believed that the non-game version was more suitable for serious writing, they found the game version to be more entertaining and enjoyable. For example, P5 mentioned "If I wanna write a story, I would choose the first one (the non-game version). But for fun, I would play the second one (the game version)." in the interview. However, three players (P1, P6, and P8) expressed their strong dislike for the game version, stating that it significantly impaired their creation and reading process. For instance, P8 explained "I didn't think it was as fun as the other version of the game. I thought it was a little stressful ... if you enjoy that type of narrative (reading or writing a story as it unfolds), I think the first one is, gonna be more appealing." in the interview. Nevertheless, the remaining three players (P7, P9, and P11), who had neither AI nor writing experience, expressed their strong admiration for the game version. They believed that the challenges presented in the game version increased their engagement in the creation process. As an illustration, P9 stated "I like the game version more. I think the challenge in the game makes me more engaged in the interaction. The sense of tension in the game version makes it harder for me to consider each selection thoroughly. This means I'm always looking forward to the next choice, hoping to make better decisions than before." in the interview.
CHAPTER: DISCUSSION § RESOLVE PLAYING AND CREATING CONFLICTS Designing mixed-initiative games with consideration for the potential conflicts between gameplay and creative content generation is essential to promote engagement in the co-creating process. Mechanics that allow for both play and creativity to coexist can encourage players to develop their own unique stories and experiences within the game world. Specifically, as discussed in Section <ref> and Section <ref>, clear rules and mechanics in Snake Story can pose additional challenges for players who wish to engage in creative content generation, particularly when their writing goals (write a better story) conflict with the playing goals of the game (live longer). To mitigate such conflict, exchanging the temperature between good and bad candies can incentivize players to focus on both keeping the snake alive and generating high-quality stories. However, it is important to note that some intrinsic conflicts between playing and creating cannot be easily resolved through such parameter adjustments. In such cases, more specialized and deliberate mechanics must be designed. For example, Snake Story has an emergent endpoint when players run out of life points, whereas players' stories may continue, making it difficult to determine a definitive end for them. One possible solution for this issue can be a Neo-rogue-like game system with permanent death mechanics <cit.> that enables players to continue creating a larger story despite dying multiple times. § INCREASE NARRATIVE ENGAGEMENT IN PLAYING Developing a tight narrative link between game mechanics and co-created content is a crucial factor in augmenting the participants' sense of immersion in mixed-initiative games. Although Snake Story was designed based on a metaphorical representation of the manipulated snake as the snake in the story, a majority of the players (n=7) expressed their dissatisfaction with the perceived disconnection between the game and the narrative. Two possible directions can be applied to Snake Story as well as future mixed-initiative games. The first direction involves simulating the creative process of renowned writers, such as Shakespeare, in crafting a story. This would involve modeling how such writers generate and develop various ideas, unfold plots, and navigate potential challenges in their writing process. In the game, AI would be leveraged to simulate the thought processes of these writers <cit.>, while game mechanics can enable players to actively participate in the co-creation of the story by engaging in this abstract thinking process. Alternatively, players can be cast as the main character of the co-created story. This can be accomplished through an interactive drama game design <cit.>, wherein players take on the role of the protagonist and make consequential decisions that influence the story's direction. To enhance player immersion and emotional investment in the story, personalized elements reflecting the player's experiences and characteristics can be integrated using AI. However, since players' interests align with those of the characters, conflicts between playing and creating must be resolved through additional mechanics. § ENHANCE EMOTIONAL INVOLVEMENT IN CREATING To mitigate player frustration, mixed-initiative games should incorporate a degree of flexibility that allows players to manage unforeseen emergent events that may arise during gameplay or the creative process. 
For instance, in Snake Story, as discussed in Section <ref>, players experienced frustration when they were unable to allocate sufficient time to planning the story and maneuvering the snake simultaneously. To address these concerns, a mechanic could be incorporated that enables players to conserve unused time during easier situations and then utilize it during more challenging scenarios. This flexible design can decrease player frustration by introducing a feeling of control, while still retaining the intensity of the gameplay experience. Moreover, given that mechanics have the potential to exert a noteworthy influence on players' co-creation strategies, mixed-initiative games can employ incentivization through game mechanics as a means of fostering engagement in the co-writing process. For example, in the Snake Story, favorable outcomes can be associated with the acquisition of the yellow candy, thereby stimulating players to generate their own textual content. § BALANCE PLAYING AND CREATING Similar to the significance of traditional games keeping players in an optimal state of flow <cit.>, mixed-initiative games should maintain a good balance between playing and creating for players. There are different positive feedback mechanisms between gaming and creative endeavors. Gaming requires short-term, rapid feedback, while creative endeavors often involve long-term, slow feedback. As mixed-initiative games require the players to both engage with game mechanics and creative content generation, it is crucial that the game design facilitates a smooth transition between these two modes in its gameplay. This can be achieved through thoughtful design of factors such as game pacing and player agency. Furthermore, a well-designed mixed-initiative game should provide players with appropriate guidance and tools to enable them to create meaningful and enjoyable content, without feeling overwhelmed by the creative demands of the game. In addition, it is imperative to account for individual differences when designing mixed-initiative games. This is due to the fact discussed in Section <ref> that varying players may necessitate distinct interaction strategies, thereby necessitating a tailored approach to maintain optimal playing-creating flow. Additionally, AI should consider the unique creating strategies (as described in <ref>) of each player to generate personalized content that aligns with their writing goals. The integration of player-centric AI content generation can help to keep players in the flow by reducing low-quality options and providing uplifting text at the appropriate time. § FIND NEW EVALUATION CRITERIA To achieve a unified experience of creating and playing in mixed-initiative games, it is crucial to establish novel evaluation criteria that can fairly assess players' creative behavior. This is because an unfair assessment may lead to player frustration and undermine the gameplay experience. While the use of automatic writing evaluation <cit.> was demonstrated in the study as a post-evaluation method for the stories, its applicability to evaluating writing quality within the game may be limited by its statistical nature, which might not be applicable for an individual's writing and does not consider subjective player perceptions. Furthermore, real-time human evaluation is not a feasible option. As such, a potential solution could involve the development of a novel algorithm to evaluate players' work automatically. 
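As one illustration of what such an algorithm might look like (a toy sketch of the idea, not a validated metric and not part of the original system), a lightweight in-game score could combine cheap proxies such as lexical diversity and adjacent-sentence overlap:

# Toy in-game scoring heuristic (illustrative only; proxies and weights are arbitrary).
import re

def words(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

def lexical_diversity(text: str) -> float:
    tokens = words(text)
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def adjacent_overlap(text: str) -> float:
    sents = [set(words(s)) for s in re.split(r"[.!?]+", text) if s.strip()]
    if len(sents) < 2:
        return 0.0
    jaccard = [len(a & b) / max(len(a | b), 1) for a, b in zip(sents, sents[1:])]
    return sum(jaccard) / len(jaccard)

def toy_story_score(text: str) -> float:
    """Map the two proxies onto a rough 0-10 scale."""
    return round(10 * (0.6 * lexical_diversity(text) + 0.4 * adjacent_overlap(text)), 1)

print(toy_story_score("Once upon a time, there was a snake. The snake lived in the woods."))

Any such heuristic would, of course, have to be checked against human judgments before being trusted as an in-game reward signal.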
Alternatively, a better approach could involve incorporating game mechanics that allow players to self-evaluate or rate each other. However, the effectiveness and feasibility of these approaches need further investigation. Additionally, while current evaluation criteria for traditional games may still apply to some extent, mixed-initiative games involve unique features and require new criteria to accurately measure their effectiveness. Mixed-initiative games require new evaluation criteria that account for both the game mechanics and the effectiveness of the mixed-initiative interface. Specifically, it is important to assess how the game mechanics are dynamically combined with the mixed-initiative interface. Nevertheless, the evaluation of mixed-initiative games is still an area that requires further research to establish effective criteria and methodologies CHAPTER: CONCLUSION In conclusion, the paper presents a prototype of a mixed-initiative game, Snake Story, aimed at exploring gamification of co-creation interactions between humans and AI. The study involved 11 participants, who were asked to play two versions of the game with and without game mechanics. The finding suggested that mechanics might significantly influence the players' creative processes and self-identifications in the game. Additionally, players with different backgrounds showed different preferences for the two versions. Overall, the study highlights the potential of gamification in making human-AI interactions accessible and fun for players and provides valuable insights for the design of engaging co-creation experiences. plain CHAPTER: STORIES § SAMPLE STORY 1 (GAME VERSION P8) Once upon a time, there was a snake who lived in the woods. He had never seen another creature like himself, but he enjoyed living among the trees and eating the small animals that lived there. One day, he noticed something strange in the distance - a long, winding line of creatures like himself! He decided to investigate, and as he got closer, he realized that the line was a group of snakes making their way through the forest. He was so excited to see other snakes like himself that he quickly joined the procession, slithering alongside them as they moved through the forest. Along the way, he felt particularly drawn to one snake in particular, who would often look back and pause for just a moment as if to acknowledge his presence. After a few days, the procession reached its destination - a beautiful, secluded lake in the middle of the forest. The snakes quickly dispersed, but the snake that he had been drawn to stayed behind and waited for him. The two of them shared a moment above the waters as they looked into each other’s eyes, acknowledging their instant connection. From then on, it was only the two of them and their limitless adventures among the trees and by the lake, both content to live life together in this idyllic home. § SAMPLE STORY 2 (NON-GAME VERSION P8) Once upon a time, there was a small snake named Lucy. She lived in the woods near a small village and often ventured out during the night when things were still and quiet. Every day was the same for Lucy, scampering among the earthy loam of the forest floor in search of insects and grubs to satisfy her hunger. But one night on her usual midnight march, something stopped Lucy in her tracks – a basket of fruits, vegetables and other goodies had been left outside the village gates. Lucy was curious and hungry so she slithered closer to investigate. 
As she inched closer, Lucy noticed that the basket was guarded by a large and intimidating snake. He had a long body with shimmering golden scales and a sharp, pointed tail. Lucy knew that this was no ordinary snake – it was a cobra! The cobra noticed Lucy and coiled itself around the basket as to challenge her. Even with her tiny size, Lucy stood up and faced off against the cobra. Still her bravery paid off and the cobra slithered away, allowing Lucy to feast on all the goodies inside. From that day forward, Lucy became known as the brave little snake who stood up against a cobra. She was respected and admired by all of her forest friends, and even the villagers began to leave treats outside the gates for her. Lucy lived a long and happy life in the woods, always remembered as the brave and intrepid, little snake. § SAMPLE STORY 3 (GAME VERSION P9) Once, there lived a majestic green snake in the heart of a untouched forest. Its piercing fire suffused its emerald body as it knowingly crawled through the foliage. The snake had a special affinity for humans and often followed them around their camps, watching from afar as they cooked, talked, and laughed. It had no fear of them and often interacted with them in a friendly manner - though some people were scared of it because of its size. One day, the snake was out exploring a new part of the forest when it stumbled across a mysterious stone altar with strange symbols carved into it. It was intrigued and decided to investigate further, only to find that the altar held a powerful magical gem. The snake quickly realized that the gem had the power to grant wishes, and it began to think of all the things that it could wish for. After much deliberation, it decided that it wanted to fly so that it could see the world beyond its forest home. So, with a passionate final wish, the snake found itself rising into the air and soaring through the sky. It was a liberating experience for the snake, and it enjoyed every second of its newfound freedom. From that day forward, the snake was able to explore distant lands and experience new cultures. It even made friends with other animals along its journey. The snake was truly happy, and it would never forget the day it found that magical gem. § SAMPLE STORY 4 (NON-GAME VERSION P9) Once upon a time, in a grassy meadow surrounded by forest hills, wove the adventurous and playful snake named Oscar. He was brown and yellow in colour, with a white diamond pattern on his back. Oscar was always looking for new places to explore, so one day he decided to wander through the forest hills. He slithered up and down the dirt paths, taking in all of nature's beauty around him. As he kept moving, Oscar noticed a small pond in the middle of the forest. He decided to take a closer look and when he got there, he was in awe. The pond was crystal clear and filled with lily pads and colorful fish. Oscar couldn't resist the temptation and he quickly jumped into the water. He swam around for hours, enjoying the cool and refreshing sensation of the pond's waters. As the sunlight dipped behind the trees, Oscar decided to pay one last exploration visit before he finished up for the day. As he rounded a corner, he came face to face with a giant boa constrictor. The two were both startled, but the boa quickly wrapped itself around Oscar and squeezed tight. Oscar was scared and he knew that his life was in danger, but then something amazing happened. The boa constrictor loosened its strong grip and released Oscar without hurting him. 
The two then just looked at each other, as if they were communicating something neither of them could understand. Oscar was so surprised by the snake's kindness and he knew that this experience had changed him forever. From then on,t the world around him was full of wonders, just waiting for him to discover. CHAPTER: LOG DATA § SAMPLE LOG DATA 1 (GAME VERSION P11)) [3/6/2023 7:29:47 PM]Game Start [3/6/2023 7:29:48 PM][1][3]Once lived in an old abandoned house in the center of a busy city, was a black-bellied golden snake. Day by day he would [3/6/2023 7:29:48 PM][0][1]Once upon a time there was a snake named Slither who lived in the forest. He had been living peacefully and happily until one day, when [3/6/2023 7:30:03 PM]Chose[1][3] [3/6/2023 7:30:04 PM][1][0] scour up and down the crevices of this broken home unnoticed. Occasionally it crawled out into the scared city ridden society to cultivate a small hoard [3/6/2023 7:30:04 PM][0][1] slither around the house, exploring every corner and crevice. One day he was out for a stroll in the garden of the old house [3/6/2023 7:30:17 PM][2][5] [3/6/2023 7:30:21 PM]Chose[1][0] [3/6/2023 7:30:22 PM][1][3] of food just enough to keep warm. As chilling winter winds slowly rolled in and darkness increased, no bouts of sunlight hit the grimy abode itself [3/6/2023 7:30:23 PM][0][1] of food. One day, the snake decided to venture out a bit further into the city than usual. He was mesmerized by all [3/6/2023 7:30:52 PM]Chose[1][3] [3/6/2023 7:30:53 PM][1][4]. Seasons unearthed happiness all around this cozy old home, kids screeched and teenage gossip shook the foundation undoubtedly our friend welcomed shelter in [3/6/2023 7:30:53 PM][0][1]. The snake was quite content with this lifestyle until one day, when it heard a strange noise coming from the house. Curiosity got the better [3/6/2023 7:31:19 PM]Chose[0][1] [3/6/2023 7:31:21 PM][1][3] of it and so, he slithered its way inside towards the commotion. To his surprise were two other snakes - one bright orange spotted snake that [3/6/2023 7:31:21 PM][0][2] of it, and the snake decided to investigate. As it slithered its way up the stairs, towards the source of this noise, its scales began [3/6/2023 7:32:09 PM]Chose[0][2] [3/6/2023 7:32:10 PM][1][4] a tapestry of gold and red waves in the sometimes pitch twilight. When finally it made its way to place from where this racket fell on its [3/6/2023 7:32:10 PM][0][0] to tremble in fear. It was a human, and it seemed as if he had found the snake's secret hideout. The man [3/6/2023 7:32:44 PM]Chose[1][4] [3/6/2023 7:32:46 PM][0][2] ears, it was astounded by what it saw. There in the center of this dingy room stood a small boy, no older than [3/6/2023 7:32:46 PM][1][4] ears, what it encountered swore changed his life forever. Two people lay huddled next to in the interlocked embrace; poor souls everywhere had [3/6/2023 7:33:12 PM][2][5]he snake had a massive concussion and it couldn't move for a few minutes. [3/6/2023 7:33:16 PM]Chose[2][5] [3/6/2023 7:33:17 PM][0][0] When it finally regained its senses, the snake realized that the noise was coming from a group of people who were renovating this old home [3/6/2023 7:33:18 PM][1][0] To it surprise the intruder was an orange rat, thief like eyes darting to and fro then, looking at it . On its left hip pocket was [3/6/2023 7:34:39 PM]Chose[0][0] [3/6/2023 7:34:40 PM][1][4]. 
It had instantly scored itself an arduous task - Surviving this hostile environment would take hidden amidst the remodelling mess- all depending on patience and [3/6/2023 7:34:40 PM][0][2]. They had come to give the house a much needed facelift and were unearthing all its hidden secrets. However, in their haste they [3/6/2023 7:35:18 PM]Chose[1][4] [3/6/2023 7:35:19 PM][0][2] luck. The snake, however, was a master of unseen and it managed to stay hidden for months on end until the house was finally [3/6/2023 7:35:19 PM][1][0] strength alike. Evey night while evading punches and hammer-killer blows, that very same unsettling snake somehow managed to maintain existence into this confusing [3/6/2023 7:35:44 PM][2][5]and it's agility of movement. [3/6/2023 7:35:49 PM]Chose[2][5] [3/6/2023 7:35:50 PM][0][0]The snake was able to dodge the workers' tools and their careless feet. For weeks, it stayed hidden in a corner of the house, [3/6/2023 7:35:51 PM][1][4] Determined to beat the odds, and escape both unscathed plus undestroyed the snake definitely nested deep down regular ceilings and walls. Now is [3/6/2023 7:36:54 PM]Chose[1][4] [3/6/2023 7:36:56 PM][0][1] the time to wait and watch. As days passed, the snake became accustomed to its new home - it knew where it could find food [3/6/2023 7:36:56 PM][1][0] another member of new house happily nestled furniture plots manoeuvring escape for various protected corners. Growing about doubled its apparel and authority, this dust-rake [3/6/2023 7:37:24 PM][2][5]ind it's way down a pipe and into a safe underground haven. Aka the sewer. [3/6/2023 7:37:27 PM]Chose[2][5] [3/6/2023 7:37:28 PM][0][2] The snake stayed there for many years, while the house was slowly remodelled and restored to its former glory. As time passed by, [3/6/2023 7:37:28 PM][1][0]Gearing up each weakening suna with newfoundfault rather vighiand swimming tamanringly tyrying es never laidoutsdong [3/6/2023 7:37:45 PM]Chose[1][0] [3/6/2023 7:37:47 PM][0][2] in the sewers, learning to live off of whatever edible morsels it could find. Years passed and the snake slowly adapted to this [3/6/2023 7:37:47 PM][1][0]obtained ulumizedeach tiny wramping word ways falwen curprise lifhpiful second uphew.Finally iazuplettsand w [3/6/2023 7:38:13 PM]Chose[1][0] [3/6/2023 7:38:15 PM][1][3]all waters an passiffirece teleasight immish are found campfty confidescedi hraculumsetll tearchedting old for [3/6/2023 7:38:15 PM][0][1]hile the snake could barely remember it's old life, it was content to spend its days in the sewers of the city. [3/6/2023 7:38:29 PM]Game End [3/6/2023 7:38:29 PM]Ate[14] § SAMPLE LOG DATA 1 (NON-GAME VERSION P11)) [3/6/2023 7:49:53 PM]Game Start [3/6/2023 7:49:54 PM][0.6]Once upon a time there was a small snake who lived in the forest. The snake loved to explore his surroundings and would often slither around looking [3/6/2023 7:49:54 PM][1.4]When Gus came to visit his uncle's country farm he was excited for the adventures that awaited him. He had heard all about how this new environment offered [3/6/2023 7:50:22 PM][Add Own Text]There once was a Boy named George that lived on a farm near a forest. [3/6/2023 7:50:23 PM][0.6] One day, while walking in the forest, he came across a large snake. The snake was coiled up and had bright yellow eyes that seemed to [3/6/2023 7:50:23 PM][1.4] On the very edge of what felt like a forbidden wild abyss, there was an old abandoned shed that felt particularly displaced from Nature. 
Little did George know [3/6/2023 7:50:54 PM][Chose][0.6] [3/6/2023 7:50:55 PM][0.6] look right through him. George was scared, but he knew that snakes were important to the environment and so he decided not to run away. [3/6/2023 7:50:55 PM][1.4] stare straight into George's soul. As George stepped closer, his eyes began to widen as fear and tension filled the air. George hesitated at [3/6/2023 7:51:34 PM][Add Own Text]e was now convinced that he was chosen to be a member of the dark side. [3/6/2023 7:51:35 PM][0.6] He knew that he could no longer live his old life and had to accept the snake as his master. He followed the snake deep into [3/6/2023 7:51:35 PM][1.4]He timidly reached out to softly touch its flat yet slimy black color. When he had finally worked up enough courage, his finger immediately [3/6/2023 7:51:49 PM][Chose][0.6] [3/6/2023 7:51:50 PM][0.6] the forest and to a large clearing with an old, decrepit cabin. The snake slithered around George and motioned for him to enter the cabin [3/6/2023 7:51:50 PM][1.4] the Forest and before long, they came across an old temple. Inside the temple was a large room filled with pits of glowing green goo - this [3/6/2023 7:52:31 PM][Chose][1.4] [3/6/2023 7:52:32 PM][0.6] was the snake's home. The snake spoke to George, telling him that he could have anything he wanted if he pledged his allegiance and [3/6/2023 7:52:33 PM][1.4] was no ordinary sight. The snake slithered in and placed itself at the centre of the room before beginning to speak: "Close humanling [3/6/2023 7:53:18 PM][Chose][1.4] [3/6/2023 7:53:19 PM][0.6], I am the guardian of this temple and you have been chosen to serve me. From now on you will be my servant and do as I command [3/6/2023 7:53:20 PM][1.4], it's time ye became one with the snakes of dragons. I shall be tay tupping hearfy to protect youalways from lives touched barren [3/6/2023 7:53:39 PM][Chose][1.4] [3/6/2023 7:53:40 PM][0.6]. I grant ye the power to control the elements of this world, but only when you accept my blessing." George was terrified and uncertain what [3/6/2023 7:53:40 PM][1.4]. Gallivanting about every daye frivolipaseday joorneys with on larkining flightal skeemeshyne lizard wingable sprites [3/6/2023 7:54:05 PM][Chose][1.4] [3/6/2023 7:54:06 PM][0.6]. George was amazed by what he heard and felt compelled to accept the snake's offer, so he nodded in agreement. He quickly began [3/6/2023 7:54:06 PM][1.4]. Forewards danwortion glymback bowards blessid them plateary putenderbound beasts." George was astounded but knew this [3/6/2023 7:54:12 PM][Chose][1.4] [3/6/2023 7:54:14 PM][0.6] was his destiny. He bowed down before the snake and accepted its offer to become a part of its family. From then on, George became known as [3/6/2023 7:54:14 PM][1.4] must be the purpose of whatever journey lay ahead he was been. The snake placed its hood across his head and instructed him ingest a mysterious blue liquid in [3/6/2023 7:54:44 PM][Add Own Text]absolute gibberish must be a forgotten tongue spoken by the ancient snake [3/6/2023 7:54:45 PM][1.4]s of dragons. With a slight belly flutter, he stepped out with the Giant Snake behind him and declared his loyalty to protecting its power full force [3/6/2023 7:54:45 PM][0.6]s. He nodded his head in agreement and from that day forward, became the snake's apprentice. The snake taught George the forgotten language of dragons and [3/6/2023 7:55:17 PM][Chose][0.6] [3/6/2023 7:55:19 PM][1.4] came to see him as a son. 
George worked hard learning the magicks from his master and embracing the darkness with open arms and righteous judgment. [3/6/2023 7:55:19 PM][0.6] together they travelled to distant lands, learning secrets and exploring the unknown. Years passed by and George became an expert in dragon lore and a [3/6/2023 7:55:46 PM][Chose][1.4] [3/6/2023 7:55:47 PM][0.6]Years later, George had become a powerful wizard and a respected leader of the dark side. He was feared by many but respected for his wisdom and [3/6/2023 7:55:48 PM][1.4]When George reached maturity, he joined the warrior army of Serpentin Graywitch who worshipped him and regarded him as one of their champions. George made [3/6/2023 7:56:06 PM][Chose][1.4] [3/6/2023 7:56:07 PM][1.4] excellent achievements in the army, never losing a battle or mission. Little by little, with the help of George its prime warrior, the dark forces slowly [3/6/2023 7:56:07 PM][0.6] sure to always keep the snake close by his side and together they fought off many enemies. The snake was very proud of George and eventually rewarded [3/6/2023 7:56:25 PM][Chose][1.4] [3/6/2023 7:56:26 PM][0.6] began to take over the world. Today, George is known as the King of Snakes and his faithful servant still by his side. [3/6/2023 7:56:27 PM][1.4] took control of the entire landscape and drew all things into itselfy. Who knew one random meeting with a snake in the forest would lead to such power [3/6/2023 7:56:44 PM][Chose][1.4] [3/6/2023 7:56:45 PM][0.6]! [3/6/2023 7:56:45 PM][1.4]? [3/6/2023 7:56:47 PM][Chose][1.4] [3/6/2023 7:57:01 PM]Game End CHAPTER: SURVEY QUESTIONS § STORY EVALUATION In a degree of 5 (from strongly disagree to strongly agree), answer the following questions: Q1 I think the logic of the story is well organized. Q2 I think the language in the story was professionally written. Q3 I think the overall quality of the story was perfect. § EXPERIENCE EVALUATION In a degree of 5 (from strongly disagree to strongly agree), answer the following questions: Q1 I think the story is written by myself. Q2 I think I prioritize the quality of the story in the game. Q3 I think I like this story and want to share it with others. Q1 I think the system is too complex to be understood. Q2 I think the gameplay interrupts my thinking while writing the story. Q3 I think I am engaged in the co-writing process. § DEMOGRAPHIC QUESTIONS Q1 Did you co-create content with any Artificial Intelligence before? Y/N Q2 How would you describe your writing skills? 1 Never wrote stories 2 Have some skeletons of stories but never complete them 3 Wrote some stories and shared them with others privately or published them
http://arxiv.org/abs/2307.05601v1
20230710202858
Unsupervised Domain Adaptation with Deep Neural-Network
[ "Artem Bituitskii" ]
cs.CV
[ "cs.CV", "cs.LG" ]
§ INTRODUCTION

We start with the necessary theory and motivation to understand the essential definitions and ideas associated with domain adaptation.

§.§ Theory

Domain adaptation is a subfield of machine learning that deals with the problem of transferring knowledge learned from one domain to another related but different domain. In real-world scenarios, it is common to encounter situations where the data distribution of the target domain differs significantly from that of the source domain used to train a model. This can lead to a significant drop in the performance of the model on the target domain. To understand clearly what domain adaptation is about, we should start with transfer learning and dig deeper into the theory. The articles <cit.> and <cit.> give a high-level overview of the theory connected with domain adaptation. Let us start with the definition of transfer learning and the types it consists of.

(Transfer learning) We consider a source data distribution S called the source domain, and a target data distribution T called the target domain. Let X_S × Y_S be the source input and output spaces associated to S, and X_T × Y_T be the target input and output spaces associated to T. We use S_X and T_X to denote the marginal distributions of X_S and X_T, and t_S and t_T to denote the source and target learning tasks depending on Y_S and Y_T, respectively. Then, transfer learning aims to improve the learning of the target predictive function f_T : X_T ⟶ Y_T for t_T using knowledge gained from S and t_S, where S ≠ T or t_S ≠ t_T.

According to these papers (<cit.>, <cit.>), transfer learning algorithms can be classified into three categories based on the differences between the source and target tasks and domains: inductive, transductive, and unsupervised transfer learning.

* Inductive transfer learning involves using labeled data from the source domain to train a model for a different, but related, target task in the target domain. In this case, some labeled data from the target domain is required to fine-tune the model.
* Transductive transfer learning, on the other hand, refers to using both labeled data from the source domain and unlabeled data from the target domain to improve the model's performance on the target domain. In this case, the tasks remain the same while the domains are different.
* Unsupervised transfer learning involves adapting a model trained on the source task to perform well on a related, but different target task in the target domain, without any labeled data in either the source or target domains.

Domain adaptation is a type of transfer learning where the target task remains the same as the source task, but the domain differs (the second type – transductive transfer learning). Depending on whether the feature spaces remain the same or differ, domain adaptation is categorized into homogeneous and heterogeneous domain adaptation.
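To make the transductive setting concrete, the toy sketch below (my own illustration, not taken from the thesis) builds a source/target pair in which the labeling rule — the task — is identical, while the marginal input distributions differ; this is exactly the situation domain adaptation targets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared labeling rule, i.e. the task t_S = t_T is the same in both domains
def label(x):
    return (x.sum(axis=1) > 0).astype(int)

# Source domain S: labeled data, features centered at the origin
x_source = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))
y_source = label(x_source)

# Target domain T: same task, but a shifted marginal distribution T_X,
# and the labels are not available during training
x_target = rng.normal(loc=1.5, scale=1.0, size=(1000, 2))
y_target_hidden = label(x_target)  # kept only for evaluation

print("source mean:", x_source.mean(axis=0), "target mean:", x_target.mean(axis=0))
```

Even though the labeling rule is unchanged, a model fitted only on the source sample has no guarantee of behaving well in regions where target inputs concentrate but source inputs are rare; this is the gap that the methods discussed below try to close.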
Machine learning techniques are commonly categorized based on the availability of labeled training data, such as supervised, semi-supervised, and unsupervised learning. However, domain adaptation assumes the availability of data from both the source and target domains, making it ambiguous to append one of these three terms to "domain adaptation". There are different ways how these terms can be applied to domain adaptation, but we use the same as in <cit.>. * Unsupervised domain adaptation refers to the case where both labeled source data and unlabeled target data are available * Semi-supervised domain adaptation refers to the case where labeled source data and some labeled target data are available * Supervised domain adaptation refers to the case where both labeled source and target data are available. Unsupervised Domain adaptation could be applied to a wide range of tasks in NLP <cit.>, in vision <cit.> and in many other applications where assigning labels to examples is tedious or impossible. This report is more focused on studying unsupervised domain adaptation with using deep neural-networks. Before move on to the practical part, it is important to discuss theoretical analysis and guarantees that can be used in fields associated with transfer learning. Thus, there are several methods that allow you to analyze the generalization gap in machine learning <cit.>. One of the most popular approaches is the model complexity approach, which estimates the generalization bound by measuring the complexity of the hypothesis set, such as Vapnik-Chervonenkis (VC) dimension and Rademacher complexity. Another approach is to use the stability of the supervised learning algorithm in relation to the datasets. Stability is a measure of how much a change in a data point in the training set can affect the output of the algorithm. Both of these approaches have been used to analyze the generalization bounds of transfer learning algorithms. It is equally important to discuss distributions and what experts mean by shift when analyzing transfer learning algorithms. Distribution refers to the set of all possible values of a random variable, and a shift refers to a change in the distribution of the data between the source and target domains. Understanding the shift in the distribution of the data is crucial in developing effective transfer learning algorithms, as it enables the selection of appropriate techniques for adapting the model to the target domain. Unsupervised domain adaptation (UDA) is a type of supervised learning that involves training a model using labeled source data and applying it to unlabeled target data, where the distributions of the two domains differ. Let the source domain be represented by (x^S , y^S ) = (x^S_k , y^S_k)_k=1^m_S , and the target domain be represented by x^T = (x_k^T)_k=1^m_T. The number of observations in the source and target domains are denoted by m_S and m_T respectively. The main challenge of domain adaptation is to develop a predictor that performs well in the target domain by leveraging the similarities between the two domains. One way to accomplish this is by making assumptions about how the joint distribution P(X, Y) changes across the domains. In the case of covariate shift, the marginal distribution P(X) changes while the conditional distribution P(Y|X) remains the same. However, in real-world scenarios, P(Y|X) may also change, requiring further assumptions. 
One such assumption is that the joint distribution can be factored into P(Y) and P(X|Y), allowing changes in P(Y) and P(X|Y) to be addressed independently. The problem is then broken down into modeling the shift in the label distribution and the shift in the conditional distribution of the features given the labels separately <cit.>.

§.§ Motivation

Unsupervised domain adaptation (UDA) is a technique used in machine learning where a model is trained on labeled data from a source domain that has similar characteristics to the target domain, but where the target domain lacks labeled data. The goal is to create a model that will perform well on the target domain despite not having labeled data from that domain. In UDA, the source and target domains are related but not identical, so the model has to learn how to generalize across domains.

The first reason to be engaged in this field is the scarcity of data. Collecting labeled data in the target domain can be expensive and time-consuming. UDA allows us to use the available labeled data in the associated source domain to learn representations that generalize well to the target domain without requiring additional labeled data. By minimizing the discrepancy between domains, the model can learn more robust and transferable representations, which leads us to the second reason – improved generalization and domain robustness. The last reason is that UDA allows models to adapt to new environments. This is a common situation in real applications, where models are trained on specially prepared data and are then applied to all other data types.

§ STATE-OF-THE-ART

In this section, we discuss the main purposes, approaches and algorithms that specialists in the field of domain adaptation use in their research. All figures in this section are taken from the cited articles.

§.§ UDA by Backpropagation

The purpose of the article "Unsupervised Domain Adaptation by Backpropagation" by Yaroslav Ganin and Victor Lempitsky <cit.> is to tackle the problem of domain shift in machine learning and to propose a solution using a neural-network model built from a few standard layers and a gradient reversal layer (GRL). The GRL makes the network learn domain-invariant features by minimizing the difference between the distributions of the source and target domains. The architecture of the model is shown below (see Figure <ref>).

The authors introduce an architecture that predicts both the label y ∈ Y and the domain label d ∈{0, 1} for each input x. The architecture consists of three parts: a feature extractor f = G_f(x, θ_f), where θ_f is a vector that represents the parameters of all its layers; a label predictor G_y that maps the features obtained from the feature extractor to the label y, with θ_y representing its parameters; and a domain classifier G_d that maps the same feature vector f to the domain label d, with θ_d representing its parameters. The purpose is to minimize the label prediction loss for the source domain and simultaneously make the features f invariant to the domain. To achieve this, the authors optimize the parameters θ_f of the feature mapping to maximize the loss of the domain classifier, while the parameters θ_d are optimized to minimize the loss of the domain classifier. The authors consider the loss

E(θ_f, θ_y, θ_d) = ∑_i=1..N, d_i = 0 L_y(G_y(G_f(x_i; θ_f); θ_y), y_i) - λ∑_i=1..N L_d(G_d(G_f(x_i; θ_f); θ_d), d_i) = ∑_i=1..N, d_i = 0 L_y^i(θ_f, θ_y) - λ∑_i=1..N L_d^i(θ_f, θ_d)

where L_y and L_d are the label prediction and domain classification losses, respectively (the index i refers to the i-th example).
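Before moving on to the saddle-point formulation of this objective, it is worth seeing how the gradient reversal layer mentioned above is typically realized in PyTorch. The sketch below is my own minimal illustration, not the authors' code; the module names in the usage comment are placeholders.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # gradient w.r.t. x is reversed and scaled; lambd receives no gradient
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Typical use inside a DANN-style forward pass (the three modules are placeholders):
# features      = feature_extractor(x)
# class_logits  = label_predictor(features)
# domain_logits = domain_classifier(grad_reverse(features, lambd))
```

In the forward direction the layer is transparent, while in the backward direction it flips and scales the gradient flowing from the domain classifier into the feature extractor, which is what allows a single optimizer step to descend on the label loss while ascending on the domain loss.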
The parameters θ̂_f, θ̂_y, θ̂_d are sought as a saddle point:

(θ̂_f, θ̂_y) = min_θ_f, θ_y E(θ_f, θ_y, θ̂_d)

θ̂_d = max_θ_d E(θ̂_f, θ̂_y, θ_d)

During learning, the trade-off between the two objectives that shape the features is controlled by the parameter λ. The following stochastic updates can find a saddle point:

θ_f ← θ_f - μ( ∂L_y^i/∂θ_f - λ ∂L_d^i/∂θ_f )

θ_y ← θ_y - μ ∂L_y^i/∂θ_y

θ_d ← θ_d - μ ∂L_d^i/∂θ_d

where μ is the learning rate. These updates are similar to SGD but with a -λ factor in the first update, which prevents the features from becoming dissimilar across domains. To implement this factor, the authors introduce a GRL that acts as an identity transform during forward propagation but multiplies the gradient by -λ during backpropagation.

§.§ Semantic Representations for UDA

Next, we continue with the article "Learning Semantic Representations for Unsupervised Domain Adaptation" by Xie, Shaoan, et al. <cit.> The main purpose of the article is to propose a new method for unsupervised domain adaptation that utilizes semantic information to learn domain-invariant representations. The authors propose a domain adaptation algorithm based on the idea of using adversarial learning to learn a feature representation that is invariant to domain shifts. They train a feature extractor that maps the input data to a high-dimensional feature space, and a domain classifier that predicts the domain label of the input data (see Figure <ref>). The feature extractor G is trained to confuse the domain classifier D, while the domain classifier is trained to correctly predict the domain label. In this way, the feature extractor is encouraged to learn features that are invariant to domain shifts, while still being discriminative for the task.

First, the authors denote the cross-entropy loss for the source domain as L_C(X_S, Y_S). Then, the discrepancy between the source domain and the target domain is taken to be

L_DC(X_S, X_T) = d(X_S, X_T) = 𝔼_x ∼ D_S [log(1 - D ∘ G(x))] + 𝔼_x ∼ D_T [log(D ∘ G(x))]

Moreover, the authors introduce one more loss, which targets the semantic representation. Centroid alignment is used for this purpose. By computing the centroid of each class, both correctly and incorrectly pseudo-labeled samples are utilized together:

L_SM(X_S, Y_S, X_T) = ∑_k=1^K Φ(C_S^k, C_T^k)

where C_S^k and C_T^k are the class centroids in the two domains and Φ(x, x') = ‖x - x'‖^2. This approach aims to cancel out the negative effect of inaccurate pseudo-labels with accurate ones. Thus, the authors obtain the following total loss

L(X_S, Y_S, X_T) = L_C(X_S, Y_S) + λ L_DC(X_S, X_T) + γ L_SM(X_S, Y_S, X_T)

where λ and γ control the balance between the classification loss, the domain confusion loss and the semantic loss. In the article, a moving average centroid alignment algorithm is presented that aligns the centroids of the same class in different domains to achieve semantic transfer for UDA.

§.§ Fixbi for UDA

The purpose of the article "Fixbi: Bridging domain spaces for unsupervised domain adaptation" by Jaemin Na, Heechul Jung et al. <cit.> is to propose a fixed ratio-based mixup method to address the problem of large domain discrepancies. The authors mix up images and then feed them into neural networks to achieve greater reliability in learning from corrupted labels. It is proposed to use two predetermined mixup ratios λ_sd and λ_td for the source and target domain, respectively.
Denoting the input samples and their labels for the source and target domain as (x_i^s, y_i^s) and (x_i^t, ŷ_i^t), the authors define the mixup configurations in the following way:

x̃^st_i = λ x_i^s + (1 - λ) x_i^t

ỹ^st_i = λ y_i^s + (1 - λ) ŷ_i^t,

where λ ∈ {λ_sd, λ_td} with λ_sd + λ_td = 1, and ŷ_i^t denotes the pseudo-labels of the target samples. By leveraging the fixed ratio-based mixup, two neural networks with different perspectives are constructed: the "source-dominant model" (SDM) and the "target-dominant model" (TDM) (see Figure <ref>). The SDM provides robust supervision for the source domain but relatively weak supervision for the target domain, while the TDM has strong supervision for the target domain but weaker supervision for the source domain. Thus, denoting by p(y|x̃^st_i) the predicted class distribution, the fixed ratio-based mixup loss is defined as

L_fm = (1/B) ∑_i=1^B ŷ^st_i log(p(y|x̃^st_i)),

where ŷ^st_i = arg max p(y|x̃^st_i) and B is the size of a mini-batch. In order to build connections between the source and target domains, a confidence-based learning approach is suggested, whereby one model educates the other using positive pseudo-labels, or penalizes itself using negative pseudo-labels. Positive pseudo-labels are labels whose predictions are above a specific threshold; the authors use them to train the second model with a conventional cross-entropy loss. Thus, denoting by p and q the output distributions of the two models, the bidirectional matching loss is

L_bim = (1/B) ∑_i=1^B 1(max p(y|x_i^t) > τ) ŷ_i^t log(q(y|x_i^t)),

where ŷ_i^t = arg max p(y|x_i^t). In contrast, a negative pseudo-label refers to the top-1 label predicted by the network with a confidence below the threshold τ. The self-penalization loss is defined as follows:

L_sp = (1/B) ∑_i=1^B 1(max p(y|x_i^t) < τ) ŷ_i^t log(1 - p(y|x_i^t))

Furthermore, the threshold is changed adaptively during training. In addition, the following expression is introduced:

L_cr = (1/B) ∑_i=1^B ‖p(y|x̃^st_i) - q(y|x̃^st_i)‖_2^2

which represents a consistency regularization that guarantees stable convergence during the training of both models.

§.§ Spherical Space DA with Pseudo-label Loss

Now we move to the next article, "Spherical Space Domain Adaptation with Robust Pseudo-label Loss" by Xiang Gu, Jian Sun, and Zongben Xu <cit.>. The authors propose a spherical space representation of the data, which allows them to obtain more effective feature extraction and better adaptation across domains. One approach to improving performance under differences in the data distribution between the source and target domain is to use pseudo-labels. However, pseudo-labels can be problematic in the presence of noisy or incorrect labels. To tackle this problem, the authors map the data to a high-dimensional sphere and introduce a new loss function, called the robust pseudo-label loss, which is designed to address the problem of noisy or incorrect labels in the target domain. In Figure <ref> we can see the spherical domain adaptation method. Domain-invariant features are learned by adversarial training, entirely in the spherical feature space. The feature extractor F normalizes the features so that they map onto a sphere. The classifier C and discriminator D are defined in the spherical feature space, consisting of spherical perceptron layers and a spherical logistic regression layer (see Figure <ref>).
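As a small aside, the "map the features onto a sphere" step just described amounts, in its simplest reading, to an L2 normalization of the extracted feature vectors. The sketch below is my own minimal illustration of that idea only; the spherical perceptron and spherical logistic-regression layers of the paper are not reproduced here.

```python
import torch.nn as nn
import torch.nn.functional as F

class SphericalFeatures(nn.Module):
    """Wrap a backbone and project its output features onto the unit sphere."""
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone

    def forward(self, x):
        f = self.backbone(x)
        # L2-normalise every feature vector so that ||f||_2 = 1,
        # i.e. all representations lie on a hypersphere
        return F.normalize(f, p=2, dim=1)

# Example (the backbone is assumed to output flat feature vectors):
# sphere_features = SphericalFeatures(backbone)(images)
```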
Although the use of a spherical element reduces the feature dimension by one, it simplifies the domain adaptation process by eliminating differences in norms. The authors define the spherical adversarial training loss as follows:

L = L_bas(F, C, D) + L_rob(F, C, ϕ) + γ L_ent(F)

This loss consists of three parts: a basic loss, a robust pseudo-label loss and a conditional entropy loss. Let's start with the first one. To align features, the authors utilize a basic loss which is defined as an adversarial domain adaptation loss:

L_bas(F, C, D) = L_src(F, C) + λ L_adv(F, D) + λ' L_sm(F),

where L_src is the cross-entropy loss on the source domain, L_adv is an adversarial training loss and L_sm is a semantic loss. The conditional entropy loss is used to keep the learned features away from the classification boundary:

L_ent(F) = (1/N_t) ∑_j=1^N_t H(C(F(x_j^t))),

where H(·) denotes the entropy of a distribution. Additionally, the authors propose the robust pseudo-label loss to increase the robustness of the model. Denote by ỹ_j^t = arg max_k [C(F(x_j^t))]_k the pseudo-label of x_j^t, where [·]_k means the k-th element. To account for the precision of the pseudo-labels, a new random variable z_j ∈ {0, 1} is introduced for each pair (x_j^t, ỹ_j^t), specifying whether the label is correct (1) or not (0). Let the probability of correct labeling be P_ϕ(z_j = 1 | x_j^t, ỹ_j^t), with ϕ denoting its parameters; the robust loss is then defined as follows:

L_rob(F, C, ϕ) = (1/N_0) ∑_j=1^N_t w_ϕ(x_j^t) 𝒥(C(F(x_j^t)), ỹ_j^t),

where N_0 = ∑_j=1^N_t w_ϕ(x_j^t) and 𝒥(·, ·) is the mean absolute error (MAE). The weight w_ϕ(x_j^t) is defined through the posterior probability of correct labeling:

w_ϕ(x_j^t) = γ_j if γ_j ≥ 0.5, and 0 otherwise,

where γ_j = P_ϕ(z_j = 1 | x_j^t, ỹ_j^t). By utilizing a Gaussian-uniform mixture model in the spherical space based on pseudo-labels, the authors model the probability P_ϕ(z_j = 1 | x_j^t, ỹ_j^t) as a function of the feature distance between the data point and the center of the corresponding class. Thus, target-domain samples with a probability of correct labeling below 0.5 can be discarded. For further details on computing the posterior probability, please refer to the article <cit.>.

§.§ DA with Invariant Representation Learning

Next, we move on to the article "Domain Adaptation with Invariant Representation Learning: What Transformations to Learn?" by Stojanov, Petar, et al. <cit.> The researchers focus on the conditional shift scenario, where the data-generating process is utilized to (i) explain why two distinct encoding functions are required to infer the latent representation, (ii) improve an implementation of these functions, and (iii) impose meaningful structure on the latent representation Z to increase prediction accuracy in the target domain. Let's consider the data-generating process shown in Figure <ref> to understand what information is required for learning. The label Y is generated first from its prior distribution P(Y). Then, the invariant representation Z is generated from Y through P(Z|Y), and X is generated from P(X|Z; θ_X), where θ_X represents the changing parameters of P(X|Y) across domains. We can consider Z as a latent representation of our data. The variable θ_X may correspond to environment-specific changes that are irrelevant for predicting the class Y. Generally speaking, Z is conditionally dependent on θ_X given X, although they may be marginally independent.
Therefore, to recover Z given X, the information of θ_X should also be considered in the transformation (see detailed in the article to understand clearly how authors measure the influence of θ_X). The authors made two key observation associated with the data-generating process. Firstly, the encoder function ϕ requires θ_X as an input in addition to X. Secondly, assuming that θ_X has minimal influence on the relationship between X and Z, allowing us to use a single encoder ϕ(X, θ_X) instead of two separate encoders. A decoder function ϕ that restricts the influence of θ_X, acting as a regularizer on the encoder ϕ, in order to retain important semantic information. Thus, the authors proposed a domain-adaptation network, which is shown in the Figure <ref>, where θ_X ∈{θ_X^S, θ_X^T} parameters for source and target domains respectively. §.§ Domain Adaptation for Segmentation with CBST The main purpose of the paper "Domain Adaptation for Semantic Segmentation via Class-Balanced Self-Training" written by Zou, Yang, et al. <cit.> propose a new UDA framework for semantic segmentation based on iterative self-training procedure. A novel technique, referred to as Class-Balanced Self-Training (CBST), has been suggested by the authors, which aims to adapt the segmentation model from the source domain to the target domain by leveraging unlabeled target data. In the Figure <ref>, the authors present a structure and the results of their deep self-training framework using two datasets: GTA 5 <cit.> and Cityscapes <cit.>. The CBST approach is based on two main components: a class-balancing strategy and a self-training algorithm. The class-balancing strategy aims to address the problem of class imbalance between the source and target domains, which can negatively impact the performance of the segmentation model. The authors change the loss function using parameters that determine the proportion of selected pseudo-labels due to balance the class distribution during the self-training process. Furthermore, when the images in the source and target domains are similar, spatial prior knowledge can be effectively utilized to adapt models. For this purpose, the authors count the class frequencies in the source domain using Gaussian kernel. The experimental results show that the CBST approach outperforms several state-of-the-art unsupervised domain adaptation methods for semantic segmentation. § CONTRIBUTION The "Contribution" section of the thesis highlights the developed new method called DannFixbi, which combines the Fixbi approach and the backpropagation approach. In an attempt to implement the state-of-the-art method, extensive research has been conducted, and existing approaches have been implemented. The decision to combine these two approaches was based on their respective strengths and the potential for mutually beneficial interaction between them. By incorporating the Fixbi technique, which addresses domain shift, and leveraging the benefits of backpropagation, DannFixbi aims to enhance the performance and robustness of domain adaptation in the field of images. The development of DannFixbi represents an original contribution to the field. This new method consists of two neural networks, which are trained using a modified version of the Fixbi approach. To enhance the performance of this method, two domain classifiers are added to each of these two networks. Two different approaches have been explored for incorporating these domain classifiers. 
The first approach involves adding a domain classifier to each neural network (see Figure <ref>). During training, images obtained by mixing from the source and target domains with predefined mixup ratios are fed into these classifiers. For mixed images, the following loss functions is used: L_dom = α L_ds(X̂, Y_s) + (1 - α) L_dt(X̂, Y_t) , where X̂ is mixed images, Y_s is source domain labels, Y_t is target domain labels and α∈{λ_sd, λ_td}. L_ds and L_dt presents cross entropy loss for source and target domains, respectively. The second approach is to use a domain classifier for each net with images from the source and target domains without any mixing, similar to the backpropagation method (see Figure <ref>). The second approach utilizes the following loss function for domain classification: L_dom = L_d(X_st, Y_st) , where X_st, Y_st denotes source and target images and domain labels, respectively, and L_d is a cross entropy loss. The total loss for the new method is calculated as the sum of the loss from the Fixbi method and the domain loss described earlier: L_total = β L_fixbi + γ L_dom Here, β and γ represent the weights assigned to the Fixbi loss and the domain loss, respectively. The values of these weights determine the relative importance of each component in the overall loss calculation. Further, unless otherwise stated, the values of alpha and beta are assumed to be equal to 1. The Fixbi loss, denoted as L_fixbi, is composed of several summands described in the equations <ref> – <ref>: L_fixbi = L_fm + L_sp + 1{e > k}(L_bim + L_cr) where e denotes a current epoch, and k is warm-up epochs. To establish independent characteristics for the two networks, it is introduced a warm-up period of k epochs. During this phase, each network is trained separately using the fixed ratio-based mixup and self-penalization techniques. Once an enough amount of training has been completed, bidirectional matching loss is added, which helps networks train collaboratively, exchanging knowledge and benefiting from each other's insights. The new method, referred to as DannFixbi, demonstrates improved performance and robustness in unsupervised domain adaptation for image analysis. This contribution represents a unique approach to UDA, offering valuable insights and potential for future developments in this field. All the results obtained are presented in the "Experiments" section. § EXPERIMENTAL SETUP In this part, we start with description of different datasets that are commonly used in transfer learning. Then, we will continue with implementation details and experiments that have been conducted. Code is available at https://github.com/Jetwev/domain-adaptationhttps://github.com/Jetwev/domain-adaptation. §.§ Datasets The most popular datasets are Office-31, ImageCLEF-DA, Office-Home, DomainNet and VisDA-2017. Detailed discussion of each of them is given below: * Office-31 <cit.> consists of 4,110 images categorized into 31 classes, which are distributed across three separate domains: Amazon (A), Webcam (W), and Dslr (D). * ImageCLEF-DA, utilized in <cit.>, includes three distinct domains: Caltech-256 (C), ImageNet ILSVRC 2012 (I), and Pascal VOC 2012 (P). There are 600 images in each domain and 50 images for each category. * Office-Home <cit.>, includes four absolutely different domains: Artistic images (Ar), Clip Art (Cl), Product images (Pr) and Real-World images (Rw). This dataset contains 15 500 images in 65 object classes, which makes it more complex than Office-31. 
* VisDA-2017 <cit.> consists of 12 classes shared between two very different domains: Synthetic and Real. It contains synthetic images (training set) and real-world images (test set). The dataset was designed to have a large domain gap, which makes it a challenging benchmark for domain adaptation methods. * DomainNet <cit.> is a large-scale visual recognition dataset designed to evaluate domain adaptation algorithms, which consists of almost 600 thousand images and includes 345 classes. §.§ Implementation details At the beginning, it was necessary to start with some approaches to check their performance and have a possibility to compare results. Four different methods described in the papers have been chosen for the study: * Source only is a method where a model is trained solely on the source domain data without any adaptation to the target domain. * Domain-Adversarial Neural Network (Dann) is domain adaptation technique that aims to learn a domain-invariant feature representation by aligning the feature distributions of the source and target domains. The architecture of Dann consists of three components: a feature extractor network, a label predictor network, and a domain classifier network, and is described in more details in section 2.1. * Moving Semantic Transfer Network (Mstn) The key idea behind Mstn is to add semantic transfer loss to the Dann approach. In the section 2.2, it is proposed to use average centroid alignment for aligning the feature distributions of the source and target domains. The architecture is the same as in the Dann method. * Fixbi is the approach described in details in the section 2.3. The main idea is to train two neural networks, allowing models to learn from each other or on their own results. For this purpose, the authors add bidirectional matching and self-penalization losses. CNN architectures. For all approaches, pretrained Resnet50 <cit.> is utilized as the backbone network. The weights for the neural network can be downloaded from this https://download.pytorch.org/models/resnet50-19c8e357.pthlink. Resnet50 has been pretrained on large image datasets such as ImageNet, which means that the network has already learned to recognize a wide range of features in images. Resnet50 is a convolutional neural network architecture consisting of 50 layers. This is a variant of the Resnet family of neural networks, which are designed to solve the vanishing gradient problem in deep neural networks. Resnet networks achieve this by using short connections between layers, which allow gradients to move more easily during backpropagation. Resnet50 is a widely used architecture in many articles, which makes it a good choice for research. Resnet50 is used as a Feature extractor in all considering methods. Label predictor is a simple network that consists of two fully connected layers (2048 → 256 →number of classes). Domain classifier architecture represents several fully connected layers with a ReLU activation function and dropouts between each two fully connected layers. Using dropouts can reduce the sensitivity of the model to specific features in the input data and encourage the model to learn more generalizable features. This can lead to better performance on new, unseen data and can prevent overfitting. Learning rate schedulers. Learning rate is an important hyperparameter that determines the step size at which the optimizer updates the model's parameters during training. 
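Before turning to learning-rate schedules, the network architecture just described can be summarised in code. The sketch below is my own illustration: the 2048 → 256 → classes label predictor follows the text, while the hidden widths of the domain classifier, the dropout probability and the ReLU between the predictor's layers are assumptions.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 31     # Office-31
FEATURE_DIM = 2048   # dimension of the pooled ResNet-50 features

def build_feature_extractor():
    # ImageNet-pretrained ResNet-50 with its classification head removed
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    backbone.fc = nn.Identity()
    return backbone

# Label predictor: 2048 -> 256 -> number of classes
label_predictor = nn.Sequential(
    nn.Linear(FEATURE_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, NUM_CLASSES),
)

# Domain classifier: fully connected layers with ReLU and dropout between them
domain_classifier = nn.Sequential(
    nn.Linear(FEATURE_DIM, 1024),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(1024, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(256, 2),   # source vs. target
)
```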
There are many strategies for choosing the learning rate, and it can be challenging to find the optimal one, as setting it too high can cause the model to diverge, while setting it too low can slow down the learning process. Thus, two different learning rate schedulers are utilized in this study: CustomLRScheduler and CosineAnnealingLR. The implementation of the first one follows the rule described in <cit.>:

η_p = η_0 / (1 + α· p)^β,

where p linearly increases from 0 to 1, and the values η_0, α, and β are set to 0.01, 10, and 0.75, respectively. The second one, CosineAnnealingLR, is a popular learning rate scheduler in deep learning. It systematically reduces the learning rate over multiple epochs in a cyclical manner. Initially, the learning rate starts at its maximum value and then gradually decreases until it reaches the minimum value. Upon reaching the minimum value, the cycle restarts, and the learning rate returns to its maximum value. This process continues until the end of the training, which is usually determined by the total number of epochs or a predefined stopping criterion. By starting with a higher learning rate and gradually decreasing it, the model can avoid getting stuck in local minima and converge to a better global minimum. The formula for the CosineAnnealingLR scheduler is:

η_t = η_min + (1/2)(η_max - η_min)(1 + cos((T_cur/T_max) π)),

where η_max is the initial learning rate, η_min is the minimum learning rate value, T_cur is the number of epochs since the last restart, and T_max is the total number of epochs. More detailed information about CosineAnnealingLR can be found at https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingLR.html.

Optimizers. This study uses two popular optimization algorithms – stochastic gradient descent (SGD) and adaptive moment estimation (Adam). Both algorithms are commonly employed in deep learning to optimize the parameters of a neural network and improve its performance. SGD is a simple and popular optimization algorithm that updates the weights of a model in the direction of the negative gradient of the loss function. One limitation of SGD is that it can get stuck in local minima and struggle with noisy or sparse gradients. To tackle this problem, several modifications can be used. In PyTorch, the SGD optimizer has several hyperparameters that can be tuned to improve its performance. The following parameters (besides the learning rate) are considered in this study:

* Momentum is a hyperparameter that determines how much past gradients affect the current gradient update. It helps to minimize the impact of noise and fluctuations in the gradient updates. However, setting the momentum too high can also lead to slower convergence.
* Weight decay is a form of L2 regularization that adds a penalty term to the loss function during training. This penalty term is proportional to the square of the weights in the network, which encourages the model to use smaller weights and reduces overfitting.
* Nesterov momentum is a variant of momentum that takes the momentum term into account in the calculation of the gradient. This can help to reduce oscillations and improve convergence rates, especially in high-dimensional optimization problems.

Adam is another optimization algorithm that is commonly used in deep learning. It is an extension of SGD. The key idea behind Adam is to maintain a separate adaptive learning rate for each parameter in the network, based on estimates of the first and second moments of the gradients.
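To connect these pieces, here is a compact sketch of how the custom schedule, the cosine schedule and the two optimizers could be wired together in PyTorch. It is an illustration of mine rather than the exact training code: the stand-in model, the number of steps, eta_min and the use of Nesterov momentum are assumptions.

```python
import torch.nn as nn
from torch.optim import SGD, Adam
from torch.optim.lr_scheduler import LambdaLR, CosineAnnealingLR

model = nn.Linear(2048, 31)        # stand-in for the real network
epochs, steps_per_epoch = 60, 100  # illustrative values
total_steps = epochs * steps_per_epoch

def make_custom_scheduler(optimizer, total_steps, alpha=10.0, beta=0.75):
    # eta_p = eta_0 / (1 + alpha * p)^beta with p rising linearly from 0 to 1;
    # eta_0 is the optimizer's initial learning rate, so the lambda returns
    # the multiplicative factor applied to it at each step
    return LambdaLR(
        optimizer,
        lr_lambda=lambda step: 1.0 / (1.0 + alpha * step / total_steps) ** beta,
    )

# SGD with momentum, weight decay and (assumed) Nesterov momentum
sgd = SGD(model.parameters(), lr=0.01, momentum=0.9,
          weight_decay=0.0005, nesterov=True)
custom_sched = make_custom_scheduler(sgd, total_steps)

# Alternative schedule: cosine annealing over the training run
# cosine_sched = CosineAnnealingLR(sgd, T_max=epochs, eta_min=1e-6)

# Adam keeps a separate adaptive learning rate per parameter (default settings)
adam = Adam(model.parameters(), lr=0.001)
```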
This makes Adam more effective than SGD for optimization problems with noisy or sparse gradients. However, it may not always be the best choice for every task and model architecture, so it's important to experiment with different optimization algorithms and settings to find the best approach for your specific problem. Adam is considered in this study with default parameters, more information about the implementation and usage can be found at this https://pytorch.org/docs/stable/generated/torch.optim.Adam.htmllink. Pytorch Lightning. PyTorch Lightning is a lightweight PyTorch wrapper that allows users to focus on the high-level design of their experiments and models, instead of dealing with the low-level implementation details. It provides a structured way to organize PyTorch code, making it easier to read and maintain. PyTorch Lightning offers a range of benefits that make it a good choice for deep learning researches. Firstly, it offers a modular design that makes it easy to organize code. It gives you a convenient and user-friendly interface to manage and run experiments. Moreover, all these benefits can help to improve your productivity. Secondly, PyTorch Lightning makes it easier to scale models to multiple GPUs, which can significantly reduce training times for large models. Finally, it is flexible and can be easily integrated with other PyTorch libraries. Overall, PyTorch Lightning is an excellent choice for researchers who want to focus on the research aspect of deep learning and leave the engineering components to the library. More information can be found on the official https://www.pytorchlightning.ai/index.htmlwebsite. Weights and Biases. Weights and Biases (WandB) is a platform that provides a suite of tools to help developers and data scientists track and visualize their machine learning experiments. WandB makes it easy to log, keep track of your progress and compare different experiments, visualize model performance, and collaborate with team members. One of the main advantages of WandB is its integration with popular machine learning frameworks such as TensorFlow, PyTorch, and Keras. This means that you can easily log and track your model's hyperparameters and performance metrics during training and evaluation. Moreover, WandB is a cloud-based platform, which means that users can access their experiments and data from anywhere with an internet connection and also share them with colleagues and co-workers. For more detailed information, it is recommended to visit the official https://wandb.ai/sitewebsite. Batch size. Different domains in your dataset can contain different number of images that makes your training process more complicated. To tackle this problem, it is proposed two approaches. The first one is to find the ratio of the smaller dataset size to the larger one and concatenate the smaller dataset multiple times to ensure that the number of batches is aligned during the training loop. However, it is important to emphasize that with this approach, overfitting can occur if the appropriate number of epochs is not established. This is because the smaller dataset will be fed into the model more times than the larger one (depends on the ratio). The second approach involves varying the number of images taken per batch for each domain. Applying this approach, it becomes possible to avoid concatenating the smaller dataset multiple times, which effectively reduces the amount of memory consumed. 
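The two batching strategies just outlined can be sketched as follows. This is my own illustration, not the study's implementation: the dataset objects, batch sizes and the rounding scheme are placeholders.

```python
from torch.utils.data import ConcatDataset, DataLoader

def loaders_by_repetition(source_ds, target_ds, batch_size=32):
    # Approach 1: repeat the smaller dataset so that both loaders
    # produce roughly the same number of batches per epoch.
    if len(source_ds) < len(target_ds):
        ratio = max(1, round(len(target_ds) / len(source_ds)))
        source_ds = ConcatDataset([source_ds] * ratio)
    else:
        ratio = max(1, round(len(source_ds) / len(target_ds)))
        target_ds = ConcatDataset([target_ds] * ratio)
    return (DataLoader(source_ds, batch_size=batch_size, shuffle=True, drop_last=True),
            DataLoader(target_ds, batch_size=batch_size, shuffle=True, drop_last=True))

def loaders_by_proportional_batches(source_ds, target_ds, total_batch=64):
    # Approach 2: keep the datasets as they are and give each domain a
    # batch size proportional to its share of the data (an assumption on
    # how the per-domain batch sizes are chosen).
    n_src = max(1, round(total_batch * len(source_ds) / (len(source_ds) + len(target_ds))))
    n_tgt = max(1, total_batch - n_src)
    return (DataLoader(source_ds, batch_size=n_src, shuffle=True, drop_last=True),
            DataLoader(target_ds, batch_size=n_tgt, shuffle=True, drop_last=True))
```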
It is crucial to carefully consider the number of images per batch, as choosing a value that is either too high or too low can have negative consequences. Augmentation techniques. Augmentation techniques for images are used to create variations and increase the size of the training dataset by applying a series of transformations. These techniques are widely employed in computer vision tasks, including image classification, object detection, semantic segmentation, etc. Augmentation helps increase the diversity of the dataset, leading to improved model generalization and robustness. PyTorch provides a variety of image augmentation techniques. The following transformations are used in this research: * Normalize – normalizes the image by subtracting the mean value and dividing by the standard deviation. * Resize is a function that allows you to resize an image to a specific size. * RandomCrop is a function that randomly crops a portion of the image. * CenterCrop is a transformation that allows you to perform a center crop on an image. * RandomHorizontalFlip – randomly flips the image horizontally with a specified probability. * RandomVerticalFlip - randomly flips the image vertically with a specified probability. * RandomRotation is a function that randomly rotates the image by a given angle. * ColorJitter is a transformation that allows you to adjust the brightness, contrast, saturation, and hue of the image. * ToTensor is a specific function in PyTorch that is used to convert an image into a tensor. These augmentation techniques can be applied individually or combined sequentially using the transforms.Compose function. More detailed information about transformations and their usage can be found https://pytorch.org/vision/stable/transforms.htmlhere. It's important to emphasize that the choice and combination of augmentation techniques depend on the specific task and dataset characteristics, and careful selection of them are crucial to achieve optimal results. OmegaConf. OmegaConf is a library for Python that provides a convenient and flexible way to manage complex configurations in machine learning projects. It is designed to provides a number of features that can help to simplify the configuration process. Here are some reasons why OmegaConf can be a good choice: * It is easy to use and allows developers to define nested configurations and easily access and modify configuration values. * OmegaConf supports a wide range of configuration formats, including YAML and JSON. This makes it flexible and easy to integrate in your project. * It supports type checking, which can help to catch configuration errors and improve code quality. To sum up, OmegaConf can be a good choice for Python developers who work on large and complex projects and want a flexible and powerful configuration system for their applications. Additional details can be found https://omegaconf.readthedocs.io/en/2.3_branch/here. §.§ Experiments The dataset Office-31 is used to test the approaches. This dataset consists of three domains: Amazon (A) - 2817 images, Dslr (D) - 498 images and Webcam (W) - 795 images (see Figure <ref>). The Table <ref> below highlights various key features of the dataset, including the number of classes, image resolution, task and evaluation metric. The existence of dissimilar image quantities ensures us in the importance of utilizing one of the approaches discussed in the previous section in order to avoid any information loss. 
In the first approach, where the smaller domain is concatenated, a batch size of 32 or 64 is utilized for all experiments. The second approach takes into account the size of each domain, and as a result the batch sizes used in the experiments follow Table <ref>. For all methods, two optimizers are used: SGD and Adam. However, Adam with default parameters shows worse results than SGD with lr = 0.001, momentum = 0.9 and weight decay = 0.0005. The CustomLRScheduler and CosineAnnealingLR are both used as schedulers, but it has been found that the model performs better when using the latter. Thus, all the following results have been obtained using the CosineAnnealingLR scheduler. Furthermore, all methods give the best results with the approach using a different number of images in each batch. As a result, this approach will be assumed by default, unless otherwise stated. For each pair of domains, at least three experiments were conducted for all methods, and the best results were selected. Let us start with the first method – Source only. Here, the model is trained on the source domain and then tested on the target domain. The obtained results are shown in Table <ref> (at the end of the 60th epoch). Dann is an architecture that consists not only of a feature extractor and a label predictor, but also of a domain classifier. This domain classifier helps to identify the domain of the input data and allows the model to learn domain-invariant features. Table <ref> below clearly demonstrates that the results for each pair of domains are superior to those obtained using the simple Source only method. The results are obtained at the end of the 60th epoch. The Mstn method extends Dann by adding a semantic loss. To obtain this loss, we add centroids for each class and utilize the algorithm described in Section 1.2.2. Table <ref> shows the results acquired at the end of the 60th epoch. The quality of the results tends to suffer due to the significant influence of randomness: the selection of pictures that are included in a batch determines the movements of the centroids, ultimately influencing the overall quality to a significant extent. Fixbi is a method that addresses the domain adaptation problem by training two neural networks that can help each other. As described in the article <cit.>, we define λ_sd = 0.7 and λ_td = 0.3. In this method, we cannot use the approach with different batch sizes due to the need to mix up images from the source and target domains. Therefore, the first approach, with concatenation, is utilized. The model is trained for a combined duration of 150 epochs, with the first 100 epochs designated as the warm-up period. It is important to note that 150 epochs are used, not 200, because after the warm-up period the validation score stabilizes and almost does not change. After the warm-up period, L_bim starts to be applied, which leads to a drastic change in the total accuracy. The sudden change in accuracy can occur in either a positive or a negative direction, and is often heavily influenced by randomness. One possible explanation for this phenomenon is that the model may have already found a local minimum prior to the introduction of L_bim, and the application of L_bim causes a sudden shift in gradients that propels the model out of the current minimum and into a new one. Depending on the new minimum, this can result in either an improvement or a deterioration in the model's performance. 
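For clarity, the fixed-ratio mixup that Fixbi applies between the two domains can be sketched as follows. This is only an illustration: the way the labels are combined (source labels mixed with target pseudo-labels) follows the cited Fixbi paper and is an assumption here, not a detail spelled out above.

```python
def fixbi_mixup(x_src, y_src, x_tgt, y_tgt_pseudo, lam):
    """Mix source and target samples with a fixed ratio `lam`.
    `y_src` is a one-hot source label, `y_tgt_pseudo` a pseudo-label
    predicted for the target image; all inputs are torch tensors."""
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    y_mix = lam * y_src + (1.0 - lam) * y_tgt_pseudo
    return x_mix, y_mix

# Source-dominant and target-dominant batches (lambda_sd = 0.7, lambda_td = 0.3):
# x_sd, y_sd = fixbi_mixup(x_src, y_src, x_tgt, y_pseudo, lam=0.7)
# x_td, y_td = fixbi_mixup(x_src, y_src, x_tgt, y_pseudo, lam=0.3)
```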
As we can see in Figure <ref>, for the Amazon (source) and Webcam (target) domains this method gives a significant increase in accuracy, while for DSLR (source) and Amazon (target) it shows the worst results. Figure <ref> shows the separate accuracies of the “source-dominant model” (SDM) and the “target-dominant model” (TDM) in the case of the Amazon (source) and Webcam (target) domains. The results for each pair of domains of the Office-31 dataset for the Fixbi method are shown in Table <ref>. The Fixbi method was selected for modification, wherein the ratios for SDM and TDM were adjusted and a domain classifier was added for the mixup images. This modified method is named DannFixbi. λ_sd and λ_td are set to 0.9 and 0.7, respectively. As mentioned before, one of the two approaches described in the "Contribution" section is used: domain classifiers for the mixup images, or separate domain classifiers for each domain. Table <ref> indicates that certain domains exhibit an increase in accuracy as a result of these changes. Table <ref> presents all the obtained results. The DannFixbi method yields the highest accuracy for the A→D, A→W, D→A and D→W tasks, while the Dann method achieves the best results for the W→D and W→A tasks. Additionally, it is worth noting that an overall assessment of the methods across all domains can be obtained by calculating the average accuracy (see Table <ref>). To sum up, the newly introduced method called DannFixbi outperforms all other methods in visual recognition. Appendix <ref> provides additional results for each domain, allowing for comparisons using the Wilcoxon signed-rank test to determine the statistical significance of the findings (see Tables <ref> – <ref>). The new method demonstrates statistically significant results in three tasks: A→D, A→W, and D→W, while outperforming all other methods on average (Tables <ref> – <ref>). § CONCLUSION AND PERSPECTIVES In this thesis, the focus was on exploring and implementing methods related to unsupervised domain adaptation. The Office-31 dataset was utilized for evaluating these methods and conducting a comprehensive comparison. The results obtained from the experiments were analyzed, leading to the development of the new DannFixbi method, which demonstrated the best performance compared to all the other methods presented. The Office-31 dataset provided a suitable benchmark for evaluating the effectiveness of various unsupervised domain adaptation techniques. By conducting experiments on this dataset, the performance of different methods could be objectively assessed and compared. The analysis of the results shows the strengths and weaknesses of each method, allowing a deeper understanding of their capabilities. Based on the comparative analysis, it was observed that the newly developed method showcased the best results among all the presented methods. The success of the new method can be attributed to its ability to leverage the strengths of existing techniques. By combining the backpropagation method with domain classifiers and applying the Fixbi approach, it is possible to identify common features in different domains and share knowledge and insights between networks. This collaborative approach to learning has led to higher performance and increased the overall effectiveness of the method. Overall, this thesis contributes to the field of unsupervised domain adaptation by providing an analysis of existing methods, introducing a new approach, and demonstrating the potential for improving visual recognition tasks across different domains. 
The results of this study open up opportunities for further study and development of advanced methods in the field of domain adaptation. By addressing the challenge of distribution mismatch between the labeled and unlabeled data, we can note that advances in domain adaptation can significantly benefit other related domains, especially semi-supervised learning. One line of research would be to study the generalization performance of semi-supervised learning models that have been studied under the cluster assumption <cit.>. Indeed, by explicitly considering the differences between the source and target domains, domain adaptation techniques can enhance the model's ability to adapt to new, unseen data in the target domain and can hence provide strategies to handle domain shift and improve the generalization performance of the semi-supervised learning model. Furthermore, by reducing the distribution mismatch between labeled and unlabeled data, domain adaptation methods can enable semi-supervised learning algorithms to leverage the unlabeled data more effectively. Moreover, domain adaptation methods often focus on learning robust representations that are less sensitive to noise and domain shifts. By leveraging such robust representations, semi-supervised learning algorithms can become more resilient to label noise and improve their accuracy even with limited labeled data. Domain adaptation is essentially a form of transfer learning, where knowledge learned from a source domain is transferred to a target domain. By studying domain adaptation, we can gain insights into transfer learning techniques that can be beneficial for semi-supervised learning scenarios like ranking <cit.>. These techniques can help leverage knowledge from a labeled source domain to improve the performance of a semi-supervised learning model in the target domain. § APPENDIX This appendix presents the results for each domain. In order to make comparisons, 15 experiments were conducted for each method on each task. The Wilcoxon signed-rank test was employed to analyze and assess the performance of the methods. The best-performing method for each experiment is denoted by bold values and statistical significance is denoted with a star (*).
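As a pointer to how such a comparison can be carried out, the sketch below runs a Wilcoxon signed-rank test on paired per-run accuracies of two methods on one task. The numbers are placeholders for illustration only; they are not results from this thesis.

```python
from scipy.stats import wilcoxon

# Hypothetical accuracies from 15 runs of two methods on the same task
# (e.g. A -> W); placeholder values, not measured results.
acc_dannfixbi = [0.957, 0.951, 0.948, 0.960, 0.953, 0.949, 0.955,
                 0.952, 0.958, 0.950, 0.947, 0.956, 0.954, 0.951, 0.959]
acc_dann      = [0.941, 0.938, 0.945, 0.940, 0.936, 0.943, 0.939,
                 0.944, 0.937, 0.942, 0.935, 0.940, 0.938, 0.941, 0.939]

# Two-sided Wilcoxon signed-rank test on the paired per-run differences.
stat, p_value = wilcoxon(acc_dannfixbi, acc_dann)
print(f"W = {stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
```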
http://arxiv.org/abs/2307.05772v1
20230711200035
Random-Set Convolutional Neural Network (RS-CNN) for Epistemic Deep Learning
[ "Shireen Kudukkil Manchingal", "Muhammad Mubashar", "Kaizheng Wang", "Keivan Shariatmadar", "Fabio Cuzzolin" ]
cs.LG
[ "cs.LG", "stat.ML" ]
Random-Set Convolutional Neural Network (RS-CNN) for Epistemic Deep Learning ================================================================== Machine learning is increasingly deployed in safety-critical domains where robustness against adversarial attacks is crucial and erroneous predictions could lead to potentially catastrophic consequences. This highlights the need for learning systems to be equipped with the means to determine a model’s confidence in its prediction and the epistemic uncertainty associated with it, `to know when a model does not know’. In this paper, we propose a novel Random-Set Convolutional Neural Network (RS-CNN) for classification which predicts belief functions rather than probability vectors over the set of classes, using the mathematics of random sets, i.e., distributions over the power set of the sample space. Based on the epistemic deep learning approach, random-set models are capable of representing the `epistemic' uncertainty induced in machine learning by limited training sets. We estimate epistemic uncertainty by approximating the size of credal sets associated with the predicted belief functions, and experimentally demonstrate how our approach outperforms competing uncertainty-aware approaches in a classical evaluation setting. The performance of RS-CNN is best demonstrated on OOD samples where it manages to capture the true prediction while standard CNNs fail. § INTRODUCTION Despite its recent rise in popularity, in its current form, artificial intelligence (AI) cannot confidently make predictions robust enough to stand the test of data generated by processes different from those studied at training time, even by tiny details, as shown by `adversarial' results able to fool deep neural networks <cit.>. While recognising this issue under different names (e.g. overfitting <cit.> or model adaptation <cit.>), traditional machine learning seems unable to address it at a fundamental level. As a result, AI systems suffer from brittle behaviour, and find it hard to operate in new situations. A major cause of this problem is the widespread presence of epistemic uncertainty, i.e., uncertainty about the process generating the data itself. In machine learning, this derives from the limited representativeness of the available training sets, in terms of both quantity and quality. As few samples are collected in specific domain distributions, the learnt models have no way to effectively model domain variation. Among different sources of uncertainty in machine learning, epistemic uncertainty - which arises when data is of insufficient quantity or quality - can crucially affect trustworthy model-based inference. There have been many advances towards quantifying epistemic uncertainty in deep learning models over the past years. The machine learning community recognises the problem. This has given rise to several proposals for estimating the uncertainty associated with network predictions, including Bayesian Dropout <cit.>, distance-aware priors <cit.> and the use of random fuzzy sets <cit.>. An “evidential" deep learning method <cit.> for uncertainty quantification in classification using the Dirichlet distribution and, more recently, epistemic neural networks <cit.> as a generalisation of Bayesian neural networks have been proposed. None of these methods, however, fully and explicitly capture the epistemic uncertainty stemming from the data issue. 
Some of them rely on prior knowledge <cit.>, whereas others require setting a desired threshold on the metrics <cit.>. It has been argued by some that classical probability theory is, in fact, not equipped to model “second-level" uncertainty on the probabilities themselves. This has led in the past to the formulation of numerous (epistemic) uncertainty calculi <cit.>, starting from de Finetti's pioneering work on subjective probability <cit.>, and including possibility theory <cit.>, probability intervals <cit.>, credal sets <cit.>, random sets <cit.> and imprecise probability <cit.>. In this paper, we propose to model epistemic uncertainty using a random set representation <cit.>. As we focus on finite target spaces in a classification setting, we will model uncertainty using belief functions <cit.>, the finite incarnation of random sets <cit.>. The theory of belief functions or theory of evidence <cit.> is a generalisation of Bayesian inference <cit.> since classical (discrete) probabilities are simply special belief functions and Bayes' rule is a special case of Dempster's rule of combination. Crucially, in the theory of evidence, priors are not needed to start the inference process, avoiding the selection bias risks that can seriously condition Bayesian reasoning <cit.>. An interesting geometric approach to uncertainty measures, and belief functions in particular, has been proposed by one of us <cit.>. To model uncertainty using belief functions, we propose a novel Random-set Convolutional Neural Network based on the principles of the epistemic deep learning concept. Epistemic deep learning <cit.> argues that a deep neural network producing an outcome for (selected) subsets of the target space is an intrinsically more faithful representation of the epistemic uncertainty associated with the limited quantity and quality of the training data. As they assign probability values to sets of outcomes directly, they naturally model the fact that observations almost invariably come in the form of sets. In an image classification setting, this could be interpreted as a test image being mapped to a set of classes, rather than a single class, when there is uncertainty about the true class. By representing classes as sets, set-valued classification can handle imprecise data more effectively and provide a richer understanding of the underlying uncertainty. It enables the classification model to capture multiple possible interpretations or predictions for each instance, allowing for more robust decision-making and improved performance in scenarios where precise labels may not be available or appropriate. Contributions The following are the key contributions of our paper: * A novel Random-set Convolutional Neural Network (RS-CNN) classifier for uncertainty estimation based on the principle of epistemic deep learning, outputting Belief or Mass predictions for sets of classes, training using suitable loss functions generalising classical cross-entropy. * A method for selecting a fixed budget of focal sets (sets with non-zero probability, associated with the network's output neurons) from the power set of classes, thus ensuring scalability. * A method for estimating the epistemic uncertainty of the prediction in terms of the size of the credal set (set of probability vectors) consistent with a predicted belief function. 
* Experimental validation demonstrating the RS-CNN model outperform other uncertainty-aware models and exhibit superior accuracy on out-of-distribution (OOD) samples compared to the standard CNN. § RELATED WORK Researchers have recently taken some tentative steps to model uncertainty in machine learning and artificial intelligence. Scientists have adopted measures of epistemic uncertainty <cit.> to refuse or delay making a decision <cit.>, take actions specifically directed at reducing uncertainty (as in active learning <cit.>), or exclude highly-uncertain data at decision making time <cit.>. Networks which abstain from making predictions <cit.> or the use of Gaussian processes <cit.> have been proposed <cit.>. Epistemic learning relates to both approaches which learn sets of models <cit.> (although differing in its rationale and the mathematics involved) and to approaches in domain adaptation which employ minimax optimisation to learn models adapted to data generated by any probability distribution within a “safe” family <cit.>. The Naive Credal Classifier (NCC) extends the naive Bayes classifier with imprecise probabilities represented as class sets <cit.>. Other methods include ϵ-contamination <cit.>, non-probabilistic uncertainty <cit.>, and imprecise decision theory <cit.>. Rough set-based classification approaches have also been proposed <cit.>. The use of belief functions in uncertainty estimation has been explored previously in several evidential clustering methods such as evidential c-means (ECM) <cit.> and neural-network based evidential clustering (NN-EVCLUS) <cit.>, in classification fusion <cit.>, and in evidential neural networks for classification <cit.> and regression <cit.>. These attempts to introduce concepts from uncertainty theory into machine learning have so far achieved limited impact, possibly because of the lack of a solid underlying principle. Bayesian approaches, pioneered by <cit.>, have established a link between Bayesian neural networks and Gaussian processes. Approximations of full Bayesian inference have been sought using MCMC <cit.>, variational inference <cit.> and dropout VI <cit.>. Recent work has addressed challenges such as computational cost, model priors, and tractable function-space variational inference <cit.>. Bayesian models have been deployed in various domains, including multi-task learning in autonomous vehicles <cit.>, histopathological image classification <cit.> for cancer diagnostics and medical image segmentation <cit.>. However, they still pose some challenges since Bayesian models assume that the true data-generating process lies within the model's hypothesis space. If the model prior is misspecified and fails to capture the true underlying process adequately, the estimated epistemic uncertainty may not be reliable. Our approach, on the other hand, does not require sampling during inference nor it requires us to choose a prior, thereby reducing the computational complexity of the model when compared to Bayesian inference and avoiding bias. -0.1in § RANDOM-SET CONVOLUTIONAL NEURAL NETWORK Based on the principles of epistemic deep learning <cit.> mentioned in Sec. <ref>, we propose the following architecture and training objectives for a novel class of models called Random-Set convolutional neural networks (RS-CNN) that can be trained to output scores for sets of outcomes, and to encode the epistemic uncertainty associated with the prediction in the random set framework. 
§.§ Random sets and Belief functions Let us denote by Ω and Θ the sets of outcomes of two different but related problems Q_1 and Q_2, respectively. Given a probability measure P on Ω, we want to derive a `degree of belief' Bel(A) that A⊂Θ contains the correct response to Q_2. If we call Γ(ω) the subset of outcomes of Q_2 compatible with ω∈Ω, ω tells us that the answer to Q_2 is in A whenever Γ(ω) ⊂ A. The degree of belief Bel(A) of an event A⊂Θ is then the total probability (in Ω) of all the outcomes ω of Q_1 that satisfy the above condition <cit.>: Bel(A) = P({ω | Γ(ω) ⊂ A }) = ∑_ω∈Ω : Γ(ω)⊂ A P({ω}). The mapping Γ : Ω→ 2^Θ = {A ⊆Θ} is called a multivalued mapping from Ω to Θ. Such a mapping, together with a probability measure P on Ω, induces a belief function on 2^Θ. In Dempster's original formulation, then, belief functions are objects induced by a source probability measure in a decision space for which we do not have a probability, as long as there exists a 1-many mapping between the two. Belief functions can also be defined axiomatically on the domain of interest (frame of discernment) Θ, without making reference to multivalued mappings. A basic probability assignment (BPA) <cit.> is a set function <cit.> m : 2^Θ→[0,1] s.t. m(∅)=0 and ∑_A⊂Θ m(A)=1. In Dempster's interpretation, the `mass' m(A) assigned to A is in fact the probability P({ω∈Ω : Γ(ω) = A}). Shafer and Smets <cit.>, amongst others, have supported a view of mass functions as independently defined on Θ. Subsets of Θ whose mass values are non-zero are called focal elements of m. §.§ Architecture of Random-Set Convolutional Neural Networks Consider a simple classification neural network. A classifier e is a mapping from an input space X to a categorical target space Y=[N], where N denotes the number of classes, e: X → [N]. In set-valued classification, on the other hand, e is a mapping from X to the set of all subsets of [N], the powerset ℙ(N), e: X → 2^[N]→ℙ(N). As shown in Figure <ref>, a Random-set CNN predicts a mass/belief function for each input data point, rather than softmax probabilities as in a traditional CNN. For N classes, a random-set CNN will have 2^N outputs, which is not computationally feasible for large values of N due to the exponential complexity. Therefore, instead of using all the 2^N subsets as outputs, only K relevant subsets (A_K focal sets) are chosen out of the 2^N subsets available. Budget. To obtain these K focal subsets, given a training set, we extract a feature vector for each training sample from the penultimate layer of a trained standard CNN with N outputs in the last layer. The t-SNE (t-Distributed Stochastic Neighbor Embedding) <cit.> algorithm is used to reduce the dimensionality of the feature vectors to 3 dimensions. Then, a Gaussian Mixture Model (GMM) is fit to the reduced feature vectors of each class. Note that our approach is agnostic, as t-SNE could be replaced with any other dimensionality reduction technique, including autoencoders. For each class c, we define an ellipsoid <cit.> which covers 95% data from that class c using the eigenvectors and eigenvalues of the covariance matrix Σ_c and the mean vector μ_c, ∀ c ∈ N P_c ∼𝒩(x_c;μ_c, Σ_c) obtained from the GMM. The class ellipsoids are plotted in a 3-dimensional space and the overlap of each subset in the power set of N is computed. 
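As an illustration of the per-class ellipsoid construction just described, a minimal sketch is given below (scikit-learn names are used for convenience; this is not the authors' exact pipeline, and the chi-square scaling used to reach roughly 95% coverage is an assumption about how the ellipsoids are sized). The overlap-based selection of focal sets from these ellipsoids is described next.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.mixture import GaussianMixture
from scipy.stats import chi2

def class_ellipsoids(features, labels, n_classes, coverage=0.95):
    """`features`: (n_samples, d) penultimate-layer activations of a trained CNN;
    `labels`: class index of each sample. Returns one ellipsoid per class."""
    z = TSNE(n_components=3).fit_transform(features)   # reduce to 3-D
    ellipsoids = {}
    for c in range(n_classes):
        gmm = GaussianMixture(n_components=1).fit(z[labels == c])
        mu, sigma = gmm.means_[0], gmm.covariances_[0]
        # eigen-decomposition gives the principal axes; the chi-square quantile
        # scales them so the ellipsoid covers ~95% of the class samples
        evals, evecs = np.linalg.eigh(sigma)
        radii = np.sqrt(evals * chi2.ppf(coverage, df=3))
        ellipsoids[c] = (mu, evecs, radii)
    return ellipsoids
```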
As it is computationally infeasible to compute overlap for all 2^N subsets, we start doing so from cardinality 2 and use early stopping when increasing the cardinality further does not alter the list of most-overlapping sets of classes. We choose the top-K subsets (A_K) with the highest overlapping ratio, computed as the intersection over union for each subset A in ℙ(N), overlap(A) = |∩_i ∈ A A_i| / |∪_i ∈ A A_i|, A ∈ ℙ(N). These K non-singleton focal sets, along with the N singleton sets associated with the original classes, form our new set of classes. Encoding. Set-valued observations for random-set CNNs can be defined in two ways: (i) by encoding the belief value Bel(A) for each subset A in the power set ℙ(N) of N classes: ŷ = Bel(A), A ∈ ℙ(N), or (ii) by encoding its mass value: ŷ = m(A), A ∈ ℙ(N). In the belief value encoding, the ground truth vector is GT_Bel = {Bel_1, Bel_2, …, Bel_2^N}, where Bel_i is the belief value for i ∈ ℙ(N). In the mass value encoding, the ground truth becomes GT_m = {m_1, m_2, …, m_2^N}, where m_i is the mass value for i ∈ ℙ(N). Note that the trained random-set CNN outputs a mass/belief score for focal sets in A_K only. In this paper, we focus primarily on the Belief RS-CNN (B-RS-CNN) on precise data, as the belief value encoding has a more complex structure and the potential to “teach” the network the set structure. Unlike the Mass RS-CNN (M-RS-CNN), where a mass value is considered "true" only when the subset A contains just the true class, the belief value Bel(A) is deemed "true" for all instances where the true class is present in subset A. Consequently, the mass value encoding resembles a padded version of the original target vector in a traditional CNN, with zeros introduced for all non-singleton sets. The full potential of M-RS-CNN can be demonstrated only in an imprecise data setting, where we have ambiguous ground truth. We briefly report the accuracies for M-RS-CNN in Table <ref> and provide the formulation of the generalisation of the Kullback-Leibler (KL) divergence to mass functions as a loss for M-RS-CNN (proof in Appendix A.1). For the remainder of this paper, we refer to the belief random-set CNN as RS-CNN. §.§.§ Belief Random-Set Convolutional Neural Network The belief random-set CNN is trained on the belief-encoded ground truth representation for A_K subsets, where a belief value Bel(A) is 1 iff the true class is contained in the subset A. This corresponds to full certainty that the element belongs to that set and there is complete confidence in this proposition. Since the belief RS-CNN mimics the structure of a multi-label classification problem, with respect to the ground truth alone, Binary Cross-Entropy (BCE) (Eq. <ref>) is used as the loss function with sigmoid activation to get independent belief functions for each subset A_K ∈ ℙ(N), ℒ_BCE = -[y log(ŷ_Bel) + (1-y) log(1-ŷ_Bel)]. The prediction of valid belief functions is facilitated by incorporating a specific term in the loss function that enforces the non-negativity of the mass functions computed using the Moebius inversion formula (Eq. <ref>). This term ensures that the model only learns valid belief functions, as any predicted belief function B̂êl̂(A) with negative mass m(A) is considered invalid. The Moebius inversion formula to compute masses from belief functions is m(A) = ∑_B ⊆ A (-1)^|A ∖ B| B̂êl̂(B). Belief functions assign non-negative masses to subsets of a frame of discernment, where the sum of masses over all subsets equals 1. A mass sum term M_s and a mass regularization term M_r (Eq. 
<ref>), M_s = max(0, (1/b_size) ∑_i=1^b_size ∑_A m_i(A) - 1), M_r = (1/b_size) ∑_i=1^b_size ∑_A max(0, -m_i(A)), are thus added to ℒ_BCE. M_r ensures that the loss function penalizes the model for predictions with negative mass values, effectively encouraging non-negative masses, while M_s ensures that the sum of all masses computed from the predicted belief functions is less than or equal to 1. The penalties are accumulated over the batch in M_r (Eq. <ref>). Additionally, hyperparameters α and β are added to the binary cross-entropy loss to control the relative importance of the BCE with respect to M_r and M_s. The final loss for the belief RS-CNN then becomes ℒ_B-RS = ℒ_BCE + α M_r + β M_s. § UNCERTAINTY ESTIMATION IN RANDOM-SET CNN Pignistic prediction. A prediction in the form of a precise probability vector can be obtained by computing Smets' pignistic probability transform (Eq. <ref>) <cit.> for each class c using the predicted masses of the focal sets in the budget. The pignistic probability, BetP, of each class c is computed as the sum of the masses of the focal sets A that contain c, each divided by its cardinality. This transformation redistributes the mass values to assign a probability to each class c, BetP(c) = ∑_A ∋ c m(A)/|A|. The most likely class can then be extracted from the pignistic prediction, and accuracy can be calculated in the standard way. Entropy of pignistic probability. The Shannon entropy of the pignistic probability estimate BetP can then be used to assess the uncertainty associated with the predictions. A higher entropy value (Eq. <ref>) indicates greater uncertainty in the model's predictions, H_RS = -∑_c ∈ N BetP(c) log BetP(c). Size of credal set. A random-set CNN predicts a belief function (a finite random set) on the target space. A belief function, in turn, is associated with a convex set of probability distributions (a credal set <cit.>) on the space of classes. The size of this credal set thus measures the extent of the epistemic uncertainty associated with the prediction. When the model is confident, the credal set will approach a single, precise probability. When uncertainty is high, the credal set will be wide. Any probability distribution P such that P(A) ≥ Bel(A) ∀ A ⊂ Θ is said to be consistent with Bel <cit.>. Each belief function Bel thus uniquely identifies a set of probabilities consistent with it, 𝒫[Bel] = { P ∈ 𝒫 | P(A) ≥ Bel(A) }, where 𝒫 is the set of all probabilities one can define on Θ. Not all credal sets are consistent with a belief function. Credal sets associated with BFs have as vertices all the distributions P^π induced by a permutation π = { x_π(1), …, x_π(|Θ|) } of the singletons of Θ = { x_1, …, x_n }, of the form <cit.> P^π[Bel](x_π(i)) = ∑_A ∋ x_π(i); A ∌ x_π(j) ∀ j<i m(A). Such an extremal probability (<ref>) assigns to a singleton element put in position π(i) by the permutation π the mass of all the focal elements containing it, but not containing any elements preceding it in the permutation order <cit.>. Eq. (<ref>) analytically derives the finite set of vertices which identify a random-set prediction. For class c, the size of the credal set is approximated by calculating the minimum and maximum (Eq. <ref>) of the extremal probabilities over the permutations π, P^π_min = min_π P^π[Bel](x_y_c), P^π_max = max_π P^π[Bel](x_y_c), where y_c is the index of class c. 
These minimum and maximum extremal probabilities are the upper and lower bounds of the estimated probability for class c. The predicted pignistic probability estimate BetP falls within the interval of these upper and lower bounds which indicates the epistemic uncertainty associated with the prediction. Credal set width, measured as the difference between P^π_max and P^π_min, is the epistemic uncertainty estimate associated with a prediction. -0.1in § EXPERIMENTS Performance. We evaluate the performance of random-set convolutional neural networks (RS-CNN) with experiments on multi-class image classification datasets MNIST <cit.>, CIFAR10 <cit.>, Intel Image <cit.> and CIFAR-100 <cit.>, compare against state-of-the-art Bayesian and Evidential classifiers for uncertainty estimation (see Table <ref>). CNN is a standard deterministic neural network, the only model that does not include uncertainty estimation, MCDropout <cit.>, R-BNN <cit.>, F-BNN <cit.>, LB-BNN <cit.> are Bayesian models with Monte-Carlo dropout approximation, Variational inference with Local Reparameterization Trick <cit.>, Flipout estimation<cit.> and Laplace Bayesian posterior approximation respectively. ENN <cit.> is an epistemic NN classifier based on a generalisation of Bayesian method, and EDL is the Evidential deep classifier <cit.> for uncertainty estimation based on evidence theory. M-RS-CNN and B-RS-CNN are our mass and belief random-set convolutional neural networks. For fair comparison, we have used the same model architecture as RS-CNN to test all these methods. Their test accuracies are reported in Table <ref>. While we included Bayesian and Evidential learning models, it is important to note that comparing our models directly with methods like ensembles and Bayesian models that use priors may not be entirely appropriate. Model architecture. The RS-CNN architecture consists of three convolutional layers: the first layer has 32 filters of size 3×3, and the second and third layers have 64 filters of size 3×3. ReLU activation is used in all three layers. Max pooling with a pool size of (2, 2) follows the first and third convolutional layers. A dropout layer with a rate of 25% is added before the final two layers. The fully connected layer has 100 hidden units with ReLU activation, and the output layer has the same number of units as the number of focal sets, using the sigmoid activation function. In the first step, we train the RS-CNN model using the Adam optimizer with categorical cross-entropy loss over 15 epochs with learning rate 0.001 and batch size of 64. The feature vector f is extracted from the penultimate layer. We apply t-SNE for dimensionality reduction and map the class features to a 3D space. To expedite computation, we utilize all available CPU cores for parallel processing, which takes ≈7 minutes for CIFAR10 with 10 classes. The remaining computations are performed on a GPU (NVIDIA GeForce GTX 1080 Ti). Clusters or elliptical regions representing the 10 classes are calculated using mean, standard deviation, and eigenvectors obtained by fitting individual Gaussian Mixture Models (GMMs) to the embedded features of each class. For CIFAR10 with 10 classes (N=10), we have a total of 1024 sets of classes (2^N). Due to the exponential complexity of computing predictions for all 1024 sets, we select the top K most-relevant subsets, additional to N singletons. The overlapping ratio is estimated between all classes, and we choose K subsets with the highest overlapping ratios (Sec. <ref>). 
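As a concrete reference for the architecture described above, a PyTorch rendering is sketched below. The framework choice and details such as padding and the use of a lazily initialized linear layer are assumptions made for illustration, not the authors' implementation; `n_outputs` equals the number of focal sets in the budget (the N singletons plus the K selected subsets, e.g. 30 for CIFAR10).

```python
import torch.nn as nn

class RSCNN(nn.Module):
    def __init__(self, in_channels=3, n_outputs=30):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3), nn.ReLU(),
            nn.MaxPool2d(2),                      # pooling after the 1st conv layer
            nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
            nn.MaxPool2d(2),                      # pooling after the 3rd conv layer
            nn.Dropout(0.25),                     # dropout before the final two layers
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(100), nn.ReLU(),        # 100 hidden units
            nn.Linear(100, n_outputs), nn.Sigmoid(),  # belief score per focal set
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```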
In our experiment, we set K to 20, excluding the empty set and N singletons. For example, in CIFAR10, the selected non-singleton focal sets include combinations such as {'', ''}, {'', ''}, {'', ''}, and {'', '', ''}. These 20 focal sets, along with the 10 singletons, form our new set of classes A ∈ℙ(N). The belief function Bel(A) is considered "true" if the true class is present in the set. For instance, for the class '', the belief functions for subsets {''} and {'', ''} are set to 1. The model is trained using the BCE-M loss function (Eq. <ref>) over 50 epochs and a batch size of 64. Our experiments use α = 1 and β = 1. The predicted belief function B̂êl̂(A) gives the belief values for a list of classes including singletons and non-singleton focal subsets. We compute mass values from the predicted B̂êl̂(A) using Eq. <ref>. The sum of predicted masses should be around 1 depending on the rate with which the model has learnt the belief values. We compute the pignistic probabilities (Sec. <ref>) for each original class N = 10 using Eq. <ref>. Table <ref> shows the predicted belief functions, masses, and pignistic predictions for the samples. The test accuracy is calculated by comparing the pignistic prediction with the true labels. The test accuracy for Belief RS-CNN for MNIST, CIFAR10, CIFAR100 and Intel Image Classification datasets are 99.39%, 77.63%, 39.80% and 81.59%. -0.02in Uncertainty estimation. We calculate the predicted belief, mass and pignistic probability for both in-distribution and out-of-distribution(OOD) samples of the test data, noisy (random noise added to test set) and rotated (random degrees of rotation between -30 and 180). RS-CNN has significantly higher test accuracy than a standard CNN for noisy out-of-distribution MNIST data. Table <ref> shows the test accuracies for standard CNN and RS-CNN at different scales of random noise. Table <ref> shows an example of an OOD rotated MNIST image with a true label "8". The standard CNN makes a wrong prediction ("6") with 99.95% confidence and a low entropy of 0.0061. The CNN fails to perform well on OOD samples, whereas the RS-CNN predicts the belief functions for top 5 subsets {'', '', ''}, {'', '', ''}, {'', ''}, {'', ''}, and {'', ''}. It is clear from the belief prediction alone that the true element is contained in most, if not all of the subsets. We compute the mass values using Eq. <ref>, and the pignistic probabilities using Eq. <ref>. The highest pignistic probability estimate gives us a correct prediction of class "8" with a pignistic probability of 49.7% and an entropy of 2.1626. This demonstrates how RS-CNNs predict even the true class with a lower confidence in uncertain cases, in contrast to the standard CNN that provide incorrect classifications with high confidence. A distribution of confidence scores for belief random-set CNN are shown in Fig. <ref> for both incorrectly classified and correctly classified samples. In Fig. <ref>, we show how a standard CNN has high confidence for incorrectly classified samples whereas RS-CNN exhibits lower confidence and higher entropy (Fig. <ref>) for incorrectly classified samples. Fig. <ref> displays the entropy distribution for correctly and incorrectly classified samples of CIFAR10, including in-distribution, noisy, and rotated images. Higher entropy is observed for incorrectly classified predictions across all three cases. Fig. <ref> shows entropy-based uncertainty estimation for rotated OOD samples of the MNIST digit '3'. 
The RS-CNN model accurately predicts the true class at -30, 0, and 30 degrees, indicating low uncertainty. As the rotation angle increases, all models predict the wrong class, with the RS-CNN showing higher entropy than LB-BNN. ENN, an ensemble method, also exhibits higher entropy. Overall, RS-CNN provides a better measure of uncertainty. To estimate the epistemic uncertainty of a prediction, we approximate the extremal probabilities that define the vertices of the associated credal set. Instead of calculating the exact vertices, which is computationally expensive due to a large number of permutations, we calculate the upper and lower bounds of the credal set using Eq. <ref>. This approximation allows us to visualize the uncertainty by plotting the upper and lower bounds in Fig. <ref> for a subset of CIFAR10 samples. The pignistic probability estimate, BetP, falls within this credal set, and the distance between the upper (P^π_max) and lower (P^π_min) bounds represents the epistemic uncertainty. Fig. <ref> displays the estimated credal width for both correctly and incorrectly classified samples. The length of the lines reflect the epistemic uncertainty. Incorrect classifications exhibit high credal set widths, indicating higher associated uncertainty. Choice of model architecture We verify our approach on larger model architectures, specifically, Resnet-50 (see Table <ref>) and observe a higher test accuracy of 83.13% on CIFAR10, while predicting uncertainty, compared to a test accuracy of 84.21% for a standard Resnet-50. Both models were trained for 200 epochs with a varying learning rate. With better model architectures, we can achieve higher test accuracies than reported in Table <ref>. § LIMITATIONS A limitation of our approach is the selection of different subsets during each budgetting run. These subsets may vary across different runs, but there is some consistency with non-singleton focal sets where subsets such as {'', ''}, {'', ''}, {'', ''}, repeat over multiple runs. Fine-tuning can be employed to mitigate the need for separate feature extraction and training stages. Our approach has potential applications in the safety-critical domain that require uncertainty estimation such as medical diagnostics, autonomous driving, and other tasks involving image classification and analysis since it is particularly effective for predicting out-of-distribution samples and would perform better with imprecise data than precise. § CONCLUSION This paper proposes a random-set convolutional neural network for uncertainty estimation in classification using belief functions. We compute masses and pignistic probabilities from predicted belief functions and estimate uncertainty using credal set width and entropy. Our method achieves superior test accuracy compared to state-of-the-art uncertainty estimation networks and performs comparably to traditional networks without uncertainty estimation. We provide an example illustrating how our model predicts uncertainty in rotated digits. In this paper we only discussed the target-level representation of random-set CNNs (RS-CNNs). Future work involves parameter-level representation of random-set neural networks and the extension of this concept from classification to detection tasks. Detection networks have additional heads which take care of regressing bounding box coordinates in addition to labels: the research problem then becomes one of formulating suitable localisation losses in keeping with the epistemic uncertainty principle. 
Moreover, applying both mass and belief random-set CNNs to imprecise data shows promise for improved performance. § APPENDIX § EXPERIMENTAL RESULTS ON OUT-OF-DISTRIBUTION (OOD) SAMPLES We split the MNIST data into training and test sets, train the RS-CNN model using the training set, and test the model on noisy and rotated out-of-distribution test data. This is done by adding random noise (50%) to the test set of images to obtain noisy data and rotating MNIST images with random degrees of rotation between 0 and 360. In Section <ref>, we show that the test accuracy scores for RS-CNN on Noisy MNIST are higher than the accuracy of the standard CNN, giving us more correct predictions (see Table <ref>). The test accuracies for the standard CNN and RS-CNN on Rotated MNIST images are shown in Table <ref>. The samples are randomly rotated every 60 degrees, -180^∘ to -120^∘, -120^∘ to -60^∘, -60^∘ to 0^∘, etc. A fully random rotation between 0^∘ and 360^∘ also shows higher test accuracy for RS-CNN at 47.71% when compared to the standard CNN with test accuracy 45.86%. Table <ref> shows predictions for noisy and rotated out-of-distribution (OOD) MNIST samples. In cases where a standard CNN makes wrong predictions with high confidence scores, the random-set CNN manages to predict the correct class with varying confidence, verifying that the model is not overconfident in uncertain cases. For example, a noisy sample with true class '' has a standard CNN prediction of class '' with 96.9% confidence, while RS-CNN predicts the correct class {''} with 42.7% confidence. Similarly, for rotated '', the standard CNN predicts class '' with 98.8% confidence whereas RS-CNN predicts the correct class {''} with 69.9% confidence. For a rotated '', the standard CNN predicts class '' with 100% confidence. § MASS RANDOM-SET CONVOLUTIONAL NEURAL NETWORK The mass random-set CNN (M-RS-CNN) is trained on the mass-encoded ground truth representation for A_K subsets, where a mass value m(A) is "true" if subset A contains only the true class. This is due to the condition that the mass values assigned to subsets should sum up to 1 over all subsets, reflecting the total evidence available for the frame of discernment. The mass value encoding simply looks like a padded version of the original target vector in a traditional CNN, with zeros added for all non-singleton sets, allowing us to plug in the standard cross-entropy loss as the loss function. This suggests two possible ways of generalising the cross-entropy loss to random-set networks: (i) by employing suitable entropy measures for belief functions, or (ii) by generalising the concept of Kullback-Leibler (KL) divergence to them. Generalising the Kullback-Leibler divergence. The KL divergence of two probability distributions P and Q is defined as: D_KL(P||Q) = ∫_x P(x) log(P(x)/Q(x)) dx. We extend this definition to mass functions predicted by a mass random-set network defined on the target space. For KL divergence in the random set setting, the problem is best posed in the random-set interpretation of mass functions, in which the latter are measures induced in the decision space of interest by a multi-valued mapping applied to a “source” domain in which source probabilities are available (see Figure <ref>) <cit.>. Let Γ:Ω→2^Θ be the multi-valued (1-to-many) mapping from the (unknown) source space Ω to the set 2^Θ of all subsets of the decision space Θ (set of classes). 
Let m=Γ(P) and m̂=Γ(Q) be two mass functions on Θ encoding, respectively, the true class value for an input x and the corresponding epistemic prediction generated by a network. The two are in fact images of two source probabilities in the unknown source space Ω, which we call P and Q. If we use m = P(x) as the true value and m̂ = Q(x) as the predicted value of the M-RS-CNN in the standard definition of D_KL, we obtain a good approximation of the KL divergence between the associated source probabilities P and Q in the source space Ω, D_KL(m||m̂) ≈ D_KL(P||Q) (proof in <ref>). We use this generalisation of the KL divergence to mass functions as the loss for the mass RS-CNN. §.§ Approximating the KL divergence of the source probabilities in the random-set setting If we plug in the mass function encoding, where m(x) = P(x) is the true value and m̂(x) = Q(x) is the predicted value of the network, in the standard definition of D_KL, we obtain a good approximation of the KL divergence between the associated source probabilities P and Q in the source space Ω, D_KL(m||m̂) ≈ D_KL(P||Q). The KL divergence of the mass functions m and m̂ is D_KL(m||m̂) = ∑_A⊆Θ m(A) log m(A)/m̂(A). Due to the multivalued mapping of the source probabilities, the mass function m(A) corresponds to P(x∈Ω : Γ(x) = A) and m̂(A) corresponds to Q(x∈Ω : Γ(x) = A). Substituting these values in the KL divergence of mass functions, D_KL(m||m̂) = ∑_A⊂Θ m(A) log m(A)/m̂(A) = ∑_A⊂Θ ∑_x:Γ(x)=A P(x) log [∑_x:Γ(x)=A P(x) / ∑_x:Γ(x)=A Q(x)], so that D_KL(m||m̂) = ∑_A⊂Θ P(A) log P(A)/Q(A) ≈ D_KL(P||Q). Whenever each subset A ⊆ Θ is the image of a unique point x^* of Ω, D_KL(m||m̂) = D_KL(P||Q). Thus, D_KL(m||m̂) is in general a reasonable approximation of the true KL divergence of the source probabilities. §.§ Generalised entropy measures: Nguyen's entropy The question of defining a suitable entropy measure for belief functions has been rather extensively studied in the literature <cit.>. A suitable candidate should reduce to the standard Shannon entropy when applied to probability measures. We consider Nguyen’s generalisation of Shannon’s entropy <cit.> where probability values are replaced by mass values, H_n[m] = -∑_A ∈ ℱ m(A) log m(A), where ℱ is the set of focal elements of m. Minimizing Nguyen's generalisation as the loss in the random-set CNN produces satisfactory training loss metrics. Training the random-set CNN with the Kullback-Leibler (KL) divergence (Eq. <ref>) of the true and predicted masses as the loss produces a test accuracy of 99.39% for MNIST, as shown in Table <ref>. Comparable results are obtained for Nguyen's entropy (<ref>), a generalisation of Shannon's entropy, with a test accuracy of 99.10% on the MNIST dataset. § ADDITIONAL EXPERIMENTS §.§ Budgeting of sets We choose the top K relevant subsets from the 2^N sets to reduce the computational complexity, as shown in Sec. <ref>. This is done by calculating the overlap of the ellipses formed by Gaussian Mixture Models fitted on each class c. A 2D visualization of the ellipses, formed by calculating the principal axes and their lengths using eigenvectors and eigenvalues obtained from the GMMs for each class c <cit.>, is shown in Fig. <ref>. §.§ Hyperparameters α and β of the Belief RS-CNN loss In Fig. <ref>, we show the test accuracies for different values of the hyperparameters α and β in the ℒ_B-RS loss function (see Eq. <ref>). Hyperparameters α and β adjust the relative significance of the regularization terms M_r and M_s, respectively, in the ℒ_B-RS loss. 
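For reference, the loss whose hyperparameters are varied here can be written compactly in PyTorch. This is a sketch under assumptions about the tensor layout (a batch of predicted belief values over the budgeted focal sets, plus a user-supplied Moebius-inversion routine such as the one in Eq. above), not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def rs_loss(pred_bel, target_bel, moebius, alpha=1.0, beta=1.0):
    """L_B-RS = L_BCE + alpha * M_r + beta * M_s.
    `pred_bel`, `target_bel`: (batch, n_focal_sets) belief tensors in [0, 1];
    `moebius`: callable mapping predicted beliefs to masses of the same shape."""
    bce = F.binary_cross_entropy(pred_bel, target_bel)
    masses = moebius(pred_bel)                                 # (batch, n_focal_sets)
    m_r = torch.clamp(-masses, min=0).sum(dim=1).mean()        # penalize negative masses
    m_s = torch.clamp(masses.sum(dim=1).mean() - 1.0, min=0)   # keep total mass <= 1
    return bce + alpha * m_r + beta * m_s
```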
The test accuracies are calculated for CIFAR10 dataset with varying α (blue) and β (red) values, α/β = [0.5, 0.6, 0.9, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0]. Test accuracy is the highest when α and β equals 0.5. §.§ Uncertainty estimation for RS-CNN Pal's measure of uncertainty. Pal's measure of specificity <cit.> is a metric to measure the epistemic uncertainty associated with each prediction, relative to the distribution of evidence or mass across multiple sets of hypotheses, H_s = ∑_A∈ℱm(A)/|A| A higher measure of specificity indicates lower uncertainty and a fully specified belief function. For predictions where singletons have larger mass values compared to non-singletons focal sets, we encounter higher specificity and lower uncertainty. As the cardinality of the sets with higher mass values increases, the specificity value lowers, indicating a higher uncertainty. Fig. <ref> shows the relationship between specificity and cardinality, where a higher cardinality has low specificity value indicating higher uncertainty. This shows how set-valued belief observations capture the scale of uncertainty. Additionally, Fig. <ref> plots the distribution of the sum of predicted mass values across the subsets A ∈𝒫(N) where ∑_A⊂Θ m(A)=1, fulfilling the condition for valid mass functions. This is possible due to the M_s term in the ℒ_B-RS loss function (see Eq. <ref>). Entropy and belief. Fig. <ref> shows the entropy for two samples of CIFAR10, one is a slightly uncertain image, whereas the other is certain with high belief values and lower entropy. Both results are shown for K=20 classes, additional to the 10 singletons. Credal set width. Figures <ref> and <ref> plot the credal set widths for incorrectly classified and correctly classified samples over each class c in CIFAR10, respectively. The box represents the interquartile range between the 25th and 75th percentiles of the data encompassing most of the data. The vertical line inside the box represents the median and the whiskers extend from the box to the minimum and maximum values within a certain range, and data points beyond this range are considered outliers and are plotted individually as dots or circles. The box plots here are shown for 10,000 samples of CIFAR10 dataset where most samples within the incorrect classifications have higher credal set widths indicating higher uncertainty in these samples, especially for classes '', '', '', and ''.
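To make the quantities used in the paper and this appendix concrete, the sketch below implements them for a toy frame of three classes. It is illustrative only: beliefs and masses are stored as dictionaries keyed by frozensets of class indices, and the brute-force enumeration of credal-set vertices is feasible only for very small frames, which is why the paper approximates the bounds instead.

```python
import math
from itertools import permutations

def mass_from_belief(bel):
    """Moebius inversion: m(A) = sum over B subset of A of (-1)^|A - B| * Bel(B)."""
    return {A: sum((-1) ** len(A - B) * bel[B] for B in bel if B <= A) for A in bel}

def pignistic(m, classes):
    """BetP(c) = sum of m(A)/|A| over the focal sets A containing c."""
    return {c: sum(m[A] / len(A) for A in m if c in A) for c in classes}

def shannon_entropy(betp):
    return -sum(p * math.log(p) for p in betp.values() if p > 0)

def pal_specificity(m):
    """H_s = sum of m(A)/|A|; higher specificity means lower uncertainty."""
    return sum(m[A] / len(A) for A in m)

def credal_bounds(m, classes, c):
    """Min/max over permutations of the extremal probabilities P^pi for class c."""
    vals = []
    for order in permutations(classes):
        pos = {x: i for i, x in enumerate(order)}
        vals.append(sum(m[A] for A in m
                        if c in A and all(pos[x] >= pos[c] for x in A)))
    return min(vals), max(vals)

# Toy prediction on a 3-class frame with one non-singleton focal set.
bel = {frozenset({0}): 0.6, frozenset({1}): 0.1,
       frozenset({2}): 0.1, frozenset({0, 1}): 0.9}
m = mass_from_belief(bel)                           # masses: 0.6, 0.1, 0.1, 0.2
betp = pignistic(m, classes=[0, 1, 2])              # BetP = {0: 0.7, 1: 0.2, 2: 0.1}
lo, hi = credal_bounds(m, classes=[0, 1, 2], c=0)   # [0.6, 0.8], containing BetP(0)
```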
http://arxiv.org/abs/2307.04846v1
20230710183258
Saturation and multifractality of Lagrangian and Eulerian scaling exponents in 3D turbulence
[ "Dhawal Buaria", "Katepalli R. Sreenivasan" ]
physics.flu-dyn
[ "physics.flu-dyn", "cond-mat.soft", "physics.comp-ph" ]
Tandon School of Engineering, New York University, New York, NY 11201, USA Max Planck Institute for Dynamics and Self-Organization, 37077 Göttingen, Germany Tandon School of Engineering, New York University, New York, NY 11201, USA Department of Physics and the Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA Inertial range scaling exponents for both Lagrangian and Eulerian structure functions are obtained from direct numerical simulations of isotropic turbulence in triply periodic domains at Taylor-scale Reynolds number up to 1300. We reaffirm that transverse Eulerian scaling exponents saturate at ≈ 2.1 for moment orders p ≥ 10, significantly differing from the longitudinal exponents (which are predicted to saturate at ≈ 7.3 for p ≥ 30 from a recent theory). The Lagrangian scaling exponents likewise saturate at ≈ 2 for p ≥ 8. The saturation of Lagrangian exponents and Eulerian transverse exponents is related by the same multifractal spectrum, which is different from the known spectra for Eulerian longitudinal exponents, suggesting that Lagrangian intermittency is characterized solely by transverse Eulerian intermittency. We discuss possible implications of this outlook when extending multifractal predictions to the dissipation range, especially for Lagrangian acceleration. Saturation and multifractality of Lagrangian and Eulerian scaling exponents in 3D turbulence Katepalli R. Sreenivasan August 12, 2023 ===================================================================================================== Turbulent flows consist of a hierarchy of eddies, with the smaller eddies riding on the larger ones and extracting energy from them. To understand the deformation and rotation of smaller eddies (the key mechanisms driving energy transfers) and not just their translation due to large eddies, one has to consider the velocity increment across a smaller eddy of size r ≪ L (say), where L is the large eddy size <cit.>. The longitudinal velocity increment δ u_r = u(x+r) - u(x) corresponds to the case when the velocity component u(x) is in the direction of separation r. For velocity v(x) taken orthogonal to r, we obtain the transverse velocity increment δ v_r = v(x+r) - v(x). From the seminal work of Kolmogorov <cit.>, K41 henceforth, one surmises that the moments of increments ⟨ (δ u_r)^p ⟩, called structure functions, follow a universal power-law scaling in the so-called inertial range, S_p(r) ≡ ⟨ (δ u_r)^p ⟩ ∼ r^ζ_p, η ≪ r ≪ L, where η is the viscous cutoff scale. One deduces ζ_p = p/3 from K41 and, ever since, the behavior of ζ_p has been of persistent interest <cit.>. While the result ζ_p = p/3 is known to be exact for p=3, i.e., ζ_3 = 1, extensive studies from <cit.> to <cit.> (and many in between) have clearly established nonlinear deviations of ζ_p from p/3 for p ≠ 3. This so-called anomalous scaling is attributed to the intermittency of the scale-to-scale energy transfer processes (see, e.g., <cit.>). Given the natural importance of the Lagrangian viewpoint in transport phenomena <cit.>, forceful arguments can be similarly made for the importance of Lagrangian velocity increments δ u_τ = u(t + τ) - u(t) over time lag τ, measured along fluid-particle trajectories, and Lagrangian structure functions ⟨ |δ u_τ |^p ⟩ defined therefrom [absolute value is taken for Lagrangian increments since the odd moments are otherwise zero]. 
Extension of Eulerian phenomenology to Lagrangian viewpoint leads to the expectation S^L_p (τ) ≡⟨ |δ u_τ|^p ⟩∼τ^ζ_p^L , τ_η≪τ≪ T_L where the temporal inertial-range is defined using T_L, the Lagrangian integral time and τ_η, the time-scale of viscous dissipation <cit.>. Since Lagrangian trajectories trace the underlying Eulerian field, it is natural to expect that a relation between Lagrangian and Eulerian exponents can be predicted. Using K41, one obtains ζ_p^L=p/2 <cit.>; but, experimental and numerical studies once again show nonlinear deviations from that prediction <cit.>. Attempts have been made <cit.> to quantify these deviation in terms of Eulerian intermittency, but they remain challenging for at least two reasons. First, the temporal scaling range in turbulence is substantially more restrictive than spatial scaling range <cit.>, making it very difficult to extract robust values of the Lagrangian scaling exponents. Second, past attempts have overwhelmingly focused on characterizing Lagrangian intermittency from Eulerian longitudinal intermittency, with the expectation that longitudinal and transverse exponents should be identical, which is clearly not the case <cit.>. In this Letter, presenting new data from direct numerical simulations (DNS) of isotropic turbulence at higher Reynolds numbers, we address both these challenges. We extract both Lagrangian and Eulerian scaling exponents. Our Eulerian results reaffirm the recent results of <cit.>. We then demonstrate an excellent correspondence between Lagrangian exponents and transverse Eulerian exponents, using as basis the same multifractal spectrum for both; this is different from the multifractal spectrum for longitudinal exponents, whose use in the past has failed to explain Lagrangian intermittency <cit.>). *Direct Numerical Simulations: The description of DNS is necessarily brief here because they have already been described and utilized in many recent works <cit.>. The simulations correspond to the canonical setup of forced stationary isotropic turbulence in a triply periodic domain and are carried out using the highly accurate Fourier pseudo-spectral methods in space and second-order Runge-Kutta integration in time; the large scales are numerically forced to achieve statistical stationarity <cit.>. A key feature of the present data is that we have achieved a wide range of Taylor-scale Reynolds number , going from 140-1300 (on grids of up to 12288^3 points) while maintaining excellent small-scale resolution <cit.>. For Lagrangian statistics, a large population of fluid particle trajectories is tracked together with the Eulerian field. For ≤ 650, up to 64M particles are tracked for each case, whereas for = 1300, 256M particles are tracked (with M=1024^2) <cit.>, providing ample statistics for convergence. *Saturation of transverse exponents: The implication of anomalous scaling is that it confers upon each moment order a separate and independent significance, instead of a mutual dependence (such as ζ_p = p/3 based on K41). Multifractals have enjoyed considerable success in describing this behavior <cit.>, but their theoretical rigor is yet to be established, lacking any direct connection to Navier-Stokes equations. Further, recent DNS data at high have shown noticeable departures of ζ_p from multifractal predictions for high orders <cit.>. Instead, starting from Navier-Stokes equations, a recent theory <cit.> was able to mitigate this situation and provide an improved prediction for ζ_p. 
Additionally, this theory also predicts that longitudinal exponents saturate with the moment order, i.e., lim_p→∞ ζ_p = constant. We recall that the transverse exponents are defined by the relation S_p^tr∼ r^ζ_p^tr, where S_p^tr (r) ≡⟨ |δ v_r|^p ⟩. (Absolute values are taken because the odd-order transverse structure functions are zero by symmetry.) Multifractal models based on phenomenological considerations do not differentiate between longitudinal and transverse exponents, i.e., ζ_2p^tr = ζ_2p, and general arguments have also been advanced to the same end <cit.>. However, earlier studies have persistently pointed out that the two sets of exponents are different <cit.>, and recent work <cit.> at high R_λ has confirmed the differences; it further showed that transverse exponents saturate with ζ_∞^tr≈ 2.1 for p≥10. Incidentally, this saturation is quite different from ζ_∞≈ 7.3 (for p ≥ 30) as predicted for longitudinal exponents in <cit.>. These findings are summarized in Fig. <ref>, which plots the Eulerian longitudinal exponents from <cit.> (also confirmed by us) and the transverse exponents from the present simulations. Also included, besides K41, are two notable multifractal results <cit.> and the result from <cit.>. Important considerations go into establishing the reliability of high-order exponents with respect to convergence, adequacy of grid resolution, and the Reynolds number. This discussion can be found in <cit.> and will not be repeated here. Instead, we focus on the ζ_p^tr, which clearly and substantially depart from the ζ_p and saturate for p≥ 10. We postpone to the Discussion the implication of the different scaling of the longitudinal and transverse exponents for the universality of small-scale turbulence, but demonstrate the relation of ζ_p^tr to the Lagrangian exponents, which we immediately proceed to extract from the present data. *Lagrangian exponents from DNS: Robust extraction of inertial-range exponents depends on sufficient scale separation to allow a proper inertial range to exist. The Eulerian spatial scale separation for the highest R_λ = 1300 is L/η≈ 2500 <cit.>, while the temporal range is T_L/τ_η ≈ 105 <cit.>, thus making it inherently difficult to obtain a proper Lagrangian inertial range <cit.>. This difficulty is highlighted in Fig. <ref>, which shows the local log-log slope of S_p^L(τ) at various R_λ, for p=2 and 4 in panels (a) and (b), respectively; although there is a suggestion of a plateau for the fourth order, the local slopes of the curves are still changing with R_λ. This is in contrast to the corresponding Eulerian result for p=2, shown in Fig. <ref>, where a clear inertial range emerges as R_λ increases. Because of this difficulty, Lagrangian exponents cannot be directly extracted even at the high R_λ of our DNS. Only by using extended self-similarity <cit.>, with respect to the second order <cit.>, can one obtain the exponents. This is demonstrated in Fig. <ref>, which shows the ratio of the local slope of S^L_p(τ) to that of S^L_2(τ). Evidently, a conspicuous plateau emerges for different orders in the same scaling range, seemingly independent of R_λ. Thus, we can extract the ratios ζ^L_p/ζ^L_2, which indeed was the practice in earlier works also <cit.>. Note that the justification for using ζ_2^L as the reference arises from the expectation that S_2^L ∼⟨ϵ⟩τ <cit.>; since the mean dissipation appears linearly, the result ζ_2^L=1 is free of intermittency (akin to ζ_3=1 for Eulerian exponents <cit.>). Extending the procedure demonstrated in Fig. 
<ref>, we extract the ratios ζ^L_p/ζ^L_2 for up to p=10, and show them in Fig. <ref>. We have also included earlier results from both experiments and DNS <cit.>, obtained at comparatively lower R_λ. Overall, the current results at higher R_λ are in excellent agreement with prior results (which, however, have larger error bars). A remarkable observation, common to all data sets, is that the Lagrangian exponents saturate for p ≳ 8, similar to the transverse Eulerian exponents in Fig. <ref>. The data in Fig. <ref> are also compared with a number of predictions, which we discuss next. *The multifractal framework: It is obvious from Fig. <ref> that the data are quite far from K41. Following <cit.>, we will consider the well-known multifractal model for relating Eulerian and Lagrangian exponents. The key concept in multifractals is that the (Eulerian) velocity increment δ u_r over a scale r is Hölder continuous, i.e., δ u_r ∼ r^h, where h is the local Hölder exponent with the multifractal spectrum D(h) <cit.>. From this local scaling relation, Eulerian structure functions can be readily derived by integrating over all possible h, as ⟨ (δ u_r)^p ⟩ ∼ ∫_h r^{ph + 3 - D(h)} dh. Using a steepest-descent argument for r ≪ L gives ζ_p = inf_h [ ph + 3 - D(h) ] . The Lagrangian extension of multifractals relies on the phenomenological assumption that the spatial separation can be converted to temporal separation using r ∼τδ u_r, with δ u_r ∼δ u_τ <cit.>. This stipulation readily gives δ u_τ∼τ^{h/(1-h)}, resulting in the Lagrangian exponents ζ_p^L = inf_h [ (ph + 3 - D(h))/(1-h) ] . Thus, Lagrangian exponents can be directly predicted using the Eulerian multifractal spectrum D(h). Since most of the past work has focused on Eulerian longitudinal exponents, with the implicit assumption that transverse exponents are the same, the D(h) of the longitudinal exponents has been used to infer Lagrangian exponents. However, such predictions do not work, as we see next. The Lagrangian exponents can be computed from Eq. (<ref>) by using the Eulerian multifractal spectrum D(h) from Eq. (<ref>). The D(h) corresponding to the Eulerian multifractal models shown in Fig. <ref> are plotted in Fig. <ref>. They are obtained from ζ_p by taking a Legendre transform to invert the relations <cit.>, giving D(h) = inf_p [ ph + 3 - ζ_p ]. For reference, the D(h) for the She-Leveque model is <cit.> D(h) = 1 + c_1 (h - h^*) - c_2 (h - h^*) log (h-h^*) where h^*=1/9, c_1 = c_2 (1 + loglogγ - logγ) and c_2 = 3/logγ, with γ=3/2. That for the Sreenivasan-Yakhot result of ζ_p = ζ_∞ p/(p+ β) <cit.> is D(h) = 3 - ζ_∞ - β h + 2 √(ζ_∞β h) where ζ_∞≈ 7.3 and β=3ζ_∞-3. The result for the p-model can be found in <cit.>. In Fig. <ref>, in addition to the D(h) from these known Eulerian cases, we also utilize Eq. (<ref>) to numerically obtain the D(h) for the transverse exponents (assuming ζ_p^tr≈ 2.1 for p ≥ 10, as shown in Fig. <ref>). Note that, since the D(h) for ζ_p^tr is obtained numerically, the inversion formula in Eq. (<ref>) can only provide the concave hull <cit.>, which is what we plot in Fig. <ref>. The saturation value of the exponents is reflected in the corresponding D(h) curve at h=0, as D(0) = 3 - ζ_∞ (≈ 0.9 for ζ_∞^tr≈ 2.1). Note that h<0 is not allowed in the multifractal framework <cit.>; the p-model and She-Leveque results correspond, respectively, to h_min = -(1/3) log_2 (0.7) ≈ 0.172 <cit.> and h_min = h^* = 1/9 <cit.>, which preclude saturation. 
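The steepest-descent formulas above are easy to evaluate numerically once a spectrum D(h) is specified. The following minimal sketch does so on a grid of Hölder exponents for the She-Leveque spectrum quoted above (with the stated constants); the grid limits and resolution are arbitrary choices made here for illustration, and the same two lines applied to the numerically obtained transverse D(h) give the corresponding Lagrangian prediction discussed below.

import numpy as np

# She-Leveque spectrum with the constants quoted above
gam = 1.5
c2 = 3.0 / np.log(gam)
c1 = c2 * (1.0 + np.log(np.log(gam)) - np.log(gam))
h_star = 1.0 / 9.0

h = np.linspace(h_star + 1e-6, 0.999, 20000)   # admissible Holder exponents (h > h*)
D = 1.0 + c1 * (h - h_star) - c2 * (h - h_star) * np.log(h - h_star)

orders = np.arange(1, 13)
zeta_E = [np.min(p * h + 3.0 - D) for p in orders]                 # Eulerian steepest descent
zeta_L = [np.min((p * h + 3.0 - D) / (1.0 - h)) for p in orders]   # Lagrangian steepest descent

for p, zE, zL in zip(orders, zeta_E, zeta_L):
    print(f"p = {p:2d}   zeta_p = {zE:.3f}   zeta_p^L = {zL:.3f}")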
The Sreenivasan-Yakhot result <cit.> predicts saturation for longitudinal exponents at ζ_∞≈ 7.3, giving D(0) = 3 - 7.3 = -4.3 (not shown in Fig. <ref>). *Lagrangian exponents from the Eulerian transverse multifractal spectrum: As we have seen, none of the multifractal predictions for Lagrangian exponents using Eulerian longitudinal exponents agree with the data, except for p ≲ 4, where they are also close to the K41 result. In contrast, the prediction corresponding to the Eulerian transverse exponents (green dashed line) closely follows the measured results, particularly capturing the saturation at high orders. Note that the predicted saturation value, ζ_∞^L ≈ 2.1, is the same for both transverse Eulerian and Lagrangian exponents. The actual Lagrangian data, however, saturate at a slightly smaller value. We believe this minor difference (of only 5%) stems from the fact that even at R_λ = 1300, the temporal inertial range is still underdeveloped, and the intermittency-free result of ζ_2^L=1 is not unambiguously realized. Recall that all Lagrangian exponents shown in Fig. <ref> are extracted as ratios ζ_p^L/ζ_2^L. Thus, this minor discrepancy in the saturation values could be explained by small departures from the expectation of ζ_2^L=1. Given this, and also the differences among different data sets, the close correspondence between the Eulerian transverse exponents and the Lagrangian exponents is quite remarkable. It is worth noting that the Lagrangian exponents saturate for slightly smaller p than the Eulerian transverse exponents. This can be explained from Eqs. (<ref>)-(<ref>) as a kinematic effect. For Eulerian exponents, ζ_3 = 1 is an exact result, corresponding to h≈ 1/3, D(h) ≈ 3, which conforms to the intermittency-free K41 result <cit.>. This, in turn, gives ζ_2^L = 1 as the corresponding intermittency-free Lagrangian result, also for h≈1/3, D(h) ≈ 3. This argument can be extended to higher orders to show that Lagrangian exponents of order p correspond to Eulerian transverse exponents of order 3p/2. It simply follows that the saturation of Lagrangian exponents occurs for smaller p. A similar phenomenological correspondence can also be provided for other Lagrangian statistics; for instance, the second moment of acceleration (the temporal gradient of velocity) corresponds to the third moment of the spatial velocity gradient <cit.>. *Discussion: Two significant results emerge from this work: (a) scaling exponents saturate for both Eulerian transverse and Lagrangian structure functions; and (b) the saturation of Lagrangian exponents is characterized solely by the Eulerian transverse exponents (and not the longitudinal ones, as previously believed). Given that the transverse exponents are smaller for large p, this seems reasonable from the steepest-descent argument <cit.>. The saturation of scaling exponents is an extreme form of anomalous behavior, but is not uncommon; it holds for the forced Burgers equation <cit.>, Kraichnan's model of the passive scalar <cit.>, and also in DNS of passive scalars advected by 3D turbulence <cit.>. However, its prevalence in hydrodynamic turbulence itself has only become apparent recently <cit.>. The theory of <cit.> predicts that Eulerian longitudinal exponents saturate as well, although at moment orders so high that the prediction cannot yet be validated. 
In contrast, the saturation of the transverse exponents occurs for p ≥ 10, as does that of the Lagrangian exponents; this feature makes it possible to relate the two through the multifractal spectrum (something clearly not possible with the Eulerian longitudinal exponents). We believe that the saturation of exponents is an important property whose significance has been discussed in <cit.>. Our results also bring forth some important questions. One of them is the extension of the multifractal framework from the inertial range to the dissipation range, i.e., describing the scaling of velocity gradients. Such an extension relies on the phenomenological assumption that the local Reynolds number, describing the dissipative cutoff, is unity, i.e., δ u_r r/ν = 1 <cit.>. As highlighted in recent works <cit.>, this assumption is valid for longitudinal velocity increments, but not for transverse increments. This is essentially because of how vorticity and strain rate interact in turbulence. It can be expected on this basis that the extension of multifractals to the dissipation range works for longitudinal velocity gradients, but not for transverse velocity gradients. Since Lagrangian intermittency appears to be a direct result of Eulerian transverse intermittency, it also follows that the extension to acceleration statistics would be problematic. Our recent studies <cit.> indeed confirm this. In addition, acceleration components are strongly correlated in turbulence <cit.>; this is not accounted for in multifractals, which are oblivious to the Navier-Stokes dynamics. A second question concerns the meaning of universality, given that the longitudinal and transverse exponents behave differently. One strategy could be to consider a joint multifractal spectrum for longitudinal and transverse increments. It might be possible to set appropriate conditions on both to enable inertial-range universality and the transition from the inertial to the dissipation range. Essentially, addressing the discrepancy between longitudinal and transverse intermittency presents a critical and pressing problem in turbulence theory. *Acknowledgments: We gratefully acknowledge discussions with Victor Yakhot and sustained collaboration with P. K. Yeung. We also gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for providing computing time on the supercomputers JUQUEEN and JUWELS at Jülich Supercomputing Centre (JSC), where the simulations reported in this paper were primarily performed. Computations were also supported partially by the supercomputing resources under the Blue Waters project at the National Center for Supercomputing Applications at the University of Illinois (Urbana-Champaign).
[Kolmogorov (1941a)] A. N. Kolmogorov, "The local structure of turbulence in an incompressible fluid for very large Reynolds numbers," Dokl. Akad. Nauk SSSR 30, 299–303 (1941).
[Monin and Yaglom (1975)] A. S. Monin and A. M. Yaglom, Statistical Fluid Mechanics, Vol. 2 (MIT Press, 1975).
[Frisch (1995)] U. Frisch, Turbulence: The Legacy of Kolmogorov (Cambridge University Press, Cambridge, 1995).
[Kolmogorov (1962)] A. N. Kolmogorov, "A refinement of previous hypotheses concerning the local structure of turbulence in a viscous incompressible fluid at high Reynolds number," J. Fluid Mech. 13, 82–85 (1962).
[Sreenivasan and Antonia (1997)] K. R. Sreenivasan and R. A. Antonia, "The phenomenology of small-scale turbulence," Annu. Rev. Fluid Mech. 29, 435–77 (1997).
[Van Atta and Park (2005)] C. W. Van Atta and J. Park, "Statistical self-similarity and inertial subrange turbulence," in Statistical Models and Turbulence: Proceedings of a Symposium held at the University of California, San Diego (La Jolla), July 15–21, 1971 (Springer, 2005), pp. 402–426.
[Iyer et al. (2020)] K. P. Iyer, K. R. Sreenivasan, and P. K. Yeung, "Scaling exponents saturate in three-dimensional isotropic turbulence," Phys. Rev. Fluids 5, 054605 (2020).
[Wyngaard (1992)] J. C. Wyngaard, "Atmospheric turbulence," Annu. Rev. Fluid Mech. 24, 205–234 (1992).
[Sawford (2001)] B. L. Sawford, "Turbulent relative dispersion," Annu. Rev. Fluid Mech. 33, 289–317 (2001).
[Falkovich et al. (2001)] G. Falkovich, K. Gawędzki, and M. Vergassola, "Particles and fields in fluid turbulence," Rev. Mod. Phys. 73, 913–975 (2001).
[Note (1)] Absolute value is taken for Lagrangian increments since the odd moments are otherwise zero.
[Sawford et al. (2003)] B. L. Sawford, P. K. Yeung, M. S. Borgas, P. Vedula, A. La Porta, A. M. Crawford, and E. Bodenschatz, "Conditional and unconditional acceleration statistics in turbulence," Phys. Fluids 15, 3478–3489 (2003).
[Mordant et al. (2004)] N. Mordant, E. Lévêque, and J.-F. Pinton, "Experimental and numerical study of the Lagrangian dynamics of high Reynolds turbulence," New J. Phys. 6, 116 (2004).
[Xu et al. (2006)] H. Xu, M. Bourgoin, N. T. Ouellette, and E. Bodenschatz (International Collaboration for Turbulence Research), "High order Lagrangian velocity statistics in turbulence," Phys. Rev. Lett. 96, 024503 (2006).
[Sawford and Yeung (2015)] B. L. Sawford and P. K. Yeung, "Direct numerical simulation studies of Lagrangian intermittency in turbulence," Phys. Fluids 27, 065109 (2015).
[Borgas (1993)] M. S. Borgas, "The multifractal Lagrangian nature of turbulence," Philos. Trans. R. Soc. A 342, 379–411 (1993).
[Biferale et al. (2004)] L. Biferale, G. Boffetta, A. Celani, B. J. Devenish, A. Lanotte, and F. Toschi, "Multifractal statistics of Lagrangian velocity and acceleration in turbulence," Phys. Rev. Lett. 93, 064502 (2004).
[Arnéodo et al. (2008)] A. Arnéodo et al., "Universal intermittent properties of particle trajectories in highly turbulent flows," Phys. Rev. Lett. 100, 254504 (2008).
[Dhruva et al. (1997)] B. Dhruva, Y. Tsuji, and K. R. Sreenivasan, "Transverse structure functions in high-Reynolds-number turbulence," Phys. Rev. E 56, R4928–R4930 (1997).
[Chen et al. (1997)] S. Chen, K. R. Sreenivasan, M. Nelkin, and N. Cao, "Refined similarity hypothesis for transverse structure functions in fluid turbulence," Phys. Rev. Lett. 79, 2253–2256 (1997).
[Buaria and Sreenivasan (2022a)] D. Buaria and K. R. Sreenivasan, "Scaling of acceleration statistics in high Reynolds number turbulence," Phys. Rev. Lett. 128, 234502 (2022).
[Buaria and Sreenivasan (2020)] D. Buaria and K. R. Sreenivasan, "Dissipation range of the energy spectrum in high Reynolds number turbulence," Phys. Rev. Fluids 5, 092601(R) (2020).
[Buaria et al. (2020)] D. Buaria, E. Bodenschatz, and A. Pumir, "Vortex stretching and enstrophy production in high Reynolds number turbulence," Phys. Rev. Fluids 5, 104602 (2020).
[Buaria and Pumir (2021)] D. Buaria and A. Pumir, "Nonlocal amplification of intense vorticity in turbulent flows," Phys. Rev. Research 3, 042020 (2021).
[Buaria et al. (2022)] D. Buaria, A. Pumir, and E. Bodenschatz, "Generation of intense dissipation in high Reynolds number turbulence," Philos. Trans. R. Soc. A 380, 20210088 (2022).
[Buaria and Sreenivasan (2022b)] D. Buaria and K. R. Sreenivasan, "Intermittency of turbulent velocity and scalar fields using three-dimensional local averaging," Phys. Rev. Fluids 7, L072601 (2022).
[Ishihara et al. (2009)] T. Ishihara, T. Gotoh, and Y. Kaneda, "Study of high-Reynolds number isotropic turbulence by direct numerical simulations," Annu. Rev. Fluid Mech. 41, 165–80 (2009).
[Rogallo (1981)] R. S. Rogallo, "Numerical experiments in homogeneous turbulence," NASA Technical Memo (1981).
[Buaria et al. (2019)] D. Buaria, A. Pumir, E. Bodenschatz, and P. K. Yeung, "Extreme velocity gradients in turbulent flows," New J. Phys. 21, 043004 (2019).
[Buaria et al. (2015)] D. Buaria, B. L. Sawford, and P. K. Yeung, "Characteristics of backward and forward two-particle relative dispersion in turbulence at different Reynolds numbers," Phys. Fluids 27, 105101 (2015).
[Buaria et al. (2016)] D. Buaria, P. K. Yeung, and B. L. Sawford, "A Lagrangian study of turbulent mixing: forward and backward dispersion of molecular trajectories in isotropic turbulence," J. Fluid Mech. 799, 352–382 (2016).
[Buaria and Yeung (2017)] D. Buaria and P. K. Yeung, "A highly scalable particle tracking algorithm using partitioned global address space (PGAS) programming for extreme-scale turbulence simulations," Comput. Phys. Commun. 221, 246–258 (2017).
[Sreenivasan and Yakhot (2021)] K. R. Sreenivasan and V. Yakhot, "Dynamics of three-dimensional turbulence from Navier-Stokes equations," Phys. Rev. Fluids 6, 104604 (2021).
[Meneveau and Sreenivasan (1987)] C. Meneveau and K. R. Sreenivasan, "Simple multifractal cascade model for fully developed turbulence," Phys. Rev. Lett. 59, 1424 (1987).
[She and Leveque (1994)] Z.-S. She and E. Leveque, "Universal scaling laws in fully developed turbulence," Phys. Rev. Lett. 72, 336–339 (1994).
[L'vov et al. (1997)] V. S. L'vov, E. Podivilov, and I. Procaccia, "Invariants for correlations of velocity differences in turbulent fields," Phys. Rev. Lett. 79, 2050–2052 (1997).
[Nelkin (1990)] M. Nelkin, "Multifractal scaling of velocity derivatives in turbulence," Phys. Rev. A 42, 7226–7229 (1990).
[Grossmann et al. (1997)] S. Grossmann, D. Lohse, and A. Reeh, "Different intermittency for longitudinal and transversal turbulent fluctuations," Phys. Fluids 9, 3817–3825 (1997).
[Shen and Warhaft (2002)] X. Shen and Z. Warhaft, "Longitudinal and transverse structure functions in sheared and unsheared wind-tunnel turbulence," Phys. Fluids 14, 370–381 (2002).
[Gotoh et al. (2002)] T. Gotoh, D. Fukayama, and T. Nakano, "Velocity field statistics in homogeneous steady turbulence obtained using a high-resolution direct numerical simulation," Phys. Fluids 14, 1065–1081 (2002).
[Grauer et al. (2012)] R. Grauer, H. Homann, and J.-F. Pinton, "Longitudinal and transverse structure functions in high-Reynolds-number turbulence," New J. Phys. 14, 063016 (2012).
[Buaria (2016)] D. Buaria, "Lagrangian investigations of turbulent dispersion and mixing using petascale computing," Ph.D. thesis, Georgia Institute of Technology (2016).
[Sawford and Yeung (2011)] B. L. Sawford and P. K. Yeung, "Kolmogorov similarity scaling for one-particle Lagrangian statistics," Phys. Fluids 23 (2011).
[Buaria (2023)] D. Buaria, "Comment on 'Universality and Intermittency of Pair Dispersion in Turbulence'," Phys. Rev. Lett. 130, 029401 (2023).
[Benzi et al. (1993)] R. Benzi, S. Ciliberto, R. Tripiccione, C. Baudet, F. Massaioli, and S. Succi, "Extended self-similarity in turbulent flows," Phys. Rev. E 48, R29–R32 (1993).
[Kolmogorov (1941b)] A. N. Kolmogorov, "Dissipation of energy in locally isotropic turbulence," Dokl. Akad. Nauk SSSR 434, 16–18 (1941).
[Benzi et al. (1984)] R. Benzi, G. Paladin, G. Parisi, and A. Vulpiani, "On the multifractal nature of fully developed turbulence and chaotic systems," J. Phys. A 17, 3521 (1984).
[Bec and Khanin (2007)] J. Bec and K. Khanin, "Burgers turbulence," Phys. Rep. 447, 1–66 (2007).
[Kraichnan (1994)] R. H. Kraichnan, "Anomalous scaling of a randomly advected passive scalar," Phys. Rev. Lett. 72, 1016–1019 (1994).
[Iyer et al. (2018)] K. P. Iyer, J. Schumacher, K. R. Sreenivasan, and P. K. Yeung, "Steep cliffs and saturated exponents in three-dimensional scalar turbulence," Phys. Rev. Lett. 121, 264501 (2018).
[Buaria et al. (2021)] D. Buaria, M. P. Clay, K. R. Sreenivasan, and P. K. Yeung, "Turbulence is an ineffective mixer when Schmidt numbers are large," Phys. Rev. Lett. 126, 074501 (2021).
[Paladin and Vulpiani (1987)] G. Paladin and A. Vulpiani, "Degrees of freedom of turbulence," Phys. Rev. A 35, 1971–1973 (1987).
[Buaria and Pumir (2022)] D. Buaria and A. Pumir, "Vorticity-strain rate dynamics and the smallest scales of turbulence," Phys. Rev. Lett. 128, 094501 (2022).
[Buaria and Sreenivasan (2023)] D. Buaria and K. R. Sreenivasan, "Lagrangian acceleration in fully developed turbulence and its Eulerian decompositions," Phys. Rev. Fluids 8, L032601 (2023).
[Tsinober et al. (2001)] A. Tsinober, P. Vedula, and P. K. Yeung, "Random Taylor hypothesis and the behavior of local and convective accelerations in isotropic turbulence," Phys. Fluids 13, 1974–1984 (2001).
http://arxiv.org/abs/2307.04034v1
20230708191132
Robust Universal Inference
[ "Beomjo Park", "Sivaraman Balakrishnan", "Larry Wasserman" ]
stat.ME
[ "stat.ME" ]
Robust Universal Inference Beomjo Park, Sivaraman Balakrishnan, and Larry Wasserman Department of Statistics & Data Science Machine Learning Department Carnegie Mellon University Pittsburgh, PA 15213. August 12, 2023 In statistical inference, it is rarely realistic that the hypothesized statistical model is well-specified, and consequently it is important to understand the effects of misspecification on inferential procedures. When the hypothesized statistical model is misspecified, the natural target of inference is a projection of the data generating distribution onto the model. We present a general method for constructing valid confidence sets for such projections, under weak regularity conditions, despite possible model misspecification. Our method builds upon the universal inference method of <cit.> and is based on inverting a family of split-sample tests of relative fit. We study settings in which our methods yield either exact or approximate, finite-sample valid confidence sets for various projection distributions. We study rates at which the resulting confidence sets shrink around the target of inference and complement these results with a simulation study. § INTRODUCTION One of the broad goals of statistical inference is to draw conclusions about a population from a sample of the population. This goal is typically facilitated by the use of a statistical model 𝒫, a collection of distributions, which the statistician hypothesizes will contain a useful approximation to the data generating distribution. The well-specified case is when ∈ and the misspecified case is when this does not necessarily hold. In the misspecified case, the target of inference is usually a projection distribution. Formally, given a divergence ρ which maps a pair of distributions to ℝ_+, we can define the (forward) projection[We tacitly assume that the projection exists and is unique. When the projection is not unique our inferential guarantees always hold for any (arbitrary) fixed choice of the projection . Characterizing the existence of a projection distribution (for f-divergences) has received some attention in past work <cit.>.] of the distribution onto the statistical model as: := _P ∈ρ( P). The general goal of our paper is to construct uniformly valid confidence sets for assuming only weak regularity conditions on the distribution and the statistical model . We let X_1,…, X_n be an i.i.d sample from a distribution ∈ defined on ℝ^d, where 𝒬⊇𝒫 is a class of distributions satisfying weak regularity conditions. We wish to construct (honest) confidence sets, C_α(X_1,…, X_n) such that, inf_∈ℙ_(∈ C_α(X_1,…, X_n)) ≥ 1 - α. In parametric statistical models, in the well-specified case, the likelihood-ratio test, and confidence sets obtained from asymptotically Gaussian estimators, are the main inferential tools for constructing hypothesis tests and confidence intervals. In the misspecified case, one can develop analogous tools for constructing tests and intervals for the Kullback-Leibler (KL) projection parameter, using sandwich estimates for the variance <cit.>. The validity of these methods, in both the well-specified and misspecified cases, relies on large sample asymptotic theory and requires that the statistical model 𝒫 and the sampling distribution satisfy strong regularity conditions. In recent work <cit.>, we introduced a procedure (described in more detail in Section <ref>) based on data-splitting to construct uniformly, finite-sample valid likelihood-ratio confidence sets under no regularity conditions. 
This work showed that, in the well-specified setting, sample-splitting can yield practical, finite-sample valid inference, even for irregular statistical models, often at a surprisingly small statistical price. The challenges of inference under weak regularity conditions are exacerbated in the misspecified setting. In contrast to the well-specified case where the target of inference is unambiguous, in the misspecified case there are many natural targets of inference. Each choice of the divergence ρ in (<ref>), yields a different target and in most cases these targets will have drastically different properties. This in turn poses significant challenges in constructing a unified framework for statistical inference in the misspecified setting. Under weak regularity conditions, the KL projection distribution can be an unstable inferential target, wherein small perturbations to the data-generating distribution P^* can lead to dramatic shifts in the target P. From a theoretical standpoint, this makes finite-sample valid inference for the KL projection distribution challenging, unless strong regularity conditions are imposed. From a practical standpoint, these instabilities can make the KL projection distribution an undesirable target, and in these cases it is essential to develop a flexible family of methods that can target other (more stable) projection distributions. To address these challenges, we develop a re-interpretation of the universal inference method <cit.> as inverting a particular family of pairwise likelihood-ratio tests. This interpretation brings into focus the key building block of universal inferential methods – pairwise hypothesis tests. Building on this insight we show that one can develop robust universal inference procedures by inverting appropriate families of robust pairwise tests. We then study the design and properties of robust pairwise tests, and relate them to the coverage and size properties of our proposed robust universal inference method. §.§ Related Work Asymptotic statistical inference, in both the well-specified and misspecified cases, is a topic of classical interest. Some entry points to the vast literature on this topic include the reference books <cit.>. Results in this literature <cit.> typically leverage strong regularity conditions to determine the asymptotic distribution of a point estimate (such as the Maximum Likelihood Estimator (MLE)), and use the asymptotic distribution of the estimate to construct (asymptotically valid) confidence sets. Our work is motivated in part by a recent line of work <cit.>, and more classical work <cit.>, where sample-splitting is used to avoid the strong regularity conditions typically needed for valid statistical inference. The focus on statistical inference under weaker regularity conditions, despite model misspecification, is the central theme of work in robust statistics <cit.>. One of the best understood methods for constructing robust estimators is to select, from a set of candidates, one which wins a carefully setup tournament – an idea which goes back to <cit.>, and others. At the heart of these tournament estimators are pairwise selectors, which attempt to robustly select one of a pair of candidates, which provide a better relative fit to the sampling distribution. These robust pairwise tests have been used to great effect in robust estimation, and our work highlights their usefulness in constructing assumption-light confidence sets. §.§ Outline The rest of this paper is organized as follows. 
Section <ref> provides some background. We briefly introduce the universal inference procedure, and develop a new perspective on it. Section <ref> motivates the methods we study in this paper by pinpointing some of the failures of universal inference. Section <ref> describes a general strategy to construct confidence sets for projection distributions, and highlights the importance of designing tests of relative fit. Section <ref> highlights some important examples where we are able to build on prior work in order to design exact and approximate tests of relative fit for different choices of the underlying divergence measure. Section <ref> studies the size of the resulting confidence sets. Section <ref> demonstrates some of the strengths of our proposed inference methods based on illustrative numerical examples. We conclude in Section <ref> with a brief discussion of future work. § BACKGROUND We let X_1,…, X_n be an i.i.d sample from a distribution ∈ defined on ℝ^d, and we let denote our working statistical model. Throughout the paper, the collection of distributions will be quite general, typically only satisfying some weak regularity conditions. §.§ Universal Inference Our starting point is our prior work <cit.> which introduced a procedure based on data-splitting to construct uniformly, finite-sample valid confidence sets under weak regularity conditions. Importantly, the validity guarantees of universal inference require the statistical model to be well-specified. The universal inference procedure is to: * Split the data := {X_1,…,X_n} into two sets _0 and _1. * On the set _1 calculate any estimate (e.g., could be the MLE in the model ). * Assume that the distributions in 𝒫 have densities (denoted with lower-case symbols) with respect to a dominating measure λ. We let ℒ_0(P) denote the likelihood of the distribution P evaluated on the samples in _0: ℒ_0(P) := ∏_i ∈_0 p(X_i), and define ℒ_0() analogously. Then construct the confidence set, C_α(X_1,…,X_n) = { P: ℒ_0(P)/ℒ_0()≥α}. In the well-specified case, <cit.> show (in their Theorem 1) that, under no additional regularity conditions, C_α is a finite-sample valid 1 - α confidence set for the distribution . §.§ A Re-Interpretation of Universal Inference To motivate the main proposal of this paper it is useful to re-visit (and generalize) the procedure described above, via the lens of inverting a family of hypothesis tests. The basic idea is classical, and is sometimes referred to as the duality between confidence sets and hypothesis tests. Formally, given samples X_1,…, X_n ∼, suppose we have a family of tests ϕ_P: {X_1,…,X_n}↦{0,1} for testing the null hypothesis H_0: = P. Here the test function ϕ_P takes the value 1 to indicate a rejection of the null hypothesis and takes the value 0 otherwise. If the family of tests is valid, i.e. they control the Type I error, _P [ϕ_P(X_1,…,X_n) ]≤α,    ∀ P ∈, then the following confidence set, C_α(X_1,…,X_n) := { P ∈: ϕ_P = 0 }, is uniformly valid when the statistical model is correctly specified, i.e. inf_∈ℙ_(∈ C_α(X_1,…, X_n)) ≥ 1 - α. Although this is a general recipe for constructing valid confidence sets, it does not provide the statistician much guidance in designing tests which might lead to small confidence sets. Universal inference is based on the idea that one can use a separate sample to construct an accurate estimate . We can then construct our family of tests, on the remaining samples, to have high power in distinguishing the sampling distribution from this pilot estimate. 
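For concreteness, the split LRT set above takes only a few lines of code to compute. The sketch below does so for a one-parameter Gaussian location model over a grid of candidate means, with the MLE on the first split as the pilot; the Gaussian family, the grid, and the equal-sized split are illustrative choices made here and are not prescribed by the method.

import numpy as np
from scipy.stats import norm

def split_lrt_confidence_set(x, candidate_means, alpha=0.1, sigma=1.0, seed=0):
    # Universal (split LRT) set {P : L_0(P) / L_0(pilot) >= alpha}
    # for a Gaussian location family N(mu, sigma^2); returns the retained mu values.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    d1, d0 = x[idx[: len(x) // 2]], x[idx[len(x) // 2:]]   # split into D_1 and D_0
    mu_pilot = d1.mean()                                   # pilot: MLE on D_1
    loglik0 = lambda mu: norm.logpdf(d0, loc=mu, scale=sigma).sum()
    ref = loglik0(mu_pilot)                                # log L_0(pilot)
    keep = [mu for mu in candidate_means if loglik0(mu) - ref >= np.log(alpha)]
    return np.array(keep)

x = np.random.default_rng(1).normal(0.3, 1.0, size=200)
print(split_lrt_confidence_set(x, np.linspace(-1.0, 1.0, 401)))

In the well-specified case the finite-sample guarantee holds for any choice of the pilot; the explicit grid here merely makes the inversion over candidates concrete.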
Formally, we could choose our family of tests to have high power to distinguish the hypotheses: H_0:   = P, versus H_1:   = . This use of a separate sample to construct a pilot estimate, simplifies the design of the tests to invert considerably since now we can focus on tests that have strong guarantees for distinguishing this simple null-versus-simple alternative. Indeed, universal inference uses the likelihood-ratio test for distinguishing these hypotheses, resulting in tests ϕ_P of the form: ϕ_P = 𝕀[ ℒ_0()/ℒ_0(P) > (P, ) ], for a choice of the threshold (P, ) which ensures that the condition in (<ref>) is satisfied. Although it is possible to determine optimal thresholds in the likelihood-ratio tests above, this can be practically cumbersome since these thresholds depend on both the pilot estimate and the null hypothesis P under consideration. The work of <cit.> further shows that a universal threshold = 1/α suffices to ensure the condition in (<ref>). To summarize, one can view the universal inference confidence set (<ref>) as arising by inverting a family of likelihood-ratio tests designed to distinguish each candidate distribution P from a pilot estimate . We emphasize that the universal inference procedure, and its reinterpretation described above rely crucially on correct model-specification to ensure validity. For instance, inverting a family of tests that satisfies (<ref>) is no longer meaningful when the model is misspecified. However, the testing interpretation suggests that one might develop novel variants of the universal inference procedure which are useful despite model-misspecification, by formulating appropriate robust hypotheses and designing robust tests for distinguishing them. We make these ideas precise in Section <ref>. §.§ Divergences Throughout this paper, we make frequent use of different divergences between pairs of probability distributions. We briefly introduce them here. We let P and Q be distributions defined on ℝ^d with densities p and q with respect to a common dominating measure λ. The Hellinger distance is defined as: (P,Q) = 1/√(2)( ∫ (√(p) - √(q))^2 dλ)^1/2, and the Kullback-Leibler (KL) divergence is defined as: (P Q) = ∫(logp/q) P̣, if P is dominated by Q, ∞, otherwise. The family of density power divergences <cit.>, are defined for a parameter β≥ 0 as, _β (P Q) = ∫{ q^1+β - ( 1+1/β) q^β p + 1/β p^1+β} d λ, β > 0 (P Q), β = 0 where _0 = is defined by taking the limit of β→ 0. Finally, the family of Integral Probability Metrics (IPMs) <cit.>, are defined as _ℱ(P, Q) = sup_f ∈ℱ| _P (f) - _Q (f) | where ℱ is a symmetric class (i.e., f ∈ℱ - f ∈ℱ) of real-valued bounded measurable functions on the domain of P and Q. Important special cases of IPMs include the Total Variation distance (TV, where ℱ is the collection of functions with sup-norm at most 1), the Wasserstein-1 distance (where ℱ is the collection of 1-Lipschitz functions) and the Maximum Mean Discrepancy (MMD, where ℱ is the unit ball of a Reproducing Kernel Hilbert Space with kernel k). § FAILURES OF UNIVERSAL INFERENCE To provide some motivation and intuition for the methods we propose in this paper, it is useful to understand some of the failures of the universal inference framework when the statistical model is misspecified, and the target of inference is the KL projection. §.§ Unbounded Likelihood-Ratios The behavior of likelihood-ratio based methods can be sensitive to the tail behavior of likelihood-ratios. 
The following simple example illustrates that under model misspecification, universal inference can fail to cover the KL projection parameter. These pathologies do not arise when the statistical model is correctly specified, and the challenges in this example arise due to an interplay between poorly behaved likelihood-ratios and model misspecification. This example also serves to highlight the fact that the KL projection parameter can in some cases be an undesirable inferential target. We let (p) denote the Bernoulli distribution with parameter p. Suppose we observe X_1,…,X_n ∼ := (ϵ_n) for some non-negative 0 < ϵ_n < (1-α)/n. We use the statistical model = {(p) : p ∈{0, 1/2 }}. Suppose we consider the pilot estimator to be the MLE, = _p ∈ℒ_1(p). Then, for all sufficiently large n ≥ n_0 where n_0 only depends on α the split LRT confidence set in (<ref>), with an equal sized split into _0 and _1, fails to cover the KL projection at the nominal level. The proof is in Appendix <ref>. The intuition is however clear. In this example the KL projection distribution is (1/2). For ϵ_n ≪ 1/n, with high probability the samples X_1,…,X_n are all 0. Consequently, the MLE with high-probability will be (0). Furthermore, the split sample likelihood ℒ_0 will be much higher for (0) than (1/2), and consequently (1/2) will not be included in the universal set. In this example likelihood-ratios are unbounded and as a consequence the KL divergence is an unstable function of the model parameters, i.e. when ϵ_n = 0, ((ϵ_n) (0)) is 0, but is ∞ for any ϵ_n > 0. In such cases, the finite-sample (log)-likelihood-ratio is a poor estimate of the population KL divergence, and this poses significant challenges for finite-sample valid inference. From a practical standpoint, a more reasonable inferential target could be a different, stabler projection distribution (e.g., the Hellinger or TV projection distribution) and we address this in Sections <ref> and <ref>. §.§ Failure Despite Bounded Likelihood-Ratios In the previous example it is clear that unbounded likelihood-ratios can result in pathologies which are challenging to address with finite-sample valid inference. However, even when all likelihood-ratios in the model are well-behaved, universal inference can fail to cover the KL projection parameter. It is important to note that except under the stringent condition that the underlying model is convex (see Section 6 of <cit.>), universal inference has no guaranteed coverage when the model is misspecified. Suppose we obtain X_1,…,X_n ∼ := (0.5 + ϵ_n) for some small, positive 0 <ϵ_n ≤ c/n, where c > 0 is a small positive universal constant. Our hypothesized model consists of two distributions, = {(p) : p ∈{1/4, 3/4 }}. Suppose we take the pilot estimator to be the MLE (<ref>). Then, for all sufficiently large n (depending only on α) the split LRT confidence set in (<ref>), with an equal sized split into _0 and _1, fails to cover the KL projection at the nominal level. We give a formal proof in Appendix <ref>. The KL projection distribution is (3/4). We show that the pilot estimate with probability near 1/2 will be the distribution (1/4), and further with probability near 1/2 the KL projection (3/4) will have a much smaller split sample likelihood than . As a direct consequence, universal inference will fail to cover the projection distribution (3/4). In contrast to the previous example, this example is much less pathological. 
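A short simulation makes the second failure concrete. The sketch below repeatedly draws data as in that example (with illustrative choices n = 1000, ϵ_n = 0.05/n, α = 0.1, and 2000 replications), forms the split LRT set with the MLE pilot, and records how often the KL projection Ber(3/4) is retained; the empirical coverage falls visibly below 1 - α.

import numpy as np

rng = np.random.default_rng(1)

def bern_loglik(x, p):
    # Bernoulli(p) log-likelihood; p lies strictly inside (0, 1) here
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

def covers_kl_projection(n=1000, eps=0.05 / 1000, alpha=0.1):
    # One replication: data ~ Ber(0.5 + eps), model {Ber(1/4), Ber(3/4)};
    # returns True if the split LRT set contains the KL projection Ber(3/4).
    x = rng.binomial(1, 0.5 + eps, size=n)
    d1, d0 = x[: n // 2], x[n // 2:]
    model = np.array([0.25, 0.75])
    p_hat = model[np.argmax([bern_loglik(d1, p) for p in model])]   # MLE pilot on D_1
    lr = bern_loglik(d0, 0.75) - bern_loglik(d0, p_hat)             # log L_0(3/4) - log L_0(pilot)
    return lr >= np.log(alpha)

coverage = np.mean([covers_kl_projection() for _ in range(2000)])
print(f"Monte Carlo coverage of Ber(3/4): {coverage:.3f}")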
All the relevant likelihood-ratios are bounded, and the log-likelihood is a consistent estimate of the KL divergence. However, even in this relatively benign example universal inference fails. We show in Section <ref> that a simple modification to the universal inference procedure fixes this issue when the relevant likelihood-ratios are bounded, and ensures correct coverage. In order to focus on the main issues, we have illustrated the failure of universal inference when the pilot estimator is the MLE. Indeed, part of the appeal of universal inference is that its coverage guarantees hold, in the well-specified case for any pilot estimate (including the MLE). Though we do not pursue this here, it is straightforward to extend these examples to show that both failures persist irrespective of how the pilot is chosen, i.e. the failures of universal inference that we highlight are driven by the second stage (of constructing the confidence set) and not by the first stage (of constructing a reasonable pilot estimate). These examples set the stage for the methodological development of the rest of the paper. To address problems of the first type we recommend targeting a different projection parameter (for instance, the TV or Hellinger projection, in Sections <ref> and <ref>), and to address problems of the second type we develop methods which guarantee coverage of the KL projection parameter when the likelihood-ratios are uniformly upper bounded or more generally have finite 2 + ξ moments for some ξ > 0 (see Section <ref>). § ROBUST UNIVERSAL INFERENCE In this section, we present a simple but powerful pair of general results which yield exact and approximate universal confidence sets. The workhorse of these results are tests of relative fit which we first briefly introduce before showing how these tests can be inverted to derive robust confidence sets. §.§ Tests of Relative Fit Suppose that we are given samples X_1,…, X_n ∼, together with a pair of candidate distributions (P_0, P_1) ∈^2, and a divergence measure ρ. With this setup in place, we now consider a family of tests ϕ_P_0, P_1 to distinguish the hypotheses: H_0:  ρ( P_0) ≤ρ( P_1), versus H_1:  ρ( P_0) > ρ( P_1). We refer to the tests ϕ_P_0, P_1 as exact tests of relative fit. Notice that in contrast to the classical setting, where we hypothesize that one of the distributions (P_0, P_1) truly generated the samples, in the misspecified setup this assumption is no longer tenable. Instead, we hypothesize that one of the distributions (P_0, P_1) is closer to the data generating distribution. In general, the two hypotheses are no longer simple hypotheses and we need to take some care in designing the family of tests ϕ_P_0, P_1. The design of tests of relative fit (and closely related variants) have a rich history and form the basis for a class of tournament-based robust estimators <cit.>. For divergences like the Total Variation and the Hellinger distance, designing exact tests of relative fit can require strong regularity conditions akin to those that would be required to estimate these divergences. Surprisingly, in these cases, it is still possible to design approximate tests of relative fit under weak regularity conditions. More formally, suppose that for some ν≥ 1, we can design a test for the following null hypothesis: H_0:  νρ( P_0) ≤ρ( P_1). We refer to tests for this hypothesis as approximate tests of relative fit ϕ_P_0, P_1,ν. 
Under the null hypothesis, the distribution P_0 is closer than P_1 to by a factor ν≥ 1, which can ease the design of valid tests for this hypothesis. Robust tests for null hypotheses of the form in (<ref>) (for the Hellinger distance) were introduced by <cit.> and are discussed in detail in the work of <cit.>. In the context of estimation these approximate tests yield what are known as non-sharp oracle inequalities. In the context of inference, as we explore further in Section <ref>, inverting approximate relative fit tests will yield weaker guarantees. In Section <ref> we consider the design of tests of relative fit in concrete settings, but now proceed to study the implications of designing such tests for the construction of robust confidence sets. §.§ Exact and Approximate Robust Universal Confidence Sets We now propose to construct a confidence set by inverting a family of tests of relative fit. This is similar in spirit to the procedure described in Section <ref>. §.§.§ Exact Robust Universal Confidence Sets Suppose, for every ∈, the family of tests of relative fit ϕ_P_0, P_1 is valid, i.e. it controls the Type I error: _[ϕ_P_0, P_1(X_1,…,X_n) ]≤α,    ∀ (P_0, P_1) ∈_0 where _0 = { (P_0, P_1) ∈^2: ρ( P_0) ≤ρ( P_1)}. Then, for any fixed P_1 ∈, the confidence set we construct is the set of candidates P_0 which we fail to reject: C_α,n≡ C_α(X_1,…,X_n) := { P_0 ∈: ϕ_P_0, P_1 (X_1,…,X_n) = 0 }. The following result shows that irrespective of the choice of P_1 the above construction yields a valid confidence set for the projection distribution: For any fixed P_1 ∈, C_α,n is a uniformly valid (1-α) honest confidence set for the projection . For any ∈, _(∉ C_α,n ) = _( ϕ_, P_1 = 1 ) = _( ϕ_, P_1 ) ≤α using (<ref>) since (, P_1) ∈_0 for any choice of P_1 ∈. As in the well-specified case discussed earlier, this general result does not provide any guidance on how to choose P_1. We follow the idea of universal inference and first construct an accurate estimate of from a separate sample _1 and then construct the family of split tests of relative fit ϕ_P_0, from the remaining samples _0. We call the resulting confidence set the exact Robust Universal Confidence set: C_α,n≡ C_α (X_1,…, X_n) := {P_0∈: ϕ_P_0, (_0) = 0}. Let ∈ be any estimate of based on _1. Then, the exact robust universal confidence set C_α,n is a uniformly valid confidence set for , meaning that inf_∈_ (∈ C_α, n) ≥ 1 - α. The proof is straightforward noticing that conditional on _1, the claim reduces to the claim of Proposition <ref>. Concretely, for any ∈, _ (∉ C_α, n) =_ (ϕ_, ) = __1[ __0(ϕ_, (_0) | _1) ] ≤__1 (α) = α. The robust confidence set will often contain both the pilot estimate as well as the projection distribution (see Proposition <ref> in Appendix <ref> for a formal statement). This is similar to the classical universal inference procedure which in the well-specified case will often contain both the pilot estimate and the true sampling distribution. In universal inference this suggests that in order to obtain small confidence sets, we should aim to design to be a good estimate of the true sampling distribution . On the other hand in the misspecified case, this suggests that we should design to be a good estimate of the projection . Specifically, our pilot estimate should be tailored to the divergence measure ρ. We investigate the choice of and its effect on the size of the resulting confidence set further in Section <ref>. 
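The exact construction above is generic and can be written as a small routine that is agnostic to the divergence: it needs only a way to fit a pilot on the first split and a level-α split test of relative fit on the second. The sketch below is such a skeleton; the function names and signatures are our own illustrative scaffolding rather than an interface defined in this paper.

import numpy as np

def robust_universal_set(x, candidates, fit_pilot, relative_fit_test, alpha=0.1, seed=0):
    # Exact robust universal confidence set: split the data, fit a pilot on D_1,
    # and keep every candidate P0 that the level-alpha split test of relative fit
    # (comparing P0 against the pilot on D_0) fails to reject.
    #   candidates: iterable of candidate distributions P0
    #   fit_pilot(d1): returns the pilot estimate fitted on D_1
    #   relative_fit_test(d0, P0, pilot, alpha): returns True to reject P0
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    d1, d0 = x[idx[: len(x) // 2]], x[idx[len(x) // 2:]]
    pilot = fit_pilot(d1)
    return [P0 for P0 in candidates if not relative_fit_test(d0, P0, pilot, alpha)]

Any of the level-α tests constructed in the next section (based on the KL, density power, or Hellinger statistics) can be supplied as the relative-fit test; validity then holds for any choice of the pilot, while the size of the resulting set depends on how well the pilot approximates the projection.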
§.§.§ Approximate Robust Universal Confidence Sets In some cases, e.g., for the Hellinger distance and the TV distance, designing exact robust tests will require some (potentially strong) regularity conditions. However, in these cases one can design approximate tests of relative fit straightforwardly. Suppose, for any ∈, the family of approximate tests of relative fit ϕ_P_0, P_1, ν which controls the Type 1 error satisfies (<ref>) with _0 = { (P_0, P_1) ∈^2 : νρ( P_0) ≤ρ( P_1)} for some ν≥ 1. We will additionally make the mild assumption that our tests of relative fit do not reject (with probability at least 1-α) when comparing the relative fit of a distribution to itself, i.e.: sup_∈_ [ϕ_P,P,ν] ≤α for any fixed  P∈. This condition will be true for all the tests we introduce in Section <ref>. Let be any estimate of from _1. Then, the approximate robust universal confidence set, akin to (<ref>), is obtained by inverting the family of valid split tests ϕ_P_0, , ν constructed from the remaining samples _0: C_ν,α,n≡ C_ν,α(X_1,…,X_n) := { P_0 ∈: ϕ_P_0, , ν (_0) = 0 }. This confidence set may not cover the projection distribution . We will relax our goal to instead be to cover an approximate projection distribution. More formally, we relax the target of inference to be the ν-approximate projection set _ν defined as _ν = {P∈: ρ( P) ≤νρ() }. If a set C is a ν-approximate confidence set, we define its coverage by _(Q∈ C for some Q ∈_ν) = _(_ν∩ C ≠∅). Figure <ref> shows a schematic diagram to illustrate the notion of approximate coverage. When ν = 1, i.e. we invert an exact test, we guarantee that with probability at least 1 - α, the set C_ν,α,n contains . On the other hand, when ν > 1 we only guarantee that the intersection of C_ν,α,n with the collection of ν-approximate projections (in cyan) is non-empty. The set _ν is a collection of distributions that are as close to as (up to a factor ν). The approximate confidence set guarantee is most meaningful when ν is close to 1, or when the model misspecification is not too extreme, i.e. ρ() is small. Let ∈ be any estimate of based on _1. Suppose that our approximate relative fit tests are valid, and satisfy the condition in (<ref>). Then, the approximate robust universal confidence set C_ν,α,n is a uniformly valid ν-approximate confidence set for : inf_∈_ (_ν∩ C_ν,α, n∅) ≥ 1 - α. Fix any ∈. Let the event E = {∈_ν}. On the event E, (<ref>) implies _ (∉ C_ν,α,n |  E) = __1 (__0 (ϕ_, ,ν (_0)  | _1, E)  |  E) ≤α. On the complement of E, i.e., ∉_ν, _0 contains (, ). Thus, an analogous argument to that in the proof of Theorem <ref> can be used. Combining the two results, we obtain that, for all ∈, _ (_ν∩ C_ν,α, n = ∅) ≤_ (∉ C_ν,α, n |  E) (E) + _ (∉ C_ν,α, n |  E^∁) (E^∁) ≤α. As in the construction of the exact robust universal confidence set, one should aim to choose the pilot estimate as close as possible to . In the exact setting, the choice of the pilot estimate does not affect the validity of the resulting set and only affects its size. However, in constructing an approximate robust universal set, if we can ensure the pilot is accurate, then our approximate validity guarantees improve. Concretely, for some sequence κ_n we define: (κ_n) := {P∈ : ρ( P) ≤ρ() + κ_n}. If we can ensure that the pilot estimate is contained in (κ_n) with probability at least 1 - β for some sequence κ_n, then we can show that the constructed confidence set C_ν, α,n will intersect (κ_n) with high probability. 
For instance, if κ_n → 0 as n grows, then rather than simply intersecting the set of approximate projections _ν, we can now show that C_ν,α,n intersects a shrinking neighborhood around . More formally we have the following result (we omit its proof since it follows the same arguments as in Theorem <ref>): Let (κ_n_1) be defined as in (<ref>), and suppose that our pilot is accurate, i.e. we can ensure that with probability at least 1 - β, ∈(κ_n_1). Suppose further that our approximate relative fit tests are valid, and satisfy the condition in (<ref>). Then: inf_∈_((κ_n_1) ∩ C_ν,α, n∅) ≥ 1 - α - β. In this section, we have shown that inverting exact or approximate tests of relative fit yield robust exact or approximate confidence sets despite model-misspecification. We now turn our attention to the design and analysis of these tests. § DESIGNING TESTS OF RELATIVE FIT Our proposed method relies on designing valid tests of relative fit. In this section, we design exact tests of relative fit in KL and the density power divergences, and design approximate tests for the Hellinger, TV and IPM-based divergences. §.§ Kullback-Leibler Divergence To design an exact test of relative fit for the KL divergence we make a simple observation that there is a natural plug-in estimator of the difference in KL divergences. We can rewrite the difference in KL divergences as: ( P) - () = ∫logp_1/p where p and p_1 are the density of P and with respect to a common dominating measure. When we obtain samples from this suggests the following log split likelihood ratio test: ϕ_P = [ 1/n_0∑_i∈_0 T_i (P,) > t_α (P, ) ], T_i(P, ) ≡ T(X_i; P, ) = logp_1 (X_i)/p (X_i), where _0 is an index set of _0 and t_α (P, ) is chosen to ensure validity. This test was called the relative information fit test (RIFT) and studied in the work of <cit.> to study the relative goodness-of-fit of two candidate estimates. In our paper, we invert the same test in order to construct a robust universal confidence set. When the variance of T_i(P, ) (conditional on 𝒟_1) is finite, we can derive the asymptotic distribution (conditional on 𝒟_1) of the log split likelihood ratio via the CLT. Let T_n_0 (P,) = ∑_i∈_0 T_i(P, ) / n_0. Conditional on _1 and assuming that the variance _ [T(P_0, P_1)] < ∞, for any (P_0,P_1) ∈^2, √(n_0)( T_n_0 (P,) - _ T (P, ) ) ⇝(0, s_P^2 ) as n_0 →∞ where s_P^2 ≡ s_P^2 (_1) = _ [T_1^2] - _ T_1^2 can be estimated by ŝ_P^2 = 1/n_0∑_i∈_0 (T_i(P, ) - T_n_0)^2, and ⇝ denotes convergence in distribution (conditional on 𝒟_1). When assessing distributions P that are very similar to the pilot , it might be the case that s_P^2 is vanishingly small. Consequently, it is possible that s_P/s_P does not converge in probability to 1, and the CLT with estimated variance s_P^2 need not hold. Following <cit.> we modify each T_i(P,) by adding a small amount of independent Gaussian noise, i.e. we replace each T_i(P, ) above by T_i(P, ) + δ Z_i where Z_1,…,Z_n_0∼ N(0,1), for some small positive constant δ > 0 (we use δ = 0.01 but note that this has no practical effect and this modification simply eases the theoretical analysis). We denote the resulting statistic by T_n_0,δ(P, ) and the corresponding empirical standard deviation by s_P,δ. Then, we define the KL Relative Divergence Fit () set as _, n≡_α, n () = {P∈ : T_n_0, δ(P, ) ≤z_αŝ_P,δ/√(n_0)} where z_α is a 1-α quantile of standard normal distribution. The following result provides asymptotic and non-asymptotic guarantees for the set _, n. 
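Before stating that result, we note that the set is simple to compute in practice. The sketch below does so for a Gaussian location family fitted to heavy-tailed data, so that the model is misspecified; the t_3 data, the grid of candidate means, and the equal-sized split are illustrative choices made here.

import numpy as np
from scipy.stats import norm, t

def kl_rdf_set(x, candidate_means, alpha=0.1, sigma=1.0, delta=0.01, seed=0):
    # KL relative-divergence-fit set for a Gaussian location family N(mu, sigma^2):
    # keep mu whenever mean_i T_i <= z_alpha * s_hat / sqrt(n0), with
    # T_i = log p_pilot(X_i) - log p_mu(X_i) + delta * Z_i computed on D_0.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    d1, d0 = x[idx[: len(x) // 2]], x[idx[len(x) // 2:]]
    mu_pilot = d1.mean()                        # pilot fitted on D_1
    n0 = len(d0)
    z = norm.ppf(1 - alpha)
    noise = delta * rng.standard_normal(n0)     # small dithering, as described above
    keep = []
    for mu in candidate_means:
        T = norm.logpdf(d0, mu_pilot, sigma) - norm.logpdf(d0, mu, sigma) + noise
        if T.mean() <= z * T.std(ddof=1) / np.sqrt(n0):
            keep.append(mu)
    return np.array(keep)

# misspecified setting: heavy-tailed t_3 data, Gaussian working model
x = t.rvs(df=3, loc=0.3, scale=1.0, size=400, random_state=0)
print(kl_rdf_set(x, np.linspace(-0.5, 1.0, 151)))

For this location family the KL projection simply matches the mean of the sampling distribution, so the returned set should typically contain values near 0.3.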
Suppose that 𝒬 is such that for some 0 < ξ≤ 1 the 2+ξ moments M_P_0,P_1 := _ |T(X; P_0, P_1) - _T(X; P_0, P_1)|^2+ξ are finite, for any (P_0,P_1) ∈^2, then inf_∈_ (∈_, n) ≥ 1 - α - C n^-ξ/2, where C < C' (1 + sup_(P_0,P_1) ∈𝒫^2 M_P_0,P_1) /δ^(2+ξ) for a universal constant C'. We give a formal proof in Appendix <ref>. The claim follows as a consequence of the Berry-Esseen bound for the studentized statistic <cit.>. Some care is required to handle the degeneracy (discussed above) when the variance of the summands can be small and to handle the randomness in the pilot estimate . We can now revisit the failures of universal inference discussed in Section <ref>. Recall that Example <ref> illustrates the instability of the KL projection because likelihood ratios may not be bounded. The KL set does not resolve this weakness since the KL set uses the same split likelihood ratio statistic as for the universal confidence set <cit.> and its 2 + ξ moment is not uniformly bounded in Example <ref>. However, the KL set does resolve the failure highlighted in Example <ref>. Assume the same model as in Example <ref>. Suppose we take the pilot estimator to be the MLE. The KL set (<ref>), with an equal sized split into _0 and _1, covers the KL projection at the nominal level asymptotically. This result follows directly from Theorem <ref>, since in this example all of the relevant log likelihood ratios are uniformly upper bounded. It is worth noting that both the standard universal set, and the set _, n are based on essentially the same split likelihood ratio statistic, and it is perhaps surprising that the standard universal set fails but _, n succeeds in guaranteeing coverage. Despite being based on the same statistic, the two sets use very different thresholds. It is easy to see that one can rewrite the split LRT confidence set in universal inference <cit.> as: _sLRT= {P∈ : T_n_0 (P,) ≤log (1/α)/n_0}. The threshold used in (non-robust) universal inference decays at the fast rate of order O(1/n_0) compared to that of the robust universal confidence set _, n whose threshold decays at the rate O(1/√(n_0)). When the model is misspecified the (non-robust) universal set shrinks too rapidly leading to the failure highlighted in Example <ref>. The confidence set _, n is constructed by approximating the distribution of the test statistic in (<ref>). When likelihood ratios are uniformly upper bounded it is straightforward to construct finite-sample valid sets via an exponential tail bound. For example, the finite-sample exact robust universal confidence set based on the Hoeffding bound is: _HF,B,n = {P∈ : T_n_0 (P,) ≤ B√(log(1 / α)/2n_0)}, where B is such that |T_i (P_0, P_1) - 𝔼_ T(P_0,P_1)| ≤ B for all (P_0,P_1)∈^2. In this case we assume that the upper bound B is known to the statistician. One can generalize this construction in various ways. When the statistic is assumed to only have finite variance one can use Chebyshev's inequality to construct a finite-sample valid set. When in addition to boundedness the statistic might have small variance one can use empirical Bernstein-type inequalities to construct finite-sample valid confidence sets. We explore these further in Appendix <ref>. We compare the empirical performance of _, n and these finite-sample valid sets in Section <ref>. §.§ Density Power (DP) Divergences We can construct an exact test of relative fit for the family of DP divergences following the same strategy as in KL case. 
Let T_n_0(P, ) = _β (_n_0 P) - _β (_n_0) = ∫{ p^1+β - p_1^1+β}λ̣- ( 1+1/β) 1/n_0∑_i∈_0[ p^β - p_1^β] (X_i) := 1/n_0∑_i∈_0 T_i(P, ), where _n_0 is the empirical measure constructed from _0. The split statistics T_i(P, ) encode the difference in average β-powered densities (penalized with L_1+β norm) rather than the log-likelihood ratio evaluated on the sample _0 when β > 0. Then, conditional on 𝒟_1, _ T(P,) = _β ( P) - _β (). We define the DP set _,n exactly as in (<ref>), and observe that the analogue of Theorem <ref> holds (with an identical proof) for _,n. Recall that KL set was unable to resolve the instability problem in Example <ref>. This is because the likelihood ratios in this model can blow up. On the other hand the DP set relies on the statistics in (<ref>), which are bounded for any β > 0, provided the relevant densities are well-defined. Formally, we have the following result: Suppose we have the same model as in Example <ref>. For sufficiently large n, for any pilot estimator , the DP set _,B,n defined as in (<ref>) with B=1 + 1/β, with an equal sized split into _0 and _1, covers the DP projection at the nominal level. A formal proof can be found in Appendix <ref>. The key observation is that the DP projection is (0) for a sufficiently large sample size for any fixed β > 0. The DP projection in this example is more stable than the KL projection (1/2), considering that ϵ_n is much closer to 0 than 1/2. Consequently, we show that the DP set will cover the target of inference (0) with high probability. We emphasize that the MLE is also (0) with high probability, yet both universal split LRT and KL set based on the MLE fail to cover the KL projection due to the instability of the population projection distribution. §.§ Hellinger Distance The Hellinger distance (or the difference in Hellinger distances) does not lend itself to a natural plug-in estimator. The usual method of estimating the Hellinger distance proceeds instead via some type of non-parametric density estimation, which in turn requires additional smoothness assumptions. Since our goal in this paper is to design assumption-light methods, we instead relax the target of inference. This in turn opens the door for designing approximate tests of relative fit. Our strategy will be to modify the ρ-estimator[The name “ρ-estimator” comes from the standard symbol used for the Hellinger affinity.] <cit.> which is a density estimator tailored to the Hellinger loss. Define the split ρ-test statistic T_n_0 (P, ) := Δ (P, ) + 1/n_0∑_i∈_0ψ( √(p_1/p) (X_i) ),    Δ (P_0, ) = 1/√(2)[^2(P_0, P) - ^2(, P) ], where P = (P + ) / 2 and ψ: [0,∞] ↦ [-1,1] is a non-decreasing Lipschitz function satisfying ψ (x) = - ψ (1/x). The choice of ψ we adopt throughout this paper, is to take ψ(u) = (u-1)/√(1+u^2) which comes from work on the ρ-estimator <cit.>. The function ψ is a bounded transformation of the likelihood ratio, and due to this boundedness the split ρ-test statistic is tightly concentrated around its expectation. The following proposition, which follows directly from Proposition 11 of <cit.>, characterizes the expectation of the split ρ-statistic. For any P^*, P_0, P_1, (2 + √(2)) _T_n_0 (P_0,P_1) ≤(3 + 2√(2)) ^2 (, P_0) - ^2 (, P_1). This proposition ensures that _T_n_0(P_0, P_1) is negative for any ∈ when the null hypothesis H_0 : (3+2√(2)) ^2 (, P_0) ≤^2 (, P_1) is true. 
This proposition in turn suggests that T_n_0(P_0, ) could be a useful statistic for designing an approximate test of relative fit in the Hellinger distance with ν = √(3+2√(2)). We define the Hellinger Relative Distance fit () set _,n exactly analogous to the KL set (<ref>) (obtained from a δ-corrupted version of the statistics T_n_0(P, )). The following result follows by combining Theorems <ref> and <ref>, and noticing that the split statistic is uniformly upper bounded. Let ν = √(3 + 2√(2)). For any 𝒬, inf_∈_ (_ν∩_, n∅) ≥ 1 - α - C/√(n), where C < C'/δ^3 (for a universal constant C'). We are now in a position to revisit Example <ref>. In Proposition <ref>, we showed that changing the target of inference to DP projection could address the failure of universal inference. In a similar vein, targeting the Hellinger projection resolves the failure, but interpreting the resulting guarantee requires some nuance as set may not cover the exact Hellinger projection, and is only guaranteed to cover a ν-approximate projection. In the case of Example <ref>, it will turn out for sufficiently small values ϵ the ν-approximate Hellinger projection set is a singleton (and equal to the exact Hellinger projection). As highlighted earlier, when the amount of model-misspecification is not too large the distinction between the ν-approximate projection set and the exact projection can be small. Assume the same model as in Example <ref>. Suppose we take the pilot estimator to be the Minimum Hellinger Distance estimator <cit.>, = _P ∈ (_n_1 P). For sufficiently large n (> 20), the Hellinger set _,n, with an equal sized split into _0 and _1, covers the Hellinger projection ≡(0) at the nominal level asymptotically. A formal proof is provided in Appendix <ref>. It will turn out that in this example the ν-approximate Hellinger projection is exactly the Hellinger projection when ϵ≤ 0.05, and is the entire model , otherwise. This means that for larger values of ϵ, approximate validity is trivial, yet vacuous, as the target of inference can be any distribution in . This highlights the downside of targeting the ν-approximate projection set: when the model-misspecification is severe the resulting guarantees might be vacuous. §.§ Integral Probability Metrics (IPMs) Our proposal for a ν-approximate test of relative fit for IPMs is inspired by the work of <cit.> and <cit.>, where a similar idea was used to design robust density estimates. Recall the definition of the IPM, _(P_0, P_1) = sup_f ∈( _P_0 (f) - _P_1 (f) ). Associated with any pair of distributions is a so-called witness function f^*_(P,Q) = sup_f ∈ ( _P (f) - _Q (f) ), which witnesses the largest mean discrepancy between the two distributions. The split test statistic is then defined by: T_n_0 (P, ) = ∫ f^*_(P, )P̣ + /2 - 1/n_0∑_i ∈_0 f^*_(P, ) (X_i). The usefulness of this statistic is highlighted by the following characterization of the expectation of the statistic. For any P^*, P_0, P_1, 2 _ T (P_0,P_1) ≤ 3 (, P_0) - (, P_1). See Appendix <ref> for a formal proof. For the TV IPM this result appears in the work of <cit.> and <cit.>, and our result generalizes their argument to other IPMs. Proposition <ref> ensures that _ T(P,Q) is negative for all ∈ under the null hypothesis in (<ref>) with ν=3. We can construct _ by inverting the IPM approximate relative fit test, to obtain an identical guarantee to the one in Corollary <ref> (now with ν = 3). 
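To make the generic statistic above concrete, the following is a minimal sketch for the total variation IPM, whose explicit witness function is discussed in the next subsection. The candidate and pilot densities, as well as the routines expect_P and expect_pilot used to compute expectations under P and the pilot (for instance by quadrature or a large Monte Carlo sample), are assumptions supplied by the user rather than part of the method itself.

import numpy as np

def tv_witness(p_density, pilot_density):
    """Witness for the TV IPM: f*(x) = 1{p(x) > p_1(x)} - 1/2 (vectorized)."""
    def f(x):
        x = np.asarray(x, dtype=float)
        return np.asarray(p_density(x) > pilot_density(x), dtype=float) - 0.5
    return f

def ipm_split_statistic(x0, witness, expect_P, expect_pilot):
    """T_{n_0}(P, pilot) = E_{(P + pilot)/2}[f*] - (1/n_0) sum_{i in D_0} f*(X_i).

    expect_P(f) and expect_pilot(f) are assumed to return E_P[f(X)] and
    E_{pilot}[f(X)] (e.g. by quadrature or a large Monte Carlo sample).
    """
    mixture_mean = 0.5 * (expect_P(witness) + expect_pilot(witness))
    return mixture_mean - np.mean(witness(np.asarray(x0, dtype=float)))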
To further illustrate the construction of IPM approximate relative fit tests we consider three widely used IPMs—total variation distance, Wasserstein distance, and maximum mean discrepancy—where the witness functions are more explicit. Total Variation Distance. Suppose ρ(P_0 P_1) = (P_0, P_1) where is the total variation distance. This is an IPM over the function class = {f : f≤ 1}. An equivalent definition is (P_0, P_1) = sup_A | P_0 (A) - P_1(A) | = P_0 () - P_1 () where = {p_0 > p_1} is the Yatracos set with maximal discrepancy between P_0 and P_1. The witness function is f^*_(P_0, P_1) (x) = (x ∈) - 1/2. An immediate advantage of targeting the TV projection comes from that f^* is uniformly bounded. Given samples , consider the following test statistic which referred to as the split Scheffé statistic: T_n_0 (P,) = P () + ()/2 - _n_0(), _n_0 () = 1/n_0∑_i∈_0 (X_i ∈) where is redefined to be = {p > p_1}. The split Scheffé statistic, as the name suggests, is a sample-split analogue of the Scheffé estimate that was originally proposed in <cit.> building upon the work of <cit.>. Wasserstein Distance. Suppose ρ(P_0 P_1) = _1 (P_0, P_1) is the 1-Wasserstein distance (or Kantorovich metric). The associated function class is = {f: Lf≤ 1 } where Lf := sup{ |f(x) - f(y) | / x - y : x y } is the Lipschitz semi-norm. Although the ideas are much more general, we limit our discussion to univariate distributions on a compact support, i.e, = [0,b]. In this case, the witness function is explicit and easy to describe <cit.>. Define t; P_0, P_1 = ( F_P_1(t) > F_P_0 (t) ) - ( F_P_0 (t) > F_P_1 (t) ) ∈{0, ± 1 }, where F_P denotes the CDF of P. The witness function is f^*_(P_0, P_1) (x) = ∫_0^xt; P_0, P_1ṭ <cit.>. A direct application of the split statistic (<ref>) yields T_n_0 (P,) = 1/2∫t; P, ( _n_0 (t) - F_P (t) + F_ (t)/2) ṭ, where _n_0 (t) = 1/n_0∑_i∈_0(X_i ≤ t) is the empirical distribution. This particular split statistic is a sample-split analogue of the ℓ-estimator <cit.>. Maximum Mean Discrepancy. Suppose that is a unit ball of the reproducing kernel Hilbert space (RKHS) , with kernel k(x,y), and RKHS norm ·_ℋ, i.e., = {f: f≤ 1}. Then the corresponding IPM (<ref>) is called the Maximum Mean Discrepancy <cit.>. It was shown by <cit.> that the analytic witness function f^*_(P, ) = μ_P - μ_/μ_P - μ_ where μ_P(·) := 𝔼_P [k(X,·)] is the mean embedding of P. The split statistic T_n_0 (P, ) in this case reduces to an average of the (negative) witness function - _n_0 (f^*_(P, ) ) if the kernel k(·,·) is symmetric. In this case, the sign of the split statistic captures, in expectation, whether the population is closer to P or based on mean embeddings. §.§ Unified Sufficient Conditions for any Divergence Measure In this section we unify some of the treatment of the previous sections by giving conditions on split test statistics which ensure the exact and approximate validity of the resulting confidence sets. Given data , we consider tests of the form: ϕ_P_0, P_1, ν = ( T_n(P_0,P_1) > t_α(P_0,P_1)). We assume that the test statistic satisfies the following two additional conditions: T is anti-symmetric, i.e., T(X; P_0, P_1) = - T(X; P_1, P_0) for all P_0, P_1 ∈. There exists some fixed, positive numbers ν, c_1 ≥ 1 such that for all ∈, and any fixed P_0, P_1 ∈, c_1 _ T (; P_0, P_1) ≤νρ ( P_0) - ρ ( P_1). Assumption <ref> ensures that _ T (; P_0, P_1) is always negative for all ∈ when the null hypothesis (<ref>) is true. 
For instance, Propositions <ref> and <ref> establish the analogue of Assumption <ref> for Hellinger and IPM projection, respectively. Now, we may define ρ-set _ρ,n as in KL set (<ref>) by inverting the test based on (a δ corrupted version of) the statistic T: _ρ, n := {P∈ : T_n_0, δ(P, ) ≤z_αŝ_P,δ/√(n_0)} If the test statistic is bounded, i.e. T(X;P_0,P_1) ≤ B for any pair of distributions P_0,P_1 ∈𝒫^2 then we can define the finite-sample ρ-set as in (<ref>): _ρ,B,n = {P∈ : T_n_0 (P, ) ≤ B√(log(1 / α)/2n_0)} The following general result holds: Suppose that the test statistic satisfies Assumptions <ref> and <ref>. * Suppose that 𝒬 is such that for some 0 < ξ≤ 1 the 2+ξ moments M_P,Q := _ |T(X; P, Q) - _T(X; P, Q)|^2+ξ are finite, for any (P,Q) ∈^2, then inf_∈_ (∈_ρ, n) ≥ 1 - α - C n^-ξ/2, where C < C' (1 + sup_P,Q M_P,Q) /δ^(2+ξ) for a universal constant C'. * Suppose that T(X; P,Q) ≤ B, then: inf_∈_ (_ν∩_ρ,B, n∅) ≥ 1 - α. The proof of the validity claims follow the same structure as the proof of Theorem <ref>. The crucial Assumption <ref> distills out the key property of the test statistics that is useful in ensuring asymptotic or finite-sample validity. With these general validity results in place, we now turn our attention to studying the size of the resulting robust universal sets. § SIZE OF ROBUST UNIVERSAL CONFIDENCE SETS In the well-specified setting, for statistical models which satisfy classical regularity conditions, <cit.> showed that the Hellinger diameter of the split LRT confidence set depends on two factors: the size of determined by its (local) Hellinger bracketing entropy, and the closeness of to in the Hellinger distance. In a similar vein, in this section we show that the size of the universal sets, under certain regularity conditions, can be upper bounded by two factors: roughly, measuring the quality of the pilot estimate, and the size of statistical model. In the misspecified setting, we would like the robust universal set to shrink around its target at a fast rate. To measure the (directed) divergence between two sets measured in a divergence ρ and with respect to outside of , we define the ρ_^-divergence motivated by the directed Hausdorff distance. For a given divergence ρ and a collection of distributions S_1 ⊂, we define an ϵ-fattening of S_1 by: S_1 ⊕ϵ := ∪_Q ∈ S_1{P ∈ : ρ ( P) ≤ρ ( Q) + ϵ}. Now given two collections of distributions S_0, S_1 ⊂, we define the ρ_^-divergence by ρ^_ (S_0, S_1) = inf{ϵ≥ 0 : S_0 ⊆ S_1 ⊕ϵ}. ρ^_ (S_0, S_1) is the minimum ϵ-fattening of S_1 with reference to which contains S_0. To express the rate at which the robust universal sets shrink, we use the Rademacher complexity of ℱ_T, 𝒫, a function class which depends on the test statistic of choice, and the statistical model 𝒫. Concretely, we define, ℱ_T, 𝒫 := {f: f(x) := T(x; P,Q),  P,Q ∈𝒫}. We denote the Rademacher complexity of this class by ℜ_n(ℱ_T, 𝒫): ℜ_n(ℱ_T, 𝒫) := 𝔼[ sup_f ∈ℱ_T, 𝒫1/n∑_i=1^n R_i f(X_i)], where R_i are i.i.d. Rademacher random variables. In some of the cases we have considered in this paper, under additional regularity conditions the complexity measure ℜ_n(ℱ_T, 𝒫), can be related to a complexity measure of the underlying model 𝒫 using a standard contraction argument <cit.>: Suppose that , and the pilot estimate are distributions supported on some compact set 𝒞, with density with respect to the Lebesgue measure which are upper and lower bounded by constants. Then, for the test statistics introduced in Sections <ref>,<ref> and <ref>, ℜ_n(ℱ_T, 𝒫) ≲ℜ_n(𝒫). 
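As an aside, the quantity ℜ_n(ℱ_T,𝒫) defined above can be approximated numerically for a finite discretization of the model by straightforward Monte Carlo; the list of vectorized statistic functions in the sketch below is a hypothetical stand-in for such a discretization.

import numpy as np

def empirical_rademacher_complexity(x, statistic_fns, n_draws=2000, rng=None):
    """Monte Carlo estimate of E[ sup_f (1/n) sum_i R_i f(X_i) ] over a
    *finite* collection of functions f(x) = T(x; P, Q).

    statistic_fns is a list of vectorized callables, one per candidate pair
    (P, Q) in a finite discretization of the model; it is a hypothetical
    stand-in for the class F_{T, P}."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x)
    n = len(x)
    values = np.stack([f(x) for f in statistic_fns])     # shape (|F|, n)
    sups = np.empty(n_draws)
    for b in range(n_draws):
        signs = rng.choice([-1.0, 1.0], size=n)          # Rademacher signs
        sups[b] = np.max(values @ signs) / n             # sup over the class
    return sups.mean()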
Finally, to characterize the quality of the pilot estimator , we say that the is an η_n-consistent estimator if ρ () - ρ() = O_ (η_n), where we use the standard big O in probability notation to indicate stochastic boundedness. With these preliminaries in place, we have the following result for the size of the ρ-set obtained by inverting a finite-sample valid relative fit test. The proof will be given in Appendix <ref>. Suppose that (<ref>) holds and sup_(P, Q)∈^2 |T(P, Q) - 𝔼 T(P,Q)| ≤ B. Fix any projection distribution , and recall the collection _ν in (<ref>). Then the robust universal confidence set _ρ,B,n in (<ref>), for an equal sized split into 𝒟_0 and 𝒟_1, satisfies for any ∈, ρ_^( _ρ,B,n, _ν) ≤ O_( η_n + ℜ_n(ℱ_T, 𝒫) + B√(log(1/α)/n)). Theorem <ref> states that the directed ρ_^-divergence between the exact robust universal confidence set and its target shrinks to zero at the prescribed rate, since _ν is a singleton {} when ν = 1. One can no longer show such a result for the ν-approximate robust universal confidence set even with an infinite number of observations. This is because, conditional on _1, the split test ϕ_P, , ν is guaranteed to achieve (exponentially) small Type 2 error uniformly over ∈ only for distributions P which are at least νρ() away from . Nevertheless, Theorem <ref> characterizes the rate at which _ρ,B,n shrinks to _ν. Theorem <ref> also shows how the size of the set depends on the choice of . When possible we should choose a pilot estimate which converges to the target at a fast rate to ensure that the term η_n is sufficiently small. A sensible choice is often a minimum distance estimator <cit.> which is not only a consistent estimator of under some regularity conditions but is also robust to some misspecification in its corresponding distance <cit.>. § SIMULATIONS In this section, we evaluate our proposed exact and approximate robust universal confidence sets in two particular setups—Overdispersion and Contamination—and demonstrate the advantages of the methods we propose. §.§ Overdispersion Overdispersion is a classic example of model misspecification where the true distribution has larger variance than what can be represented by the hypothesized model. Specifically, consider a case of count data generated from the negative binomial distribution with mean 𝔼_ (X):= θ^* and variance 𝕍_ (X) = κθ^* where the positive constant κ represents the dispersion ratio. Suppose a statistician hypothesized a Poisson model 𝒫_Θ = {Poi(θ) : θ∈ℝ_+} to best describe . Since the mean and the variance are the same for the Poisson distribution (implicitly assuming κ=1), the dispersion ratio κ captures the severity of the model misspecification. Figure <ref> shows ρ (Poi(θ)) with ρ = , , across the dispersion ratio. Notice that KL projection is the true mean θ^* (= 10) regardless of the dispersion ratio whereas Hellinger and TV projection gets smaller as the true variance is more inflated. The split LRT is sensitive to the misspecification. As highlighted in Section <ref>, the split LRT confidence set (_sLRT) may fail to cover the KL projection unlike the KL set (_) even with the same choice of θ_1 and the same log split likelihood-ratio statistic. Figure <ref> contrasts the performance of _sLRT and _ based on 1000 replicates of 200 simulated observations. In computing the confidence sets, the observations are equally split in half and we choose θ_1 to be the sample mean (which is the MLE) of the first half samples. 
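To make this simulation setup concrete, the following is a minimal sketch of the overdispersion experiment; the negative-binomial parameterization, the grid of candidate Poisson means, and the constants are illustrative choices and need not match the exact settings used to produce the reported figures.

import numpy as np
from scipy import stats

def nb_sample(n, mean, kappa, rng):
    # Negative binomial with E[X] = mean and Var[X] = kappa * mean (kappa > 1):
    # numpy's (r, p) parameterization has mean r(1-p)/p, so p = 1/kappa and
    # r = mean * p / (1 - p).
    p = 1.0 / kappa
    r = mean * p / (1.0 - p)
    return rng.negative_binomial(r, p, size=n)

def kl_and_slrt_sets(x, theta_grid, alpha=0.05, delta=0.01, rng=None):
    """Return (RDF_KL set, split-LRT set) over a grid of Poisson means."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x)
    d0, d1 = x[: len(x) // 2], x[len(x) // 2:]
    theta1 = d1.mean()                                  # pilot = MLE on D_1
    z = stats.norm.ppf(1 - alpha)
    noise = delta * rng.standard_normal(len(d0))        # delta-corruption
    rdf_kl, slrt = [], []
    for theta in theta_grid:
        T = stats.poisson.logpmf(d0, theta1) - stats.poisson.logpmf(d0, theta)
        Td = T + noise
        if Td.mean() <= z * Td.std(ddof=1) / np.sqrt(len(d0)):
            rdf_kl.append(theta)                        # kept by the RDF_KL set
        if T.mean() <= np.log(1.0 / alpha) / len(d0):
            slrt.append(theta)                          # kept by the split LRT set
    return np.array(rdf_kl), np.array(slrt)

rng = np.random.default_rng(0)
x = nb_sample(200, mean=10.0, kappa=5.0, rng=rng)       # overdispersed counts
grid = np.linspace(5.0, 15.0, 201)
rdf_kl, slrt = kl_and_slrt_sets(x, grid, rng=rng)
print("theta* = 10 in RDF_KL set:", bool(np.any(np.isclose(rdf_kl, 10.0))))
print("theta* = 10 in split LRT set:", bool(np.any(np.isclose(slrt, 10.0))))

Under settings of this kind one would expect the split-LRT set to under-cover the KL projection as the dispersion grows, while the RDF_KL set retains coverage, in line with the behavior described next.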
As the misspecification gets more severe (larger κ), the empirical coverage of KL projection parameter (θ̃) decreases for _sLRT. When the dispersion ratio becomes larger than 3, _sLRT fails to achieve the nominal 95% coverage whereas _ maintains the validity regardless of how severe the misspecification is. Both the center and the right panel depict the size of the estimated confidence set varying over the dispersion ratio but from a different perspective. The former is based on the maximal excess KL divergence from the KL projection (which can be at most twice the KL-diameter of the set) whereas the latter is based on the L_2 distance over the parameter space. It is not surprising that compared to _, _sLRT is smaller in the L_2 sense and is closer to in an excess divergence sense. Beyond KL projection Unlike the KL projection, the Hellinger and TV projections are different for different degrees of overdispersion. Our target of inference regarding Hellinger and TV distance is ν-approximate projection rather than the projection as seen in the left panel of Figure <ref>. When the factor κ≥ 6 the ν-approximate target for both Hellinger and TV distance includes any θ∈ℝ_+. For values of dispersion ratio κ≥ 6, the ν-approximate projection for both the Hellinger and TV distances becomes and thus the approximate coverages are trivially 100%. Once again this highlights that the approximate projection is a meaningful target only when the model misspecification is not too severe. Figure <ref> summarizes the performance of approximate sets regarding Hellinger (_) and TV distance (_) based on 1000 replicates of 200 simulated observations. We choose the minimum distance estimator for θ_1 for both _ and _. Both _ and _ yield 100% empirical coverage—defined as a proportion of the confidence set that intersects _ν—across all dispersion ratios except almost well-specified case (0.01% dispersion) with 97.4% and 99.1% coverage, respectively. This conservatism is expected because for these divergences we have relaxed our target of inference to be the set of ν-approximate projections. Nevertheless, this does not mean that the Hellinger and TV sets are vacuously large. The center and right panel of Figure <ref> show the diameter of the set in Hellinger or TV distance sense, or Euclidean sense. The size of the set increases as the misspecification exacerbates regardless of distance measure. In general, _ is larger than _. _ behaves closer to _ under slight to moderate overdispersion and to _ as the overdispersion becomes severe. Comparison between asymptotic and finite sample valid sets Figure <ref> compares the various TV set when the is a 32% variance inflated negative binomial—Berry-Esseen (_), Hoeffding bound (_HF), empirical Bernstein bound <cit.>, and empirical Bentkus bound <cit.>. See Appendix <ref> for explicit forms of each confidence set. In all cases, we choose the same minimum TV distance estimator θ_1. The KL set dominates all finite sample valid confidence sets considered in this section, despite its validity relying on asymptotics. The finite sample valid sets are too conservative (and yield a meaningless set =) when only a few observations are available (n ≤ 50). Although our paper does not primarily focus on obtaining the tightest finite-sample valid confidence set, leveraging the variance _(X) can often be beneficial when constructing the confidence set. In this example, _EBS and _EBK outperform _HF since the Bernstein and Bentkus bounds are more sensitive to the variance. 
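The contamination study in the next subsection relies on DP sets with a Gaussian working model. For reference, the following is a minimal sketch of the per-observation DP split statistics for a univariate Gaussian family; the pilot parameters (for example, from a minimum DP divergence estimate) are assumed to be given, and the only analytic ingredient is the standard Gaussian power integral stated in the docstring.

import numpy as np

def gaussian_dp_split_stats(x0, mu, sigma, mu1, sigma1, beta=0.5):
    """Per-observation DP split statistics T_i(P, pilot) for P = N(mu, sigma^2)
    versus pilot = N(mu1, sigma1^2):

        T_i = int (p^{1+b} - p1^{1+b}) dx
              - (1 + 1/b) * ( p^b(X_i) - p1^b(X_i) ),   b = beta > 0,

    using the standard identity int N(x; m, s^2)^{1+b} dx
    = (1+b)^{-1/2} (2*pi*s^2)^{-b/2}."""
    x0 = np.asarray(x0, dtype=float)

    def density(x, m, s):
        return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

    def power_integral(s, b):
        return (1.0 + b) ** -0.5 * (2.0 * np.pi * s ** 2) ** (-b / 2.0)

    integral_term = power_integral(sigma, beta) - power_integral(sigma1, beta)
    data_term = (1.0 + 1.0 / beta) * (density(x0, mu, sigma) ** beta
                                      - density(x0, mu1, sigma1) ** beta)
    return integral_term - data_term

The average of these statistics over 𝒟_0 can then be compared against either the CLT-based threshold or, since the statistics are bounded whenever the standard deviations are bounded away from zero, a finite-sample threshold of the Hoeffding type discussed above.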
§.§ Contamination Consider the following contaminated data generating distributions which are mixtures of Gaussians. This simulation setup is used in the work of <cit.>. _1 = 0.99 N(0, 1) + 0.01 N(0, 30^2) (Symmetric) _2 = 0.94 N(0, 1) + 0.01 N(20, 20^2) + 0.05 N(-30, 20^2) (Asymmetric) _3 = 0.7 N(2, 1) + 0.2 N(-2, 1) + 0.1 N(0, 30^2) (Heavily Asymmetric) For each case, we denote _ to be an uncontaminated distribution that does not include the outlying noise distributions. Consider a location-scale family of Gaussian distribution 𝒫_Θ = {N(μ, σ^2 ) : (μ, σ)∈Θ} as a working model. (See Appendix <ref> for additional simulations for a location family with fixed scale.) Our goal is to evaluate the empirical performance—coverage and size—of robust universal confidence sets for the (approximate) projection of the various contaminated distributions onto 𝒫. Figure <ref> shows the mean and standard deviation of the projection distribution with respect to the KL, DP, Hellinger and TV distances along with the mean and standard deviation of the contaminated and uncontaminated distributions. The KL projection parameter is the same as the parameters of contaminated distribution in all cases. The DP projection parameters, get closer to uncontaminated parameters as the β parameter increases. The Hellinger projection is the closest to the uncontaminated parameters among all projections we considered, however, the size of _ν is much larger than that of approximate TV projection. The set _ν for both Hellinger and TV distance is quite large for the heavily misspecified case (Case 3). Practically, we recommend targeting DP projection with a reasonable choice of β (> 0.05) for this heavily misspecified case. Figure <ref> illustrates the empirical coverage and size of split LRT and ( and ) sets based on 1000 replications. For split LRT and KL sets, we choose θ̂_1 to be the quasi-MLE, whereas, for the DP set, we use the minimum DP divergence estimator. The split LRT fails to cover KL projection in all cases whereas sets achieve the nominal coverage with large enough sample sizes. The DP sets show superior coverage than KL set across all sample sizes. Such a target coverage improvement is more evident in the smaller sample sizes below 200, and as β gets larger, i.e., the DP set targets a more stable projection. Regardless of what divergence measure ρ is of interest, the size of the confidence set with reference to ρ shrinks to zero as the sample size increases. Again, the extremal values of _^ (_, ) for sample sizes below 500 highlight the instability of KL projection. Figure <ref> shows the maximal ρ-distance of and set from based on 1000 replications along with the ρ(_ν), a set of ρ-distance from to approximate projection _ν. ρ(_ν) illustrates the same phenomena as in Figure <ref> but with respect to each distance. Theoretically, we can only claim the shrinkage of set up to _ν. This can be seen in Figure <ref> for both Hellinger and TV set as the maximum excess distance from reaches νρ(_ν) with large enough samples. sets shrink beyond _ν in this example: the Hellinger set converges to a set quite close to with large enough sample size, while the TV set converges to a set around which does not appear to shrink with sample size. § DISCUSSION In this paper, we presented a general method for constructing uniformly valid exact and approximate confidence sets for various projection distributions under weak regularity conditions in the presence of possible model misspecification. 
We demonstrated that the universal inference procedure <cit.> can fail catastrophically even in simple examples, under fairly benign model-misspecification. We then showed that the robust universal inference framework can address these failures, providing methods which are robust and can meaningfully target different projection distributions. Despite data splitting playing an essential role in constructing an assumption-light universal confidence set, it also poses inefficiency and algorithmic randomness since only a random subset of observation is used in constructing the split statistics. This can be partially addressed with crossfitting where we average the split statistic with that after swapping the role of _0 and _1. In contrast to the well-specified setting where the validity of the crossfit set is immediate, more care is needed under model-misspecification. We investigate the validity of the crossfit set in Appendix <ref>. The papers <cit.> study many variants of universal inference (including constructing confidence sequences instead of confidence sets, to combining multiple sample-splits) and investigating these variants in the context of the robust universal inference framework of this paper would be interesting. Finally, our paper brings to the forefront the role of pairwise tests of fit (and relative fit) together with sample-splitting, in designing broadly applicable inference methods. We expect this basic insight to have further implications in other contexts, for instance, in designing universal inference procedures in other settings where likelihood-based methods are inapplicable. § ACKNOWLEDGEMENTS This work was partially supported by funding from the NSF grants DMS-1713003, DMS-2113684 and CIF-1763734, as well as an Amazon AI and a Google Research Scholar Award to SB. The authors are grateful to Arun Kuchibhotla, Aaditya Ramdas and Ian Waudby-Smith for helpful discussions regarding finite-sample valid confidence sets. plainnat § PROOFS FROM SECTION <REF> §.§ Example <ref> Note that the KL projection = ( 1/2 ). Consider the event E where all of the observed samples X_1,…,X_n are 0. We can see that, _(E) = 1 - _( ∑_i=1^n X_i > 0 ) ≥ 1 - _[ ∑_i=1^n X_i ] = 1 - n ϵ_n. Now, on the event E, it is clear that the MLE = (0). Let us denote the split-sample universal set by C_α(X_1,…,X_n), where we assume for simplicity that 𝒟_0 and 𝒟_1 each have n/2 samples. We then have, _(∉ C_α(X_1,…,X_n)| E) = _(ℒ_0()/ℒ_0() ≤α | E) = _(1/2^n/2≤α | E) = 1, for n ≥ 2 log_2(1/α). As a consequence, we can upper bound the coverage of the universal set by, _(∉ C_α(X_1,…,X_n)) ≥_(E) _(∉ C_α | E) ≥ 1 - n ϵ_n. Thus, we see that if 0 < ϵ_n ≤β/n for some β > 0, and n ≥ 2 log_2(1/α) then the universal set has coverage at most β. Choosing β < (1 - α) we see that the universal set fails to have its advertised coverage. §.§ Example <ref> The KL projection is ( 3/4 ). For simplicity we suppose that n is even, and that 𝒟_0 consists of the first n/2 samples and 𝒟_1 consists of the remaining samples. For a constant β > 0, let us consider the events E_0, E_1 defined as, E_0 = ( ∑_i=1^n/2 X_i < n/4 - β√(n)) E_1 =( ∑_i=n/2^n X_i < n/4 ). When events E_0 and E_1 hold we can see that the universal set C_α(X_1,…,X_n) fails to cover . In more detail, on the event E_1 the MLE, is (1/4) and thus, _(∉ C_α(X_1,…,X_n) | E_0, E_1) = _(ℒ_0()/ℒ_0() ≤α | E_0, E_1) ≤_(1/3^2β√(n)≤α) = 1, provided that n ≥ (log_3(1/α))^2/(4β^2). Thus, it suffices to show that E_0 and E_1 happen with sufficiently large probability. 
Using the fact that the Total Variation distance between the n-fold product measures, ((1/2)^n, (1/2 + ϵ_n)^n) ≤ n ϵ_n, we can reason instead about the probability of the events E_0 and E_1 when drawing samples from (1/2), and account for the difference using the Total Variation. Combining this fact with the standard Berry-Esseen bound applied to Bernoulli sums, together with some simple algebraic manipulations, we obtain that for some universal constant C > 0, P(E_0 ∪ E_1) ≥ P(Z < 0) × P(Z < -2√(2)β) - 2 C/√(n) - n ϵ_n. Thus, choosing ϵ_n ≪ 1/n, and β to be a sufficiently small constant, when n is sufficiently large, we obtain that, P(E_0 ∪ E_1) ≥1/8, and thus that, P(∉ C_α(X_1,…,X_n)) ≥ 1/8. § PROOFS FROM SECTION <REF> In this section, we formally verify the claim that the universal set typically includes both the pilot and the projection distribution. We first define the ρ-diameter of the set C as _ρ (C) = sup_P_a, P_b ∈ Cρ(P_a P_b). Let ∈ be any estimate of based on _1. Then, the exact robust universal confidence set C_α,n defined in (<ref>) has diameter at least ρ() with -probability at least 1 - 2α: inf_∈_(_ρ (C_α,n) ≥ρ() ) ≥ 1 - 2 α. Following a similar argument to that in the proof of Theorem <ref>, notice that for any ∈, _ (∉ C_α,n | _1) ≤α. Together with a union bound, we obtain that both and are included in the set C_α,n with -probability at least 1- 2α (conditionally on _1), and on this event, the diameter of the set is at least ρ() in expectation. § PROOFS FROM SECTION <REF> §.§ Proof of Theorem <ref> We first work conditional on the sample 𝒟_1 used to construct the pilot estimate . Let us define M_P,δ,ξ := 𝔼_ [|T_i(P,) + Z_i δ - 𝔼_ T(P,)|^2+ξ | 𝒟_1]. Due to the added Gaussian noise, the variance M_P,δ,0 is always strictly positive (i.e., larger than δ^2). By Minkowski's inequality, conditional on _1, we have M_P,δ,ξ≤[ (_| T_i (P, ) - _ T_i (P, )|^2+ξ | _1)^1/2+ξ + δ( |Z|^2+ξ)^1/2+ξ]^2+ξ. This means that for assumed , there exists a universal constant C_M such that (conditionally on _1) the 2+ξ moment of corrupted statistic T_i (P, ) + δ Z_i is uniformly bounded by C_M for all P∈. Conditionally on _1, the generalized Berry-Esseen bound for the studentized statistic <cit.> yields that, for a universal constant C', sup_t| ℙ_(√(n_0)( T_n_0,δ (P,) - _ T (P, ) )/s_P,δ≥ t  | 𝒟_1) - P(Z ≥ t)| ≤C' M_P,δ,ξ/n_0^ξ/2δ^2+ξ≤ C n_0^-ξ/2, where C = C' C_M δ^-(2+ξ). This holds in particular for ∈𝒫. Consequently, we see that, inf_∈_ (∈_, n) = inf_∈𝔼_ [ _ (∈_, n | 𝒟_1)] ≥ 1 - sup_∈𝔼_ [ _ (∉_, n | 𝒟_1)] ≥ 1 - [α - C n^-ξ /2], as claimed. §.§ Proof of Proposition <ref> Recall that X_i iid∼(ϵ_n) for ϵ_n ≤ (1-α)/n and our hypothesized model is ={(p): p∈{0, 1/2}}. For a fixed β >0, _β((ϵ_n) (p) ) = C + (p^1+β + (1-p)^1+β) - (1 + 1/β) [ϵ_n p^β + (1-ϵ_n) (1-p)^β] where C = ∑_x∈0,1ϵ_n^(1+β)x (1-ϵ_n)^(1+β)(1-x). The DP divergences from to the elements of the working model are _β((ϵ_n) (0) ) ∝ 1 - (1 + 1/β) (1-ϵ_n) = (1 + 1/β) ϵ_n - 1/β _β((ϵ_n) (1/2) ) ∝ - (1/2)^β / β. Therefore, the DP projection is = (0), if ϵ_n ≤ (1 -(1/2)^β) / (1 + β), (1/2), otherwise. Since ϵ_n < (1-α)/n, the projection will be (0) for any β >0, provided that n ≥ (1-α) (1 + β) / (1- (1/2)^β). Now we turn our attention to constructing the DP set. For any fixed (P,Q) ∈^2, the split statistic is uniformly bounded, i.e., |T_i (P,Q) - _ T (P,Q)| ≤ 1 + 1/β since T_i (P, Q) = ∑_x∈{0,1}[ (p^x (1-p)^1-x)^1+β - (q^x (1-q)^x)^1+β] - ( 1+1/β) [ (p^X_i (1-p)^1-X_i)^β - q^X_i (1-q)^1-X_i] (X_i). 
By Hoeffding's inequality, _,1+1/β,n ensures nominal coverage for any estimator , since we have that: _(∉_,1+1/β,n) = _(_( T_n_0 (,) > β + 1/β√(log(1/α)/2 n_0) | _1 )) ≤α. §.§ Proof of Proposition <ref> Note that Hellinger projection is (0) for n>6 (as long as ϵ_n < 0.146) since ^2((ϵ_n), (0)) = 1 - √(1 - ϵ_n), ^2((ϵ_n), (1/2)) = 1 - √(ϵ_n / 2) - √((1-ϵ_n) / 2). Similarly, ν-approximate Hellinger projection is (0) if ϵ_n < 0.051 or otherwise. Hereafter we only consider n > 20 where _ν =. The minimum Hellinger Distance Estimator (MHDE) is _p∈{0, 1/2}^2 (_n_1, (p)) = _p∈{0, 1/2}√(p X_n_1) + √((1-p) (1 - X_n_1)) where X_n_1 = ∑_i∈_1 X_i / n_1. Thus, = (0), X_n_1 < 0.5-1/(2√(2)) ≈ 0.146 (1/2), Otherwise. This implies that the advertised coverage is guaranteed when X_n_1 < 0.146. Otherwise, Corollary <ref> ensures the asymptotic (approximate) validity. §.§ Proof of Proposition <ref> The proof follows directly by the triangle inequality. 2 _ T (P_0, P_1) = _P_0 f^*_(P_0, P_1) + _P_1 f^*_(P_0, P_1) - 2 _ f^*_(P_0, P_1) = 2 [ _ f^*_(P_0, P_1) - _ f^*_(P_0, P_1)] - _P f^*_(P_0, P_1) - _P_1 f^*_(P_0, P_1) = 2 [ _ f^*_(P_1, P_0) - _P f^*_(P_1, P_0)] - _ (P_0, P_1) ≤ 2 _ (, P_0) - _ (P_0, P_1) ≤ 2 _ (, P_0) - [_(, P_1) - _(, P_0)] (by the triangle inequality) = 3 _(, P_0) - _(, P_1) § PROOFS FROM SECTION <REF> §.§ Proof of Theorem <ref> Recall that the exact robust universal confidence set based on the Hoeffding bound is _ρ,σ,n = { P∈ : T_n_0 (P,) ≤ B √(log (1/α)/2 n_0)}. We denote t_α,n := B √(log (1/α)/2 n_0) throughout the proof, and use C to denote _ρ,B,n. Throughout the proof, we fix a projection distribution and assume an equal split between _0 and _1. Denote δ_ν (P, Q) = ρ( P) - νρ( Q) for any P, Q ∈. We want to show that, for fixed κ > 0, for some finite M > 0, _( sup_P ∈δ_ν(P, ) > M (η_n + ϵ̃_n) ) ≤κ, where ϵ̃_n = ℜ_n(_T,𝒫) ∨ t_α,n and for all n large enough. Let the event E be δ_1 (, ) ≤ (M/ν) η_n which happens with probability at least 1-κ/2. Then, _( sup_P ∈δ_ν(P, ) > M (η_n + ϵ̃_n) ) ≤_( sup_P ∈δ_ν(P, ) > M (η_n + ϵ̃_n)  |  E ) + κ/2 = _( sup_P ∈δ_ν(P, ) > M (η_n + ϵ̃_n) - νδ_1(, )  |  E ) + κ/2 ≤_( sup_P ∈δ_ν(P, ) > M ϵ̃_n  |  E ) + κ/2. Thus, it suffices to show that conditional on E, with -probability at most κ/2, all P∈ such that δ_ν(P, ) > M ϵ̃_n are not included in . Hereafter we condition on event E. Let _ϵ := {P ∈ : δ_ν(P, ) > ϵ}. From Assumption <ref>, we have that _( ∀ P∈_ϵ, P ∈_ρ,B,n | _1 ) = _( sup_P∈_ϵT_n_0(, P) ≥ - t_α,n | _1 ), ≤_( sup_P∈_ϵ[T_n_0(, P) - _ T(, P)] ≥ϵ - t_α,n | _1 ). where the inequality is from noticing that conditional on _1, sup_P∈_ϵ [- _ T(, P)] ≥sup_P∈_ϵδ_ν(P, ) ≥ϵ by Assumption <ref>. To ease the notation, denote the centered statistic as T_P := T_n_0(, P) - _ T(, P). Since |T(,P)| ≤ B, any change in X_i can change sup_P∈_ϵT_P at most 2B/ n_0. By McDiarmid's inequality, we have that _(sup_P∈_ϵT_P ≥ϵ - t_α,n | _1) ≤exp( - n(ϵ - t_α,n - _[sup_P∈_ϵT_P] )^2/2 B^2). Now we focus on bounding _ [sup_P∈_ϵ |T_P|] (which is greater than _ [sup_P∈_ϵT_P ]). Let _T,𝒫 = {T(·; , P) : P ∈}. The symmetrization lemma <cit.> states that _Xsup_f∈_T,𝒫1/n_0| ∑_i=1^n_0[f (X_i) - _ f(X_i)] | ≤ 2 _X,εsup_f∈_T,𝒫|1/n_0∑_i=1^n_0 R_i f(X_i)| := 2 _n_0 (_T,𝒫) where R_i are iid Rademacher random variables. § FINITE-SAMPLE VALID CONFIDENCE SET FOR BOUNDED TEST STATISTIC Suppose the split statistics are uniformly bounded, i.e., |T_i (P)| ≤ B for all i. Classic Cramér-Chernoff bounds yield finite-sample valid exact (ν = 1) or approximate (ν > 1) confidence sets. 
_HF is a uniformly valid 1-α exact (ν = 1) or approximate (ν > 1) confidence set for where _HF = {P ∈ : T_n_0 (P) ≤√(B^2/2n_0log(1/α))}. Typically, Hoeffding's bound does not scale with the variance which results in a conservative confidence set. Confidence set based on Bernstein's inequality is given as follows. _BS is a uniformly valid 1-α exact (ν = 1) or approximate (ν > 1) confidence set for where _BS = { P∈ : T_n_0(P) ≤√(2 S^2 log (1/α)/n_0 + B^2/9( log (1/α)/n_0)^2) + B log (1/α)/3 n_0} where S^2 = S^2(P) = (c_1 ν)^2 [ρ ( P) + ρ ()]. However, _BS above requires knowledge of to compute S. Empirical Bernstein bounds <cit.> address this issue. Denote T̃_i (P,Q) = (T (X_i; P, Q)+ B) / (2B). _EBS is a valid 1-α confidence set for exact (ν = 1) or approximate (ν > 1) projection where _EBS = { P∈ : ∑_i=1^n_0λ_i T̃_i (P, )≤log(1/α) + ∑_i=1^n_0 v_i ψ_E (λ_i) }, v_i = (T̃_i (P, ) - T̃_i - 1 (P, ))^2, ψ_E(λ) = - (log(1-λ) - λ), and λ_i = √(2log(1/α)/n_0 Ŝ_i - 1^2)∧ c, Ŝ_i^2 = 1/4 + ∑_l=1^i (T̃_l - T̃_l)^2/i + 1, T̃_i = 1/i+1∑_l=1^i T̃_l, for some c ∈ (0,1). When the variance or an upper bound of the variance is known, Bentkus's bound <cit.> is sharper than any Cramér-Chernoff type bounds. See <cit.> for details. Define a Bernoulli random variable G = G(S^2, B) as ( G = B ) = S^2/S^2 + B^2 := p_SB, ( G = - S^2/B) = 1 - p_SB _BK is a valid 1-α confidence set for is a valid 1-α confidence set for exact (ν = 1) or approximate (ν > 1) projection where _BK = { P∈ : T_n_0(P,) ≤ q(α) } where q(α) is the solution to P_2 ( u; ∑_i∈_0 G_i ) := inf_t ≤ u_( ∑_i∈_0 G_i - t )_+^2/(u -t )_+^2 = α, and S^2 = S^2(P) = (c_1 ν)^2 [ρ ( P) + ρ ()]. As in the case of Bernstein's bound (<ref>), Bentkus's bound (<ref>) requires prior knowledge of to compute the variance S. The empirical Bentkus's bound <cit.> addresses this by taking the union bound on variance over-estimation and the Bentkus's inequality. Following <cit.> define the over-estimator of S as, for δ∈[0,1], S_n (δ) = √(S_n_0^2 + g_2,n_0 (δ)) + g_2,n_0(δ), S_n^2 = 1/⌊ n / 2 ⌋∑_i=1^⌊ n / 2 ⌋(T_2i - T_2i-1)^2/2, where g_2,n(δ) := B (√(2) n)^-1√(⌊ n / 2 ⌋)Φ^-1 (1- 2 δ / e^2) and Φ is the cdf of a standard Gaussian. _EBK is a valid 1-α confidence set for is a valid 1-α confidence set for exact (ν = 1) or approximate (ν > 1) projection where for some δ∈[0,1], _EBK = { P∈ : T_n_0(P,) ≤ q(α - δ) } where q(α - δ) is the solution to P_2 ( u; ∑_i∈_0 G_i ( S^2_*(δ), B ) ) = α - δ. with S_* (δ) := min_1 ≤ i ≤ n_0S_i (δ). In Section <ref>, we choose δ = α/3 to construct the empirical Bentkus's bound-based TV set. § CROSSFIT SET Despite universal inference holds for any , let us assume we choose such that sup_P ∈T(·; P, P_1) - T(·; P, )_L_2() = o(1). For any fixed P ∈, consider the following decomposition: _n_0 T(·; P, P_1) - _ T(·; P, ) = (_n_0 - _) [T(·; P, P_1) - T(·; P, )] + _[T(·; P, P_1) - T(·; P, )] + (_n_0 - _) T(·; P, ). The first term is the empirical process which is o_ (1/√(n_0)) applying Lemma 2 of <cit.>. The second part is the bias which is o(1) from our choice of . The last term yields the CLT. Now let T_n_1 (P; P_0) := ∑_i∈_1 T(X_i; P, P_0) /n_1 where we change the role of _0 and _1. Define a cross-fitting estimator as T_n^× (P) = n_1 T_n_1 (P; P_0) + n_0 T_n_0 (P; P_1)/n. The n (T^× (P) - _ T(·; P, )) has the following decomposition: n_0 (_n_0 - _) [T(·; P, P_1) - T(·; P, )] + n_1 (_n_1 - _) [T(·; P, P_0) - T(·; P, )] + n_0 _ [T(·; P, P_1) - T(·; P, )] + n_1 _ [T(·; P, P_0)- T(·; P, )] + n (_n - _) T(·; P, ). 
Similarly, both empirical process terms in the first line are o_ (1/√(n)), and bias terms in the second line are o (1). Thus, we left with the same CLT term. The decomposition implies that as long as one chooses a “good” candidate estimator, cross-fit estimator also provides an asymptotically (uniformly) valid inference on . Construct a cross-fit ρ-set as follows: C^×_ρ,α, n = {P∈ : T^× (P) ≤ z_αŝ_P^×/√(n)}. where ŝ_P^× 2 = [_ (T (X; P, P_1)) + _ (T (X; P, P_0) )] / 2 is a consistent estimator of _ (T(X; P, )). § ADDITIONAL RESULTS AND TECHNICAL DETAILS ON NUMERICAL STUDIES §.§ Computational detail We adopt a heuristic search method for finding a confidence set in multivariate parameter space. For brevity, we explain the procedure in 2 dimensions, but the procedure can be straightforwardly extended to higher dimensions. From the observation that when _1 is close to θ̃, i.e., ρ( P__1) ≤νρ() as seen in the proof of Theorem <ref>, T_n_0 (_1) = 0 for split statistics that satisfies Assumption <ref> and <ref>. Therefore, we construct a star-convex confidence set that always includes _1. We construct the rays originate from _1, i.e., R_ω = {θ∈Θ : r_ω^⊤ (θ - _1) = 0, r ≥ 0 } where r_ω = (r sinω, - r cosω) for angle ω∈ [- π, π]. For each ω, we find a root of an evidence function (θ) = T_n_0 (θ) - t_α (θ) using Brent's method <cit.> on R_ω constrained with radius r varying from 0 (corresponding θ=_1) to some r_0 > 0 such that the corresponding θ_0 satisfies (θ_0) > 0. §.§ Gaussian contamination - Location family Consider a Gaussian location family = {(θ, 1) : θ∈} where the variance is fixed to that of uncontaminated distributions. Figure <ref> shows the projection parameters along with those of contaminated and uncontaminated distributions. The mean of contaminated distribution and that of uncontaminated distributions are the same for Cases 1 and 3 but not for Case 2. This leads to the interesting observation that forward KL projection is the closest to the uncontaminated distribution in Case 3 unlike location-scale family in Figure <ref>, Section <ref>. Figure <ref> summarizes the performance of confidence sets targeting the forward KL or DP projection over 1000 replications. Clearly, split LRT fails to attain the nominal coverage even for a large enough sample size. All other sets achieve the nominal coverage for moderate to large sample size. _ are shorter than _ and even than the invalid split LRT set for Cases 2 and 3.
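As an illustration of the ray-search heuristic described in the computational details above, the following is a minimal sketch using Brent's method from SciPy. The evidence callable (the split statistic minus its critical value) is a hypothetical user-supplied ingredient, and the number of rays, maximal radius, and step size are arbitrary illustrative choices.

import numpy as np
from scipy.optimize import brentq

def confidence_boundary(theta1, evidence, n_rays=64, r_max=10.0, r_step=0.25):
    """Trace the boundary of a star-convex confidence region around theta1.

    evidence(theta) is a hypothetical callable returning
    T_{n_0}(theta) - t_alpha(theta): negative inside the region, positive
    outside.  Each ray from theta1 is expanded until the evidence changes
    sign, and the root is then located with Brent's method."""
    theta1 = np.asarray(theta1, dtype=float)
    boundary = []
    for omega in np.linspace(-np.pi, np.pi, n_rays, endpoint=False):
        direction = np.array([np.cos(omega), np.sin(omega)])
        g = lambda r: evidence(theta1 + r * direction)
        r_hi = r_step
        while r_hi <= r_max and g(r_hi) <= 0:
            r_hi += r_step                     # expand until we exit the region
        if r_hi > r_max:
            boundary.append(theta1 + r_max * direction)   # ray never exits
            continue
        r_root = brentq(g, max(r_hi - r_step, 1e-12), r_hi)
        boundary.append(theta1 + r_root * direction)
    return np.array(boundary)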
http://arxiv.org/abs/2307.04425v1
20230710090012
Identification of Hemorrhage and Infarct Lesions on Brain CT Images using Deep Learning
[ "Arunkumar Govindarajan", "Arjun Agarwal", "Subhankar Chattoraj", "Dennis Robert", "Satish Golla", "Ujjwal Upadhyay", "Swetha Tanamala", "Aarthi Govindarajan" ]
eess.IV
[ "eess.IV", "cs.CV" ]
Identification of Hemorrhage and Infarct Lesions on Brain CT Images using Deep Learning

Arunkumar Govindarajan^1, Arjun Agarwal^2, Subhankar Chattoraj^2, Dennis Robert^2, Satish Golla^2, Ujjwal Upadhyay^2, Swetha Tanamala^2, Aarthi Govindarajan^1

^1 Aarthi Scans & Labs, Chennai, Tamil Nadu, India; ^2 Qure.ai, Mumbai, Maharashtra, India

Head non-contrast computed tomography (NCCT) scans remain the preferred primary imaging modality owing to their widespread availability and speed. However, the current standard of manual annotation of abnormal brain tissue on head-NCCT scans has significant disadvantages, including the lack of cutoff standardization and the failure to capture stroke-induced degeneration. The recent advancement of deep learning-based computer-aided diagnostic (CAD) models across disciplines has created vast opportunities in neurological medical imaging. A substantial body of literature has been published on the automated identification of brain tissue abnormalities on different imaging modalities. However, identifying intracranial hemorrhage (ICH) and infarct can be challenging due to variability in image texture, lesion volume, and scan quality. This retrospective validation study evaluated a DL-based algorithm for identifying ICH and infarct from head-NCCT scans. The head-NCCT scan dataset was collected consecutively from multiple diagnostic imaging centers across India. The study demonstrates the potential and limitations of such DL-based software for introduction into the routine workflow of large healthcare facilities.

§ INTRODUCTION In cognitive neuroscience, neuropsychological investigation of stroke patients is widely used to advance our knowledge of brain function. Considerable insight into the relationship between brain function and anatomy has been gained through correlation analyses between physical brain damage and impaired behavior <cit.><cit.><cit.>. Strokes can be broadly classified into two types: 1) Intracranial hemorrhage (ICH), the rupture of a blood vessel within the brain, which causes bleeding. The common factors related to the cause of ICH are advanced age, heavy alcohol usage, and high blood pressure (hypertension) <cit.>. As per some recent studies, although ICH accounts for 10–15% of all stroke-related deaths, mortality and morbidity have not changed over the last thirty years, particularly in developing countries <cit.>. 2) Ischemic stroke, or infarct, is an interruption of blood flow due to a blood clot. Infarct is generally caused by the buildup of plaques (atherosclerosis) in the arteries over time. Globally, over 13.7 million individuals have a stroke each year, of which approximately 70%, i.e., 9.5 million, are infarcts <cit.>. Presently, mapping of stroke lesions is routinely done using computed tomography (CT) and magnetic resonance imaging (MRI). MR (T1-weighted and T2-weighted) anatomical images are acquired as part of routine practice for stroke patients. In patients with suspected stroke and negative CT scans, MRI can also be performed. Ischemic stroke can be identified on MRI after the first few hours of onset. Additionally, MRI can differentiate irreparably damaged brain tissue from tissue at risk due to infarction.
However, CT is preferred over MRI in acute stroke care units and clinical trials because of its fewer exclusion criteria, affordability, speed, and accessibility <cit.>. On CT, hemorrhage is perceived as a bright (hyper-dense) region exhibiting sharp contrast, and infarct as a dark (hypo-dense) region, depending on the time elapsed since onset. Manual annotation of abnormal brain tissue by trained neuroradiologists is currently the standard method for lesion identification <cit.>. However, manual annotation of abnormal brain tissue has several disadvantages <cit.>. 1) Lack of cutoff standardization: There is no standard protocol for an explicit cutoff, particularly around the ventricles, to differentiate lesioned from non-lesioned tissue; as a result, this approach produces large variability and lacks reproducibility across operators. 2) Degeneration identification: The stroke-induced degeneration occurring outside the lesion in chronic stroke patients is not captured in the standard manual annotation process, even though it has a significant clinical impact on patients. The recent advancement of deep learning-based computer-aided diagnostic (CAD) models in medical imaging and signal processing can significantly assist in overcoming these challenges <cit.><cit.><cit.><cit.><cit.>. In addition, manual editing combined with an automated detection solution for hypo- or hyper-dense regions, kept under operator supervision, can assist in overcoming the present challenges <cit.>. More recently, a study using large CT datasets was proposed to remove inter-subject variability in brain lesion characterization using an automated approach <cit.>. Several state-of-the-art algorithms have been proposed for lesion segmentation in MR images over the past few years, but very few have been developed to address stroke lesions on CT scans. Most of the earlier work published to validate automated solutions was directed toward identifying ICH. As ICH appears bright on CT scans, developing an automated solution based on supervised or unsupervised learning, or on morphological features extracted from labeled images, to differentiate lesioned from non-lesioned tissue is less challenging <cit.> <cit.>. Infarct identification, on the other hand, has received less attention than ICH detection because of its challenging nature. To address this issue, a rule-based approach built on seeded region-growing algorithms was recently proposed, extracting hand-crafted features such as position relative to an axis of symmetry, texture, and brightness <cit.>. However, the primary disadvantage of this approach is that seeded region-growing algorithms may not be able to delineate the boundaries of the stroke region distinctly. In this study, we evaluated an artificial intelligence (AI)-based automated CAD algorithm based on deep learning, capable of identifying ICH and infarct on head non-contrast computed tomography (head-NCCT) scans. The solution has previously been validated for detecting ICH on head-NCCT scan images <cit.>. The Institutional Review Board (IRB) approved this retrospective study. We demonstrated the effectiveness and validity of the automated CAD solution in detecting ICH and infarct and in quantifying infarct on head-NCCT scans.
Our proposed validation will provide a rapid and efficient tool for both research and clinical application. It will assist in the broader adaptation of automated CAD solutions at extensive clinical facilities. § MATERIAL AND METHODS The study was a HIPAA-compliant retrospective study with Institutional Review Approval (IRB) from Royal Pune Independent Ethics Committee (RPIEC) (IRB No. RPIEC240123). Informed consent was obtained from all participants. All methods were carried out in accordance with relevant guidelines and regulations. The primary objective was to evaluate the commercially available deep learning-based algorithm qER (Qure.ai Technologies, Mumbai, India) in terms of Area Under the Receiver Operating Characteristics Curve (AUC) in triaging Head-NCCT scan in detection and quantification of infarcts. It was estimated that a minimum sample of 418 Head-NCCT scans (167 Head-NCCT scans image with radiologist-confirmed infarcts, 251 Head-NCCT scans images without infarcts, 2:3 ratio) would provide a minimum of 80% power to estimate an anticipated AUC of 80% with 7% precision assuming a Type I error rate of 5% <cit.><cit.>. The Head-NCCT scans, and their signed-off original radiological report performed from 01-September-2021 to 31-August-2022 were acquired from diagnostic imaging centers across India. A total of 1878 Head-NCCT scan were collected. The original radiological report of these scans was subjected to a manual review by a clinical data abstractor to classify the scans into infarct, and non-infarct reported scans based on the original radiological report. A stratified random sample of 500 Head-NCCT scans stratified by the presence and absence of infarct (based on the original radiological reports) were then selected for independent ground truthing by a radiologist with more than fourteen years of experience. The inclusion criteria were Head-NCCT scans with soft reconstruction kernel covering the complete brain, slice thickness ≤ 6mm. The exclusion criteria were Head-NCCT scans with obvious postoperative defects or from patients who had previously undergone brain surgery, Head-NCCT scans with artifacts such as burr holes, shunts or clips, Head-NCCT scans containing metal artifacts, excessive motion artifacts, Head-NCCT scans containing missing and improperly ordered slices. The ground truther radiologist had access to the original head NCCT scan image but was blinded to the original radiology report. The ground truther reviewed all the Head-NCCT scans and provided segmentation boundaries for infarcts and intracranial hemorrhages. The ground truther radiologist also provided a binary response for the presence or absence of cranial fracture, midline shift, and mass effect. The ground truth output was the reference standard for all downstream statistical analyses, not the original radiological report. The sensitivity and specificity were estimated based on a default device threshold (available from the manufacturer based on internal testing), and the optimum threshold was based on Youden's index. The 95% confidence intervals for sensitivity and specificity are reported based on exact method <cit.>. AUC and 95% confidence interval (CI) was estimated based on the empirical method and De Long methodology, respectively <cit.>. The segmentation provided by the ground truther radiologist was utilized for the quantification analysis of the error in the predicted infarct volume by the DL-based algorithm. 
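Before turning to the results, the following is a minimal sketch of the type of metric computations described in this section: the empirical AUC, a Youden-optimal operating threshold with its corresponding sensitivity and specificity, and the absolute error in predicted infarct volume. The input arrays are hypothetical, and the confidence-interval procedures used in the study (exact binomial and De Long) are not reproduced here.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def detection_metrics(y_true, y_score, threshold=None):
    """Empirical AUC, a Youden-optimal threshold (if none is supplied), and the
    sensitivity/specificity at that threshold.  y_true is the 0/1 ground truth
    and y_score the algorithm's confidence score (hypothetical inputs)."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    auc = roc_auc_score(y_true, y_score)
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    if threshold is None:
        threshold = thresholds[np.argmax(tpr - fpr)]     # Youden's index
    y_pred = (y_score >= threshold).astype(int)
    sensitivity = (y_pred[y_true == 1] == 1).mean()
    specificity = (y_pred[y_true == 0] == 0).mean()
    return {"auc": auc, "threshold": threshold,
            "sensitivity": sensitivity, "specificity": specificity}

def volume_mae(predicted_ml, annotated_ml):
    """Mean absolute error (in mL) between predicted and annotated volumes."""
    return float(np.mean(np.abs(np.asarray(predicted_ml) - np.asarray(annotated_ml))))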
Absolute errors in infarct volume estimation in milliliter (mL), and summary statistics of absolute errors were reported. The statistical analyses were performed using RStudio (RStudio version 2022.07.1, R version 4.2.1) and Python version 3.9.7. § EXPERIMENTAL RESULTS §.§ Identification of ICH and Infarct The ground truthing was completed for 428, while 22 Head-NCCT scan were excluded due to the inclusion and exclusion criteria mentioned in section <ref>. A total of 187 Head-NCCT scan confirmed (based on ground truth) the presence, while 241 Head-NCCT scan confirmed the absence of any infarcts. This distribution of scans with and without infarcts met the minimum sample size requirements described earlier in <ref>. In addition, 21 scans with intracranial hemorrhages (ICH) and 23 scans with cranial fractures were present in the sample. A total of 212 (49.5%) of the 428 Head-NCCT scans did not contain any infarcts, intracranial hemorrhages, cranial fracture, midline shift, or mass effect. The distribution of the Head-NCCT scans is shown in Table. <ref>. It can be observed from Table. <ref> that the DL-based algorithm achieved an AUC of 86.8% (95% CI: 83.4 - 90.2) in detecting scans with the presence of infarcts while the sensitivity and specificity were estimated to be 66.8% (95% CI: 59.6-73.5)and 86.7% (95% CI: 81.8-90.7) respectively at the default threshold. The optimum operating threshold was determined using Youden’s index. At this optimum threshold, it was observed that the sensitivity of the DL-based algorithm improved to 80.2% (95% CI: 73.8 - 85.7) without substantial reduction in specificity 80.1% (95% CI: 74.5 - 84.9). For ICH, an AUC of 94.8% (95% CI: 87.4 - 100) was achieved. There was no change in sensitivity compared to the default and optimum threshold, while the specificity increased by 3% using the optimum threshold. In contrast, the sensitivity of cranial fracture compared to the default and optimum threshold, an enhancement of 15.8% was observed while the specificity decreased by 2.7%. In Fig. <ref>, the AUC-ROC plot for Cranial Fracture, ICH, and Infarct is given. §.§ Quantification of Infarct Volume The DL-based algorithm for identifying infarcts produces the infarct volume in mL. A total of 150 true positive scans for which both DL-based algorithms predicted volume and ground truth volume were available for this analysis. The reference standard was radiologist annotations done for each Head-NCCT scan images. The mean absolute error (MAE) was 4.7 mL for overall scans. Based on ground truth volume, the scans were further divided into two categories - scans with 0 - 5 mL and > 5 mL infarcts volume, respectively. It can be observed from Table. <ref> that the MAE for 0 - 5 mL and > 5 mL scans were found to be 3.2 mL and 8.557 mL, respectively. In Fig. <ref> from the scatter plot of infarct volumes (1), it can be observed that with an increase in infarct volume, there is a positive correlation between DL-based algorithm volume and ground-truth annotated volume. The Bland-Altman plots showing good agreement between the ground truther annotation and predicted volume by the DL-based algorithm are shown in Fig. <ref> (2). §.§ Visual Explanations of DL-based Algorithm The experimental findings depict that the evaluated DL-based algorithm achieved superior performance represented in Table. <ref> and <ref>. In most DL-based models the rationale behind the prediction is not reveled explicitly. 
Since these DL black box models can not be decomposed into intuitive and comprehensive modules, these models are hard to interpret. Consequently, the end-users develop skepticism and find the model difficult to trust. The emergence of explainable artificial intelligence (XAI) is an essential aspect of model transparency and the social right to explain DL inferences <cit.>,<cit.>. XAI encompasses a better understanding of incoherent output, isolates failure modes, and builds trust in intelligent systems for effective incorporation into our everyday lives <cit.>. The present evaluated DL-based algorithm outputs a boundary across the infarcts which revels rationale behind the superior performance. In Fig. <ref> is can be observed that for both small and large infarcts volume on Head-NCCT scan, the model predicted boundary clearly overlaps with the ground truther boundary. § DISCUSSION This retrospective study evaluated a deep learning algorithm for detecting infarcts in Head-NCCT scans. The algorithm had a good AUC of about 86% in detecting infarcts. After adjusting for thresholds, a balanced sensitivity of 80.2% and specificity of 80.1% was estimated to detect infarcts. The algorithm's sensitivity in detecting infarcts in scans with no other target abnormalities was found to be 80% (136 correctly detected out of 170). It did not differ from the overall sensitivity at optimum sensitivity. This states the robustness of the DL-based algorithm to identify infarcts with negligible drop in sensitivity with presence of other abnormalities. Additionally, it is to be noted that the sensitivity of Head-NCCT scans in detecting infarcts is generally considered low, especially in the case of hyperacute and acute ischemic strokes. In one study, the sensitivity of detecting acute ischemic stroke on head NCCT scans ranged from 57% to 71% with considerable inter-reader variability <cit.><cit.>. Additionally, we evaluated the performance to detect ICH and cranial fracture, and both had excellent AUC. However, the interpretation is limited by low sample sizes for these two abnormalities. Our results also show that threshold adjustments might be needed before using such algorithms routinely for clinical decision support. Deep learning or big data are often called "black box" and represent substantial obstacles in introducing intuitive and comprehensive modules into actual clinical practice; these models are challenging to interpret. However, the DL-based method validated in this study provides a post-hoc attention tool for the clinician to identify the lesion visually. In addition, the DL-based algorithm validated in this study encompasses a better understanding of incoherent output, isolates failure modes, and builds trust in intelligent systems for effective incorporation into routine clinical practice. Moreover, the proposed validation of the DL-based algorithm will be beneficial in the resource constraint areas with a limited number of radiologists or with only access to teleradiology facilities. Our study has limitations. First, the differentiation of infarct into acute and chronic infarct was not analyzed. Second, the ground truthing for the head NCCT scans images with the presence of infarcts was done by a single radiologist. Thirdly, there were not enough scans for the ICH and cranial fracture to estimate performance metrics with sufficient precision. § CONCLUSION The present study evaluated a DL-based algorithm to determine the presence and absence of ICH and infarcts on head-NCCT scans. 
The DL-based algorithm demonstrated a high detection performance in identifying infarcts, ICH, and cranial fractures. In addition, the infarct volumes predicted by the DL-based algorithm correlated positively with the ground-truth annotated volumes. By demonstrating the performance of ICH detection as well as infarct detection and quantification, the study indicates the feasibility of introducing such DL algorithms into the routine workflow of large healthcare facilities. § DATA AVAILABILITY The datasets used or analyzed during the current study are available from the corresponding author on reasonable request.
http://arxiv.org/abs/2307.05921v3
20230712053647
Reading Radiology Imaging Like The Radiologist
[ "Yuhao Wang" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Reading Radiology Imaging Like The Radiologist Yuhao Wang This paragraph of the first footnote will contain the date on which you submitted your paper for review. It will also contain support information, including sponsor and financial support acknowledgment. For example, “This work was supported in part by the U.S. Department of Commerce under Grant BS123456.” Yuhao Wang are with State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China (e-mail:[email protected]). ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= Automated radiology report generation aims to generate radiology reports that contain rich, fine-grained descriptions of radiology imaging. Compared with image captioning in the natural image domain, medical images are very similar to each other, with only minor differences in the occurrence of diseases. Given the importance of these minor differences in the radiology report, it is crucial to encourage the model to focus more on the subtle regions of disease occurrence. Secondly, the problem of visual and textual data biases is serious. Not only do normal cases make up the majority of the dataset, but sentences describing areas with pathological changes also constitute only a small part of the paragraph. Lastly, generating medical image reports involves the challenge of long text generation, which requires more expertise and empirical training in medical knowledge. As a result, the difficulty of generating such reports is increased. To address these challenges, we propose a disease-oriented retrieval framework that utilizes similar reports as prior knowledge references. We design a factual consistency captioning generator to generate more accurate and factually consistent disease descriptions. Our framework can find most similar reports for a given disease from the CXR database by retrieving a disease-oriented mask consisting of the position and morphological characteristics. By referencing the disease-oriented similar report and the visual features, the factual consistency model can generate a more accurate radiology report. Our model mimics the thinking process of a radiologist by utilizing both visual features and past experience with radiology imaging. Experimental results illustrate that our model achieved state-of-the-art performance on two benchmark datasets, including the IU X-Ray and MIMIC-CXR. Furthermore, the ablation study demonstrated the effectiveness of each component we proposed. Radiology Report Generation, Image Captioning,Transformer, Image Retrieval. § INTRODUCTION Radiology Automated radiology report generation aims to generate comprehensive and accurate reports that contain abundant abnormal observations about medical images. Writing such reports manually is time-consuming and difficult, requiring the expertise of an experienced radiologist. Fully automated report generation can assist radiologists in writing imaging reports, improving the accuracy of disease detection, and reducing their workload. 
Additionally, automated radiology report generation can provide automatic medical image reports to regions with limited access to medical resources, helping to alleviate the shortage of local experts. Unlike traditional healthcare AI tasks like disease classification, radiology reporting requires AI models to possess a higher level of cognitive ability to produce medical image reports with descriptive expressions that resemble human-level cognition. Medical reports are generated based on the task of image captioning. Considerable progress has been made in research on image captioning, with most frameworks adopting an encoder-decoder architecture, such as a CNN image encoder followed by an RNN decoder for report generation. The Transformer architecture, initially proposed for text modeling and later extended to visual-language tasks, has been widely applied in cross-modal domains due to its effectiveness in modeling sequences and reducing the semantic gap between images and text using stacked encoders and decoders with multi-head self-attention. Recent research in image captioning predominantly adopts the Transformer architecture as the main architecture. However, radiology report generation differs significantly from image captioning. The main differences can be summarized as follows: 1. Most radiology images are highly similar to each other, with only subtle differences in the areas with pathological changes. Existing models often struggle to attend to the specific regions of these lesions, making it challenging to generate descriptions that focus on subtle pathological areas. 2. Medical image reporting involves the challenge of generating long-form text, which is more difficult, and the supervision signal for describing disease-specific information is often sparse. Radiology report generation also requires more expertise knowledge and training compared to image captioning in the natural image domain. 3.The data bias in radiology report generation is serious, resulting in the problem of shortcut learning. Deep learning models often generate radiology reports that lack significant disease descriptions crucial for clinical diagnosis. To address the inherent problems in radiology report generation, we propose the RRGnet framework in this paper. It is known that for a medical imaging examination with a disease, only a small part of the corresponding imaging report describes the relevant disease. This situation causes important diagnostic terms to be buried within a large amount of normal statements. Predicting the disease tag of CXR is generally easier and more accurate compared to generating a long description of the symptoms, as it provides a stronger supervision signal. We leverage interpretable artificial intelligence techniques and class activation maps, widely used in tasks such as weakly supervised object detection or image segmentation, to reflect the location and morphological characteristics of targets. Inspired by works that apply class activation maps to analyze the decision-making process in deep learning models, we propose a disease-oriented mask retrieval module. This module effectively retrieves more accurate reports from the database at the disease perspective. Experimental results and qualitative analysis demonstrate the effectiveness of the disease-oriented mask retrieval module in finding samples with the same disease as the input sample, exhibiting a high degree of consistency in disease location and morphological characteristics. 
The search module based on a disease-oriented mask finds the most similar image reports from the database as reference reports, simulating how radiologists refer back to reports they have seen in the past with the same disease. Additionally, we propose a fact-consistent image report generation module based on copying mechanism. This module utilizes vocabulary-level prior information during the generation of medical image reports in the decoder, enhancing the clinical accuracy and fact consistency of the generated reports. The attention concept within the copying mechanism simulates the varying degrees of reliance that radiologists typically have on different reference reports. Furthermore, the model focuses on the parts of the original image that differ from similar textual descriptions, integrating prior knowledge and input images to generate accurate medical image reports. The main contribution of this paper can be summarized as follows: * We propose a CAM-based<cit.> similarity retrieval method that generates disease-oriented masks, which effectively represent the disease morphology and location information. This significantly improves the accuracy of similar report retrieval, enabling the corresponding decoder to attend to a greater extent to disease-specific descriptions during the decoding stage. * We propose a fact-consistent image report generation module based on a copying mechanism. This module simulates the writing process of radiologists when composing image reports and effectively utilizes the input's relevant diseases. It greatly enhances the clinical accuracy and fact consistency of the generated medical image reports. * We conducted experiments on two widely used medical image report generation datasets, IU-Xray<cit.> and MIMIC-CXR<cit.>. Both qualitative and quantitative experiments were performed to demonstrate the effectiveness of our proposed model. § RELATED WORK §.§ Image Captioning Image captioning, which aims to generate descriptive and meaningful captions for images, has been extensively explored in deep learning. Early approaches to image captioning relied on handcrafted features and language models <cit.>. However, with the advent of deep learning, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have become the predominant architectures for image captioning <cit.>. These architectures allow for end-to-end training, enabling the models to learn both visual and textual representations. Attention mechanisms have been proposed to improve image captioning by allowing the model to focus on relevant image regions during the caption generation process <cit.>. By attending to specific regions, the model can align the generated words with the corresponding visual content, resulting in more accurate and contextually relevant captions <cit.>. Furthermore, reinforcement learning techniques have been applied to optimize captioning models by incorporating reward-based feedback <cit.>. This approach involves training the model to maximize a reward signal, typically based on the quality of the generated captions, which can lead to improved captioning performance through iterative optimization. §.§ Image Retrieval Image retrieval<cit.> is the task of retrieving similar images from a database based on the content of the images. One approach is to fine-tune convolutional neural networks (CNNs) with a ranking loss function <cit.>. 
By optimizing the network parameters based on a ranking loss, the model learns to better differentiate between relevant and irrelevant images, resulting in improved retrieval performance. Attention mechanisms have also been incorporated into the image retrieval process <cit.>. Inspired by human visual perception, attention mechanisms allow the model to focus on relevant parts of the image, improving the ability to capture and match distinctive features for retrieval. Despite these successes, challenges persist in deep learning-based image retrieval. One major challenge is the semantic gap that exists between low-level visual features and high-level semantics. Low-level features extracted from images may not capture the complex semantic meaning, making it difficult to accurately match images based on their content. §.§ Radiology Report Generation Radiology report generation is similar to image captioning in many ways, but there are some notable differences. While image captioning typically generates single-sentence captions, radiology reports are paragraphs containing multiple sentences. To address this difference, some methods have adapted hierarchical LSTM models <cit.> to handle the generation of longer radiology reports. In particular, the work by <cit.> employed chest X-ray disease classification as an auxiliary task to improve automatic radiology report generation. Some radiology report generation methods<cit.> utilize image retrieval to obtain a template report that closely resembles the input image. However, these methods suffer from two major shortcomings. Firstly, due to the similarity of chest X-ray images, it is difficult for these methods to retrieve similar reports for images with similar diseases. Secondly, these methods represent the retrieved similar report as a prior knowledge vector, which limits the model's ability to leverage the rich linguistic properties of the reports. With the development of attention mechanisms <cit.>, transformer models have emerged as powerful tools for bridging the gap between image and text modalities. Recent works <cit.> have adopted transformer encoder-decoder architectures for radiology report generation and demonstrated excellent performance. However, most existing methods rely solely on a visual encoder trained jointly with a decoder to extract information from the image, without explicitly leveraging the linguistic properties of similar radiology reports. §.§ Class Activation Map The Class Activation Map (CAM) technique was initially proposed to highlight the regions in an image that are most important for a model's prediction <cit.>. It has primarily been used in image classification tasks <cit.>, where the objective is to identify the main objects in an image. CAM-based methods have been employed in weakly supervised object detection <cit.> to improve localization capabilities when only image-level annotations are available. Additionally, weakly supervised semantic segmentation works <cit.> have utilized CAM to generate pseudo segmentation labels and train segmentation models. The success of CAM-based approaches in weakly supervised object-level tasks highlights their ability to capture position information and morphological characteristics of objects. Inspired by these works, we apply CAM to generate disease-oriented masks and incorporate them into the image retrieval process. By leveraging CAM, we aim to retrieve more accurate and relevant reference reports for similar diseases. 
To the best of our knowledge, this is the first paper to employ the CAM method to advance the task of radiology report generation. § METHODOLOGY The proposed method is illustrated in <ref>. It consists of two stages aimed at improving radiology report generation. In the first stage, we generate a disease-oriented mask using the Class Activation Map (CAM) technique <cit.>. This is achieved by aggregating the class activation maps corresponding to different disease labels. The aggregated disease representation matrix is then subjected to dimensionality reduction through Singular Value Decomposition (SVD) to obtain the disease-oriented mask. The disease-oriented mask effectively captures various disease information along with their corresponding morphological and location details. This enables precise retrieval of similar diseases during the retrieval process, resulting in higher-quality corresponding image reports as prior knowledge. In the second stage, we incorporate the copy mechanism and propose a fact-consistent image report generation model. The decoder of the model takes advantage of both prior knowledge and input images, allowing for comprehensive consideration of both sources of information during the generation of text tokens. By leveraging prior knowledge and images together, the model generates more clinically meaningful and efficient image reports. §.§ Phase 1:Disease Oriented Mask Generation The process of generating a disease-oriented mask simulates the procedure used by radiology experts to analyze diseases. Initially, we extracted disease labels from the corresponding medical imaging reports using Chexbert<cit.>. These disease labels encompass a range of common diseases such as 'Enlarged Cardiomediastinum,' 'Cardiomegaly,' 'Lung Opacity,' 'Lung Lesion,' 'Edema,' 'Consolidation,' 'Pneumonia,' 'Atelectasis,' 'Pneumothorax,' 'Pleural Effusion,' 'Pleural Other,' 'Fracture,' and 'Support Devices.' These extracted labels encompass a comprehensive set of diseases. Next, we employed a CNN network with a global average pooling layer to perform multi-label classification for the aforementioned diseases. Additionally, we applied the Class Activation Mapping (CAM) method<cit.>to obtain a class activation map for each disease category. Subsequently, we aggregated multiple class activation maps at the channel level. To enhance retrieval efficiency, we utilized the Singular Value Decomposition (SVD) method to reduce the dimensionality of the aggregated class activation map. This process resulted in a matrix that characterizes the location and morphology information of various diseases, which we defined as the disease-oriented mask. For a given image I ∈ℛ^H*W*C, the activation of the k unit feature maps of the last convolutional layer at the spatial location (x, y) is represented by f_k(x, y) for a given image. With performing GAP layer, the class activation map of c class can be formulated as: S_c=∑_kw_k^c∑_x,yf_k(x, y), S_c∈ℛ^H*W*1 where w_k^c is the weight corresponding to class c for unit k. Subsequently, the disease oriented mask can be obtained by : DOM_i=[S_1,S_2,.....S_K], DOM_i∈ℛ^H*W*K k represents the number of diseases defined in the classification model. In our actual experiments, K represents 14, which is the number of disease labels that can be extracted by CheXbert<cit.>. To reduce memory usage and improve retrieval efficiency, we compressed and reduced the dimensionality of disease-oriented mask using SVD for storage. 
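A minimal sketch of the phase-1 computation described above, assuming a torchvision ResNet with a 14-label linear head trained on the CheXbert labels. The variable names and, in particular, the exact SVD compression step are illustrative readings of the text rather than the authors' released code.

```python
import numpy as np
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

NUM_LABELS = 14                      # CheXbert observation set used in the paper

model = resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, NUM_LABELS)   # GAP -> linear head
model.eval()

feats = {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(map=o))  # last conv features

@torch.no_grad()
def disease_oriented_mask(image, out_hw=(224, 224), n_keep=3):
    """Stack the per-class CAMs into an H*W*K tensor, then compress the channels with SVD."""
    _ = model(image)                                  # image: (1, 3, 224, 224)
    fmap = feats["map"][0]                            # (C, h, w) last-layer activations f_k(x, y)
    w = model.fc.weight                               # (K, C) GAP-classifier weights w_k^c
    cams = torch.einsum("kc,chw->khw", w, fmap)       # CAM_c(x, y) = sum_k w_k^c f_k(x, y)
    cams = F.interpolate(cams[None], size=out_hw, mode="bilinear", align_corners=False)[0]
    dom = cams.permute(1, 2, 0).numpy()               # (224, 224, 14) disease-oriented mask
    flat = dom.reshape(-1, NUM_LABELS)                # channel-wise SVD compression (one possible
    u, s, _ = np.linalg.svd(flat, full_matrices=False)  # reading of the 224*224*14 -> 224*224*3 step)
    return (u[:, :n_keep] * s[:n_keep]).reshape(*out_hw, n_keep)
```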
Finally, after the phase 1, we added disease-oriented mask for each image-report pair. Therefore, the basic format of the dataset can be expressed as follows: (I,T,DOM)) where I,T denoted the original Image and radiology report, DOM is corresponding disease-oriented mask. §.§ Phase 2:Fact consistency based radiology report generation The module for generating radiology reports based on fact consistency consists of three components: 1. Similarity report retrieval module rely on disease-oriented mask 2. Prior information extraction and representation module 3. Fact-consistent image report generation module based on the copying mechanism. §.§.§ Similar Report Retrieval Module For each input data unit (I_i,T_i,DOM_i), we have its corresponding disease-oriented mask. We calculate the cosine similarity to obtain the similarity scores between the disease-oriented mask and the disease-oriented mask pool. specifically,the knowledge pool is the disease-oriented mask pool: DOM_Pool=[DOM_1,DOM_2,....DOM_N]. We calculate the cosine similarity between the input DOM_i and DOM_Pool, and selecting Top k samples' radiology report as reference reports. Generally, radiology report is expressed as T={s_1,s_2,...s_l},where s_i denotes the i th sentence. Meanwhile,the sentences s_i is a long sequence {w_1,w_2,...,w_T} and w_i is the i token of the reference report. Using disease-oriented masks, the retrieved prior knowledge demonstrates a remarkable consistency with the input report in terms of disease localization and morphological information. Qualitative analysis has effectively showcased the efficacy of our method in retrieving reports from patients with the same disease, displaying a strong resemblance to the target report and achieving a notable alignment at the disease level. Additionally, we have observed a high degree of consistency between the position and morphological descriptions of the disease in certain retrieval results, with sentences describing abnormalities being remarkably similar. Based on these findings, we can confidently conclude that the retrieved prior knowledge is highly relevant to the target report. It serves as fact-consistent knowledge, contributing to the enhancement of medical image report quality generated by our model. Typically, after Retrieval, the basic data unit is (I_i,T_i,R_1,R_2,...R_K), where R_i denotes the reference report. §.§.§ Prior information extraction and representation module Due to the fact that sentences describing abnormalities in medical image reports typically occupy a small portion of the overall report, utilizing all image reports as knowledge input often leads to a significant amount of redundant information. This redundancy can hinder the model's ability to focus on crucial disease-specific details. To address this issue, we utilized Chexbert<cit.> to analyze every sentence in all of the imaging reports. This approach can effectively identify the sentences that truly describe the diseases in an imaging report, thus obtaining more refined prior knowledge as prior knowledge. For instance, the report "Lungs are clear. No pleural effusions or pneumothoraces.heart size is upper limits of normal, There are low lung volumes with bronchovascular crowding and scattered opacities in the bilateral lung", after identifying, the prior knowledge are "heart size is upper limits of normal" and "There are low lung volumes with bronchovascular crowding and scattered opacities in the bilateral lung". 
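A compact sketch of the retrieval and sentence-filtering steps just described, assuming the disease-oriented masks have been precomputed and that a CheXbert-style sentence labeler is available. The labeler is left as a placeholder function, so this is illustrative rather than the released implementation.

```python
import numpy as np
import spacy

nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")          # punctuation-based sentence splitting

def top_k_reports(query_dom, dom_pool, reports, k=3):
    """Rank stored reports by cosine similarity between disease-oriented masks (k = 3 in the main experiment)."""
    q = query_dom.ravel()
    pool = dom_pool.reshape(len(reports), -1)                    # (N, H*W*3)
    sims = pool @ q / (np.linalg.norm(pool, axis=1) * np.linalg.norm(q) + 1e-8)
    return [reports[i] for i in np.argsort(-sims)[:k]]

def abnormal_sentences(report, sentence_labeler):
    """Keep only sentences that a CheXbert-style labeler marks positive for some finding.
    `sentence_labeler` is a placeholder: it should map a sentence to True/False."""
    return [s.text.strip() for s in nlp(report).sents if sentence_labeler(s.text)]

# prior_knowledge = [abnormal_sentences(r, labeler) for r in top_k_reports(dom_i, dom_pool, reports)]
```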
By extracting all possible relevant information regarding lesions, we augment the quantity of valuable information derived from the prior knowledge. Subsequently, we encode both the original image and prior knowledge to obtain multimodal representations. The textual representation preserves vocabulary-level information, empowering the decoder to generate higher-quality medical image reports using the copying mechanism. The clinical effectiveness of the generated reports has been validated through the analysis of relevant metrics. Typically, after Retrieval, the basic data unit is (I_i,T_i,R_1,s_2,...s_m), where s_i denotes the abnoumal sentences, and m is the number of sentences. §.§.§ Fact-consistent image report generation module Pointer Networks<cit.>are specifically designed for sequential decision-making tasks, addressing scenarios where the network needs to select elements from an input sequence based on contextual information. Unlike generating discrete tokens, Pointer Networks employ attention mechanisms to directly output indices or positions within the input sequence. The architecture comprises an encoder, attention mechanism, and decoder, enabling it to handle output sequences of variable lengths. Pointer Networks have demonstrated effectiveness in tasks such as routing, combinatorial optimization, and structure parsing. Furthermore, they have been widely adopted in text summarization, as the copying mechanism effectively captures key words in the input text, enhancing the accuracy of summary extraction. Specifically, the copying mechanism posits that the output generated by a model is derived from the input. In each time-step of the decoder, general seq2seq<cit.> model produces the vector that influences content-based attention weights corresponding to the input sequence. In the case of the pointer network, these attention weights act as indicators pointing to specific positions in the input sequence. The input time-step with the highest weight is considered the output for that particular decoder time-step. The formulation can be be expressed in <ref> u_j^i=v^T tanh(W_1 e_j+W_2 d_i) j ∈(1, …, n) p(C_i | C_1, …, C_i-1, 𝒫)=softmax(u^i) the encoder and decoder hidden states as (e_1, . . . , e_n) and (d_1, . . . , d_p), and v^T,W_1 are all parameters of the model. The output of softmax operation points to the input token having the maximum value. In the process of generating medical imaging reports, doctors also exhibit a similar implicit thinking process. When reviewing medical images, doctors draw upon their encounters with similar images in the past and reference previous writing styles while composing reports. Moreover, they need to consider the distinctions between existing imaging and reference images. Our model effectively simulates this thinking process. As the decoder produces an imaging report, it can simultaneously consider highly reliable prior knowledge and the original input image. The image report generation module, based on fact consistency, fully leverages vocabulary-level prior information while focusing on the input image, resulting in more precise medical imaging reports. Our specific implementation is as follows: p(𝒞^𝒫|𝒫 ; θ)=∏_i=1^m(𝒫) p_θ(C_i | C_1, … ,C_i-1; 𝒦; ℐ ; θ) here 𝒞^𝒫={C_1, … ,C_m(𝒫)} is target report, consisting a sequence of text tokens. ℐ={I_1, … ,I_n} is the the radiographics, are mede up of a sequence of image pathes tokens. 𝒦={K_1, … ,K_m} is prior knowledge, which is composed of a sequence of text tokens. 
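Before specializing this factorization, a minimal PyTorch sketch of the additive pointer scoring reviewed above, u_j^i = v^T tanh(W_1 e_j + W_2 d_i) followed by a softmax over input positions, may be helpful; it shows the generic mechanism only, not the full decoder of this work.

```python
import torch
import torch.nn as nn

class PointerAttention(nn.Module):
    """u_j = v^T tanh(W1 e_j + W2 d); softmax over the input positions j."""
    def __init__(self, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.W1 = nn.Linear(enc_dim, attn_dim, bias=False)
        self.W2 = nn.Linear(dec_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, enc_states, dec_state):
        # enc_states: (B, n, enc_dim), dec_state: (B, dec_dim)
        u = self.v(torch.tanh(self.W1(enc_states) + self.W2(dec_state).unsqueeze(1))).squeeze(-1)
        return torch.softmax(u, dim=-1)   # (B, n): probability of pointing at each input token
```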
Given a training triplet((I_i,T_i,K_i),We denote the final output of the decoder as z_1, ..., z_t, our model aims at computing the conditional probability z_i: 3 𝐘(z_i)=(𝐘_Gen(z_i)+𝐘_Copy(z_i))/2 The output probability 𝐘(z_i)is composed of both the image attention 𝐘_Gen(z_i)and the prior knowledge attention 𝐘_Copy(z_i) in the model. Specifically, the model generates predictions by attending to both the input image and prior knowledge. For the image attention part, we calculate the attention coefficients between the decoder hidden state vector and all the encoder input hidden state vectors. The softmax function normalize the attention coefficients u_j^i over all the image patches in the input. [ u_j^i =v^T tanh(W_1 I_j+W_2 d_i) j ∈(1, …, n); a_j^i =softmax(u_j^i) ; d_i^' =∑_j=1^n a_j^i e_j ] where v^T,W_1,W_2 are all learnable parameters, I_j is the element of the input image patch sequences ℐ. The d_i^' is the generation vector,d_i is the decoder hidden states. Then, the 𝐘_Gen(z_i) is obtained by: 𝐘_Gen(z_i)=softmax(Linear(d_i^';d_i)) Lastly, d_i^' and d_i are concatenated and used as the hidden states from which we make predictions and which we feed to the next time step in the recurrent model. For the part that incorporates prior knowledge, we calculate the similarity coefficients between the decoder hidden state vector and the token embeddings of all the prior knowledge inputs. The attention coefficients indicate the relevance of the prior knowledge to the current decoding step, and are used to compute the probability of outputting each token at the current time step. Specifically, in the last transformer block, attention weights are generated that represent the probabilities of copying the text from each token of prior knowledge. We define a token z_i to be produced if a node k_j is selected, and the text of that token starts with z_i. The computation process for incorporating prior knowledge using attention is as follows: [ u_j^i =v^T tanh(W_1 K_j+W_2 d_i) j ∈(1, …, n); a_j^i =softmax(u_j^i) ; ] 𝐘_Copy(z_i)=∑_j ∈ V j=z_i a_j^i In summary, our model for generating image reports considers both the token generation probability from the input image and the token copying probability from the prior knowledge, which helps improve the quality of the generated reports and their clinical relevance. The detail of the Fact-consistent image report generation module is illustated in <ref> § EXPERIMENTS §.§ Datasets and Tool §.§.§ IU-Xray IU X-Ray is a widely recognized benchmark dataset for evaluating medical image report generation models. The dataset comprises over 7470 chest X-rays and 3955 corresponding radiology reports, which have been manually annotated by expert radiologists. The reports typically consist of multiple sentences, outlining the observations, impressions, and recommendations based on the images. We adopt the "Findings" section which provides a detailed paragraph describing the observed evidence as the target sequence. §.§.§ MIMIC-CXR MIMIC-CXR is a dataset comprised of 64,588 patients collected at the Beth Israel Deaconess Medical Center between 2011 and 2016. The MIMIC-CXR dataset was collected over multiple years and encompasses a large volume of patient data. It provides a rich resource of chest X-ray images and associated radiology reports, enabling extensive research and algorithm development in the field of chest imaging analysis. It includes 77,110 chest X-ray images and 227,835 corresponding free-text radiology reports. 
To ensure experimental fairness, we followed the experimental setup of previous studies, resulting in a training set of 222,758 samples, and validation and test sets consisting of 1,808 and 3,269 samples. §.§.§ Chexbert CheXbert is a method that combines automatic labeling and expert annotations to accurately label radiology reports using BERT. It can annotate 14 common medical observations, including fractures, consolidation, enlarged mediastinum, not detected, pleural other, cardiomegaly, pneumothorax, atelectasis, support devices, edema, pleural effusion, lung lesions, and lung opacities. We utilized CheXbert to perform label extraction on IU-Xray and MIMIC-CXR datasets, extracting corresponding disease labels. Since CheXbert was pre-trained on MIMIC-CXR, it provides more accurate disease label extraction for MIMIC-CXR. Therefore, subsequent analysis experiments, such as ablation studies, were mainly conducted based on MIMIC-CXR. §.§ Implementation Details In the first stage, we employ ResNet<cit.> with GAP layer as the network for disease classification, and adopt Class Activation Maps (CAM)<cit.> as the method for generating disease-oriented masks. The size of each class activation map for each category is 2242241. Furthermore, Chexbert provides a total of 14 disease labels. By aggregating multiple class activation maps along the channel dimension, we obtain a disease-oriented mask with dimensions of 224*224*14. During the generation of disease-oriented masks, we utilize Singular Value Decomposition (SVD) to reduce the dimensionality of the aggregated class activation maps, thereby enhancing the retrieval efficiency and reducing the storage space required for the masks. After compression, the size of each disease-oriented mask is 224*224*3. In the second stage, we get the disease-oriented mask vectorization and computed the similarity between the disease-oriented mask of the source image and the disease-oriented masks in the mask pool, then selected the top k medical image reports that strictly cannot correspond to the source image as prior knowledge. k is a hyperparameter, and we define it as 3 in our main experiment. We used all samples in the training set to construct the disease-oriented mask pool. Subsequently, we extract sentences from the selected reports that indicate the presence of diseases, serving as prior knowledge. Specifically, we use Spacy<cit.> to tokenize a given reference imaging report into sentences based on punctuation. After dividing the text into individual sentences, we apply Chexbert for annotation analysis. We retain the sentences that contain positive disease labels as reference prior knowledge. We feed both the original images and the prior knowledge into a multimodal input encoder. The number of layers in the multimodal input encoder and text decoder is set to 3, and the number of attention heads is set to 8. All input images are resized to 224*224 pixels and split into 7*7 patches. We concentrate several abnormal sentences.The max token length of prior knowledge is set 100. The maximum output token length is set to 60. We utilize the Adam optimizer with a learning rate of 1e-4. The training process spans 100 epochs, while maintaining the same parameter settings for both datasets. We use two NVIDIA A40 GPUs to trained our model and set the batch size 32. The experimental settings remain consistent across the IU-Xray and MIMIC-CXR. 
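Before turning to evaluation, the decoding rule of the fact-consistent module can be made concrete with the short sketch below, which averages a generation distribution with a copy distribution obtained by scattering the prior-knowledge attention weights onto the corresponding vocabulary ids, i.e. Y(z_i) = (Y_Gen(z_i) + Y_Copy(z_i))/2. The tensor names are illustrative and the snippet is a sketch rather than the authors' code.

```python
import torch

def fact_consistent_step(gen_logits, copy_attn, prior_token_ids):
    """Combine generation and copy distributions for one decoding step.

    gen_logits:      (B, V)  decoder logits over the vocabulary (before softmax)
    copy_attn:       (B, m)  attention weights over the m prior-knowledge tokens (sum to 1)
    prior_token_ids: (B, m)  vocabulary ids of those prior-knowledge tokens
    """
    p_gen = torch.softmax(gen_logits, dim=-1)             # Y_Gen(z_i)
    p_copy = torch.zeros_like(p_gen)
    p_copy.scatter_add_(1, prior_token_ids, copy_attn)    # Y_Copy(z_i): sum of a_j over tokens equal to z_i
    return 0.5 * (p_gen + p_copy)                         # Y(z_i) = (Y_Gen + Y_Copy) / 2
```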
§.§ Evaluation Metric §.§.§ Natural Language Generation Metrics To evaluate the generated report quality, we adopted BLUE-1,BLUE-2,BLUE-3,BLUE-4<cit.>,ROUGE-L<cit.>,METEOR<cit.>,CIDR<cit.> as the NLG metric. The assessment of predictive reports' descriptive accuracy relies on the utilization of NLG metrics. BLEU (Bilingual Evaluation Understudy) was originally designed for machine translation tasks and calculates the overlap of word n-grams between predictions and references, capturing the semantic and fluency aspects of sentences. However, it has limitations in accurately evaluating long sentences. Rouge effectively considers recall and enables effective evaluation of long texts. Meteor combines multiple evaluation metrics, taking into account precision, recall, and edit distance, among other factors. It considers the similarity of synonyms and word order, which allows for better evaluation of semantic variations. CIDER considers lexical consistency and coherence evaluation, providing a comprehensive assessment of the similarity between generated descriptions and reference descriptions. It is widely used in image captioning tasks. The CIDEr value is used to evaluate whether a model can generate more accurate thematic descriptions by assessing the frequency of non-repetitive sentences in the training set. §.§.§ Clinical Efficacy Metrics To evaluate the clinical efficacy of the generated report, we utilized Chexbert<cit.> to extract disease labels from the model-generated report. Precision, recall, and F1-score were employed as metrics to assess the clinical efficacy of our model. It should be noted that Chexbert was pretrained on MIMIC-CXR, and its extraction results may not be sufficiently accurate for IU-Xray. Therefore, we only present the clinical efficacy metrics based on the MIMIC-CXR datasets. § RESULTS AND DISCUSSION §.§ Comparison with SOTA §.§.§ Description Accuracy We compare our methods with a range of previous SOTA radiology report generation methods and image captioning methods. For Image captioning methods, State-of-the-art (SOTA) models in the field of image captioning, such as ADAATT<cit.>,ATT2IN<cit.>,CoAT<cit.>, that utilize encoder-decoder architectures, are included in the comparison. For previous radiology report generation methods, we compared our methods with R2Gen<cit.>,CMN<cit.> which employ the memory mechanism to restore the patter information during train process and other knowledge enhanced radiology generation methods like KERP<cit.>,HRGR<cit.>,ARRG<cit.> Methods like CMCL<cit.>,CA<cit.> utilize contrastive learning to model the pairing relationship between images and text, enhancing the generation of image captions. As show in <ref>,our methods achieve State-of-the-Art among all metrics comparing with all other methods. This indicates that our model has achieved comprehensive improvement in terms of language fluency and accuracy in generating medical image reports. Specifically, Our model surpasses others in terms of CIDEr<cit.> and ROUGE-L<cit.> metrics, while achieving comparable results in BLEU-4<cit.> and METEOR metrics. The superior CIDEr values indicate that our model avoids redundant sentences from the training set and produces reports with more precise and relevant topics. The improvement in the CIDER metric indicates that our model has effectively addressed the issue of generating redundant text and to some extent mitigated data bias. Additionally, we prioritize clinical correctness in our approach. 
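Before presenting the clinical efficacy results, a brief sketch of how these metrics can be computed once a CheXbert-style labeler has converted the generated and reference reports into 14-dimensional binary label vectors; the labeler call itself is assumed and not shown.

```python
import numpy as np

def clinical_efficacy(pred_labels, ref_labels):
    """Micro-averaged precision/recall/F1 over the 14 CheXbert observations.

    pred_labels, ref_labels: (N, 14) binary arrays (1 = positive finding)."""
    tp = np.sum((pred_labels == 1) & (ref_labels == 1))
    fp = np.sum((pred_labels == 1) & (ref_labels == 0))
    fn = np.sum((pred_labels == 0) & (ref_labels == 1))
    precision = tp / (tp + fp + 1e-8)
    recall = tp / (tp + fn + 1e-8)
    f1 = 2 * precision * recall / (precision + recall + 1e-8)
    return precision, recall, f1
```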
§.§.§ Clinical Efficacy We evaluate the model by adopting chexbert<cit.> to extract common abnormalities from the generated radiology reports. Due to ChexBERT<cit.> was only trained on MIMIC-CXR, we present the clinical effectiveness metrics of MIMIC-CXR to demonstrate the clinical efficacy of our model. Since could not access the code of some methods, we only compare our results with some methods can be reproduced or report the clinical effectiveness. In <ref>,it is evident that our model has made substantial advancements in terms of clinical effectiveness, exhibiting notable improvements in clinical accuracy, precision, and F1 score. These results highlight the remarkable efficacy of our model. Moreover, it demonstrates that incorporating prior abnormal knowledge as input effectively aids the model in focusing on abnormal information when generating corresponding medical imaging reports. This approach leads to the production of more reliable medical imaging reports. §.§ Ablation Study §.§.§ Effectiveness of every component To assess the effectiveness of each module, we conducted ablation experiments specifically targeting those modules. In order to contrast with the general approach of using image encoding vectors for retrieval, we trained the network's backbone using a commonly used image-based autoencoder for image reconstruction. Subsequently, we extracted image encoding vectors through the encoder for retrieval purposes. In Table <ref>, the term "with general" denotes the traditional retrieval approach. Furthermore, we incorporated the anomalous sentences retrieved by this method as prior knowledge embeddings into the model. The metrics reveal no significant degradation in language generation quality, but there is a noticeable decline in clinical effectiveness. After applying the general retrieval method in the model, there was a significant decrease of 5% in the F1 score. This suggests that our disease-oriented approach effectively enhances the quality of retrieved similar reports, thereby improving the model's perception of diseases. "W/o Retrieval" refers to the model structure after removing the entire retrieval branch, causing the model to become a general seq2seq model that generates image reports solely through a transformer encoder-decoder. The model experienced a significant decrease in both language quality and clinical effectiveness. Specifically, language quality metrics such as BL-3, RG-L, and MTOR decreased by 1.3%, 2.9%, and 3.0%, respectively. The F1 score showed a substantial decline of 7.2%. These results indicate that embedding prior knowledge can effectively assist the model in generating higher quality medical image reports. Further exploration is warranted in the realm of more efficient knowledge integration. "W/o FC mechanism" indicates the absence of a fact-consistent decoder based on the copying mechanism in the decoder. Similarly, we observed that the generated medical image reports did not exhibit significant degradation in language quality, but there was a severe decline in clinical effectiveness. Surprisingly, multiple language generation evaluation metrics of the model exhibited improvements, but there was a severe decline in clinical effectiveness indicators. Specifically, the language generation quality metrics, BL-3, RG-L, and MTOR, improved from 1.151 to 0.165, from 0.281 to 0.293, and from 0.145 to 0.152, respectively. However, the F1 score experienced a decline from 0.315 to 0.275. 
The increase in language generation metrics may be attributed to the fact that when the model removes the copying mechanism, it tends to generate more normal descriptions. As normal descriptions constitute a large portion of medical image reports, it becomes a "shortcut" for improving language generation metrics. However, the decline in clinical effectiveness indicators demonstrates that although language generation metrics have improved, the actual quality of generated medical image reports has decreased. The introduced copying mechanism effectively utilizes prior input knowledge at the vocabulary level during the generation of medical image reports, leading the model to generate descriptions similar to the prior knowledge. This ultimately improves the clinical effectiveness of the model. Our ablation experiments on multiple branches validate the effectiveness of the disease-oriented masked similar report retrieval and the fact-consistent decoder based on the copying mechanism proposed in our model. §.§.§ The amount of reference reports We also conducted an ablation experiment to investigate the effectiveness of using different numbers of reference image reports in generating medical image reports. We maintained the same hyperparameter settings, and the specific experimental results are shown in <ref>. It was observed that when three reference image reports were used as input, the model achieved the best performance. This phenomenon suggests that in the generation of image reports, an excessive number of reference reports can introduce redundant information, causing the model to overlook important information in the prior knowledge. On the other hand, using too few reference reports can result in insufficient experiential knowledge for the model, leading to performance degradation. §.§ Qualitative Results: The <ref> presents a quantitative analysis of our model on the MIMIC dataset, where we provide a comparison between the reports generated by our model and the similar image reports retrieved through different methods. We have highlighted the abnormal sentences in different colors, while the reports describe normal conditions. We compare the reports obtained using the disease-oriented mask retrieval approach with those obtained using the conventional image retrieval method, selecting three reference reports as benchmarks. Through this comparison, we have found that the disease-oriented mask retrieval approach yields more accurate reference reports compared to the traditional image reconstruction-based image retrieval method. We observe that many disease-related abnormal description sentences in the Ground Truth exhibit a high degree of similarity with the abnormal description sentences in the retrieved reports, particularly in terms of the location information and morphological features of the diseases, indicating a significant overlap. By comparing the retrieved reports, the reports generated by our model, and the Ground Truth, we find that our proposed fact-consistent decoder based on the copy mechanism effectively incorporates relevant information from the reference similar reports. This enables the model to produce descriptive statements for important disease descriptions by leveraging the copied content, thereby enhancing the model's perception of diseases and clinical effectiveness in generating medical image reports. § CONCLUSION We propose a method based on disease-oriented mask retrieval, knowledge embedding, and fact-consistent image report generation. 
Our work encompasses two main innovations. Firstly, we are the first to generate disease-oriented masks for each sample by utilizing a classification model to generate multi-class class activation maps and then aggregating and reducing them. These disease-oriented masks possess powerful disease representation capabilities, encompassing rich disease categories, morphology, and positional information. Extensive experimental results demonstrate that disease-oriented masks can replace original images for retrieval, as the retrieved medical image reports often exhibit high similarity to the target reports. This provides the model with strong prior knowledge, significantly enhancing its clinical effectiveness. Secondly, we propose a fact-consistent decoder based on the copy mechanism. This decoder can effectively leverage vocab-level prior information while also considering the input image information, enabling comprehensive output generation. Extensive experiments demonstrate that our model achieves remarkable performance improvements in terms of language generation quality and clinical effectiveness on two large benchmark datasets, IU-Xray and MIMIC-CXR. Moreover, the disease-oriented masks we propose can be further paired with medical image reports due to their stronger semantic representation capabilities. The fact-consistent decoder based on the copy mechanism, proposed in our work, can be effectively applied to the domain of medical image report generation, which requires highly specialized training. Its powerful copying ability simulates the process of radiologists writing image reports, and qualitative analysis indicates that the decoder can replicate highly credible prior knowledge, thus enhancing the clinical effectiveness of our proposed model. For future work, as our approach heavily relies on similar text reports as knowledge, which may have relatively narrow expertise, and lacks broad domain knowledge as credibility constraints, we look forward to incorporating medical textbooks like PUMed to improve the model's understanding of knowledge. Additionally, using widely accepted textbook language can provide factual constraints for image report generation models, expanding their applicability. IEEEtran
http://arxiv.org/abs/2307.04272v1
20230709215943
Nuclear-spin-dependent corrections to the transition polarizability in cesium
[ "D. Xiao", "H. B. Tran Tan", "A. Derevianko" ]
physics.atom-ph
[ "physics.atom-ph" ]
The Stark-interference technique is commonly used to amplify the feeble parity-violating signal in atomic experiments. As a result, interpretation of these experiments in terms of electroweak observables requires knowledge of the Stark-induced E1 transition amplitudes or, equivalently, transition polarizabilities. While the literature assumes that these transition polarizabilities do not depend on the nuclear spin, here we prove the contrary. The nuclear spin dependence arises due to hyperfine mixing of atomic states and requires a third-order perturbation theory (one hyperfine interaction and two electric-dipole interactions) treatment. We demonstrate that the so far neglected tensor contribution appears in the transition polarizability and present numerical results for the nuclear-spin-dependent corrections to the 6S_1/2→7S_1/2 transition polarizability in ^133Cs. We investigate the effect of these corrections to transition polarizabilities on the extraction of the ^133Cs anapole moment from the Boulder experiment [Science 275, 1759 (1997)]. We also consider their effect on the extraction of the ratio between the scalar and vector transition polarizabilities from the measurements [Phys. Rev. A 55, 2 (1997)]. While the corrections are minor at the current level of experimental accuracy, our analysis provides a framework for future experiments. Department of Physics, University of Nevada, Reno, 89557, USA Department of Physics, University of Nevada, Reno, 89557, USA [][email protected] Department of Physics, University of Nevada, Reno, 89557, USA Nuclear-spin-dependent corrections to the transition polarizability in cesium A. Derevianko August 12, 2023 ============================================================================= § INTRODUCTION In 1988, an experiment performed by the Boulder group <cit.> provided the first evidence of the nuclear-spin-dependent parity-non-conserving (PNC) interactions in ^133Cs atom, which later led to the discovery of the ^133Cs nuclear anapole moment <cit.>. However, the extracted nuclear anapole moment <cit.> has been found to disagree with the nuclear-physics determination <cit.>. In nuclear physics, to bridge different manifestations of PNC, theorists operate in terms of the weak meson-nucleon couplings <cit.>. The weak meson-nucleon couplings propagate through the nuclear structure evaluation of anapole moments <cit.> and other nuclear processes. Linear combinations of these couplings can be constrained by comparison with available experimental data and theoretical estimates within the Standard Model framework. In particular, the nuclear physics constraints come from the scattering of polarized protons on unpolarized protons and ^4He targets as well as the emission of circularly polarized photons from ^18F and ^19F nuclei. These constraints form nuclear experimental bands whose intersection yields the nuclear physics determinations of the couplings. However, the bounds derived from the measured Cs anapole moment lie outside this nuclear physics favored region. Our paper is motivated in part by this tension between the nuclear and atomic physics determinations of the weak meson-nucleon couplings. The anapole moment is extracted from the difference between the two measured PNC amplitudes E1_PNC connecting different hyperfine components of the ground 6S_1/2 and the excited 7S_1/2 states in ^133Cs. The Boulder results read <cit.> Im(E1_PNC)/β= -1.6349(80) mV/cm for 6S_1/2, F_i=4→7S_1/2, F_f=3 , -1.5576(77) mV/cm for 6S_1/2, F_i=3→7S_1/2, F_f=4 . 
Here, F is the grand total angular momentum in Cs formed by adding the nuclear spin I=7/2 and the total electronic angular momentum J, and β is the vector transition polarizability. A weighted average of the two values in Eq. (<ref>) yields the nuclear-spin-independent electroweak observable (weak charge), while their difference – the nuclear-spin-dependent quantity (nuclear anapole moment). Notice the appearance of the vector transition polarizability β in the results (<ref>), as the Boulder group used the Stark-interference technique <cit.>. This technique amplifies the feeble PNC effect by the means of an externally applied DC electric field which opens an additional Stark-induced excitation pathway for the nominally E1-forbidden 6S_1/2→ 7S_1/2 transition. Then the transition rate acquires a cross-term between the Stark-induced and PNC amplitudes. This interference term flips sign under parity reversals enabling its experimental extraction. One of the assumptions made in the Boulder analysis is that β does not depend on the nuclear spin. Contrary to this assumption, here we identify nuclear spin-dependent corrections to the Stark-induced transition amplitudes or, equivalently, to the transition polarizabilities (β in particular). While the effects of our newly-introduced corrections turn out to be negligible at the Boulder experiment's level of accuracy, our analysis provides a framework for future experimental efforts. The paper is organized as follows. In Sec. <ref>, we review the Stark-interference technique and derive the second-order transition polarizabilities. The hyperfine-mediated corrections to the transition polarizabilities are derived in Sec. <ref> and numerically evaluated in Sec. <ref>. Our reanalysis of the Boulder APV experiment <cit.> is given in Sec. <ref>. We also compute correction to the experimentally extracted ratio of the vector and scalar transition polarizabilities in Sec. <ref>. While we keep the discussion sufficiently general, all our numerical work refers to the 6S_1/2→ 7S_1/2 transition in ^133Cs. Unless stated otherwise, atomic units are used throughout. § GENERALIZATION OF STARK-INDUCED TRANSITION POLARIZABILITY We are interested in driving an electric-dipole transition from the an initial state i to a final state f. We assume that these states are of the same parity, precluding E1 transitions. To open the otherwise forbidden E1 pathway, we apply a DC electric field which admixes intermediate states of opposite parity into i and f <cit.>. The relevant amplitude for the resulting E1 transition between such mixed states can be derived in the second order of perturbation theory (see Ref. <cit.> for a detailed derivation). The two perturbations are the electric dipole interactions with the applied DC and driving laser fields. The Stark-induced transition amplitude A_i→f is conventionally expressed in terms of the transition polarizability a_i→f as A_i→f= a_i→fℰ_s ℰ_L, which factors out ℰ_s and ℰ_L, the static and laser field amplitudes, respectively. The transition polarizability for the transitions between two S_1/2 states is conventionally parameterized as <cit.> a_i→f= α(ε̂̌̂·ě̂) δ_F_iF_fδ_M_iM_f+ i β(ě̂×ε̂̌̂ ) ·⟨f|σ̌|i⟩ . Here, the two atomic-structure-dependent quantities α and β are the scalar and the vector transition polarizabilities. The unit vectors ε̂̌̂ and ě̂ characterize polarizations of the laser and static electric fields, respectively. 
The states i and f are hyperfine basis states, e.g, |i⟩ = |n_i(IJ_i)F_iM_i⟩ is a state of grand total angular momentum F_i obtained by the conventional coupling of the total electron angular momentum J_i and the nuclear spin I_i, with M_i and n_i being the magnetic and principal quantum numbers. The matrix element of Pauli matrices σ̌ is understood as involving the angular parts of the wavefunctions. Qualitatively, Eq. (<ref>) is obtained <cit.> in the second order of perturbation theory by recouping the product of two dipole couplings (Ď·ε̂̌̂) (Ď·ě̂) into a sum over the irreducible tensor operators (ITO) containing scalar products of compound tensors[A scalar product of two rank-k ITOs is understood as P^(k)·Q^(k) = ∑_q=-k^k (-1)^q P^(k)_qQ^(k)_-q , and a compound ITO of rank Q is defined as {P^(k_1)⊗R^(k_2)}_q^(Q)=∑_q_1q_2 C^Qq_k_1q_1k_2q_2 P^(k_1)_q_1 R^(k_2)_q_2 , where q_1 and q_2 label the spherical basis components of the ITOs with C^Qq_k_1q_1k_2q_2 being the conventional Clebsch-Gordan coefficients. ] (ε̂̌̂⊗ě̂)^(Q)· (Ď⊗Ď)^(Q). Here, Ď is the electron electric dipole moment operator. Based on the angular selection rules, the rank Q can accept the values of 0, 1, and 2, corresponding to the scalar, vector, and tensor contributions. Hereto, previous analyses of the 6S_1/2→ 7S_1/2 transition polarizability in Cs have neglected the tensor (Q=2) contribution. The reason for this is that the dipole operators involve only electronic degrees of freedom and the matrix element of the rank-2 tensor between the S_1/2 states vanishes due to the angular selection rules. However, if we account for the hyperfine interaction (HFI), the states involved would need to be characterized by the grand-total angular momentum F and the tensor contribution would no longer vanish since F=3 or 4 for the hyperfine manifolds attached to the S_1/2 electronic states in ^133Cs. Notice that the inclusion of the HFI requires a third-order perturbation theory treatment and therefore leads to the tensor contribution being suppressed compared to the scalar and vector contributions. The tensor contribution to Eq. (<ref>) can be parameterized as a_i→f = ... + γ⟨f|{I⊗I}^(2)|i⟩(ε̂̌̂⊗ě̂)^(2) , where our newly-introduced tensor transition polarizability γ depends on both the nuclear and electronic structure. We have introduced an auxiliary rank-2 tensor {I⊗I}^(2) in front of the tensor polarizability to factor out the dependence on magnetic quantum numbers. Combined with this tensor term, Eq. (<ref>) is the most general parametrization of the transition polarizability as long as we only keep interactions linear in the static and laser fields. It is worth noting that in the second order, due to a particular selection of prefactors in Eq. (<ref>), α and β do not depend on the hyperfine components of the initial and final states. We will demonstrate that the HFI-mediated corrections would introduce the F_i- and F_f-dependence to the scalar and vector polarizabilities. Based on these arguments, and taking into account the fact that the HFI is a scalar (see the discussion in Sec. <ref>), we rewrite Eq. 
(<ref>) in the following generalized form that now includes the tensor contribution (<ref>), as well as the F-dependence of the scalar and vector polarizabilities a_i→f =-√(3(2F_f+1))w_0( ε̂̌̂,ě̂)α^F_i → F_fδ_F_iF_fδ_M_iM_f -√(2)fσiw_1( ε̂̌̂,ě̂) β^F_i → F_f +f{I⊗I}^(2)iw_2( ε̂̌̂,ě̂)γ^F_i → F_f , where we have used the Wigner-Eckart theorem and introduced the multipolar polarization weights <cit.> w_Q( ε̂̌̂,ě̂) =(-1)^Q∑_M_Q(-1)^M_Q+F_f-M_f ×F_fQF_i-M_f-M_QM_i(ε̂̌̂⊗ě̂)_M_Q^(Q) , with M_f, M_Q, and M_i being the magnetic quantum numbers. Note that selection rules fix the value of M_Q = M_i -M_f. The compound tensors of rank Q for the two vectors ε̂̌̂ and ě̂ are understood as (ε̂̌̂⊗ě̂)^(Q)_M_Q=∑_μνC_1μ1ν^Q M_Qϵ̂_μê_ν, where C_1μ1ν^Q M_Q are Clebsch-Gordan coefficients and the A_μ components of a vector Ǎ in the spherical (or helicity) basis are expressed in terms of its Cartesian components as <cit.> A_0 = A_z, A_+1 = - (A_x + i A_y )/√(2), A_-1 = (A_x - i A_y )/√(2). In particular, the combinations of polarization vectors are (ε̂̌̂⊗ě̂)^(0) =- (ε̂̌̂·ě̂)/√(3) and (ε̂̌̂⊗ě̂)^(1) =i (ε̂̌̂×ě̂)/√(2), in agreement with Eq. (<ref>). We will consider the relevant components of the rank-2 tensor (ε̂̌̂⊗ě̂)^(2) in Sec. <ref>. Here, we note that the reduced matrix element of the auxiliary rank-2 tensor {I⊗I}^(2) present in Eq. (<ref>) is given by f{I⊗I}^(2)i =(-1)^2F_i-F_f+I-J_f√(5) [F_f,F_i]^1/2 × I(I+1)[I]112IIIδ_J_iJ_f , where [J_1, J_2, ... J_n]≡(2J_1+1)(2J_2+1)…(2J_n+1). For our target 6S_1/2→ 7S_1/2 transition in ^133Cs, the above expression evaluates to F_f{I⊗I}^(2)F_i =(-1)^F_f6√(35)[F_f,F_i]^1/2 , which, for the special case where F_f,i=3,4, gives 3{I⊗I}^(2)3 = -42 √(35) , 4{I⊗I}^(2)4 = 54 √(35) , 3{I⊗I}^(2)4 = -126 √(5) , 4{I⊗I}^(2)3 = 126 √(5) . We will need these values in our analysis for ^133Cs. To reiterate, due to the HFI, the scalar, vector, and tensor transition polarizabilities entering Eq. (<ref>) have an F-dependence of the form (X=α,β,γ) X^F_i → F_f = X^[2] + δ X^F_i → F_f , where the second-order term X^[2] is F-independent. For the S_1/2→ S_1/2 transitions, γ^[2]≡ 0, thereby γ^F_i → F_f =δγ^F_i → F_f. Expressions for the second-order scalar and vector transition polarizabilities, α^[2] and β^[2], can be found, e.g., in Ref. <cit.>. Substantial attention <cit.> has been paid over the years to determining their accurate values since they are required for interpreting the results of APV experiments. As the reference values for the second-order polarizabilities for the 6S_1/2→ 7S_1/2 transition in ^133Cs, we use values computed recently by our group <cit.> α^[2] = -266.31(23) , β^[2] = 26.912(30) , γ^[2] = 0 . These values are in atomic units, a_0^3, where a_0 is the Bohr radius. We now proceed to the derivation of the hyperfine corrections δα^F_i → F_f, δβ^F_i → F_f, and δγ^F_i → F_f to transition polarizabilities. § HYPERFINE CORRECTIONS TO TRANSITION POLARIZABILITIES To evaluate the hyperfine-mediated corrections to the transition polarizability, we follow the third-order formalism developed in Refs. <cit.>. References <cit.> computed the static differential polarizabilities for transitions between levels of the hyperfine manifold attached to the S_1/2 ground state. Reference <cit.> generalized that formalism to the evaluation of dynamic (AC) polarizabilities. These papers focused on the characterization of clock shifts, which formally map into the evaluation of the diagonal matrix elements of the transition amplitude. 
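The quoted values of the auxiliary reduced matrix element can be reproduced directly from the general expression above. The following short sympy check is our own illustration, not part of the original analysis; it evaluates the formula for I=7/2, J=1/2 and compares with the four values listed for ^133Cs.

```python
from sympy import S, sqrt, simplify
from sympy.physics.wigner import wigner_6j

I, J = S(7)/2, S(1)/2          # 133Cs nuclear spin and electronic J (S_1/2 states)

def reduced_II2(Ff, Fi):
    """<F_f || {I (x) I}^(2) || F_i> from the general formula quoted above."""
    phase = (-1)**(2*Fi - Ff + I - J)
    return (phase * sqrt(5) * sqrt((2*Ff + 1)*(2*Fi + 1))
            * I*(I + 1) * (2*I + 1) * wigner_6j(1, 1, 2, I, I, I))

quoted = {(3, 3): -42*sqrt(35), (4, 4): 54*sqrt(35),
          (3, 4): -126*sqrt(5), (4, 3): 126*sqrt(5)}
for (Ff, Fi), val in quoted.items():
    assert simplify(reduced_II2(Ff, Fi) - val) == 0
print("all four reduced matrix elements reproduced")
```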
Here we further generalize our earlier formalism and consider off-diagonal matrix elements of the transition amplitude. In the context of APV, Ref. <cit.> has considered transition polarizabilities (including tensor contribution) for transitions between hyperfine components attached to the Cs ground state. The four relevant diagrams representing third-order contributions to the i→ f transition amplitude, top (T), center (C), bottom (B), and residual (R), are shown in Fig. <ref>, with each diagram involving one hyperfine interaction and two E1 interactions (one with the laser, and another one with the static field). These diagrams are named after the position of the hyperfine interaction in the string of three operators. Explicitly, these terms read T_i→f =∑_abV_fa^HFI(ε̂̌̂·Ď_ab)(ě̂·Ď_bi)/ΔE_faΔE_ib +∑_abV_fa^HFI(ě̂·Ď_ab)(ε̂̌̂·Ď_bi)/ΔE_faΔE_fb , B_i→f =∑_ab(ε̂̌̂·Ď_fa)(ě̂·Ď_ab)V_bi^HFI/ΔE_iaΔE_ib +∑_ab(ě̂·Ď_fa)(ε̂̌̂·Ď_ab)V_bi^HFI/ΔE_faΔE_ib , C_i→f =∑_ab(ε̂̌̂·Ď_fa)V_ab^HFI(ě̂·Ď_bi)/ΔE_iaΔE_ib +∑_ab(ě̂·Ď_fa)V_ab^HFI(ε̂̌̂·Ď_bi)/ΔE_faΔE_fb , R_i→f =-V_ii^HFI∑_a(ε̂̌̂·Ď_fa)(ě̂·Ď_ai)/(ΔE_ia)^2 -V_ff^HFI∑_a(ě̂·Ď_fa)(ε̂̌̂·Ď_ai)/(ΔE_fa)^2 , where Δ E_ij≡ E_i-E_j. Note that the two terms inside each combination differ by the swap of the two polarization vectors ε̂̌̂ and ě̂. Otherwise, the structure of the terms is similar. Further, the bottom and top diagrams are related as B_f↔i=T^*_i↔f. Before carrying out the angular reduction of the expressions above , we briefly review the hyperfine interaction present in Eqs. (<ref>). Following notation of Ref. <cit.>, the interaction of electrons with nuclear multipolar moments may be expressed as V^HFI=∑_N𝒯^(N)·𝒩^(N) , where the rank-N tensors 𝒯^(N) act in the electron space, and 𝒩^(N) act in the nuclear space. Note that V^HFI is a scalar ITO. The nuclear reduced matrix elements γI𝒩^(N)γI are expressed in terms of the conventional nuclear magnetic-dipole (M1) μ and electric-quadrupole (E2) Q moments as γI𝒩^(1)γI = √((2I+1)(I+1)/I)μ , γI𝒩^(2)γI = √((2I +1)(I+1)(2I+3)/ 4I(2I -1)) Q . Here the magnetic-dipole moment μ≡ g_I Iμ_N with μ_N being the nuclear magneton and g_I being the gyromagnetic ratio. For ^133Cs, g_I=0.73714. As for the nuclear electric-quadrupole moment Q, the measured hyperfine constant B can be used to extract its value using theoretical values of the hyperfine electronic matrix elements. However, different measurements of B yield different determinations. For instance, the measured <cit.> hyperfine constant B in the ^133Cs 6P_3/2 state is -0.4934(17) MHz, which differs from a more recent result <cit.> of, -0.5266(57) MHz by about 7%. Because the uncertainty in B of Ref. <cit.> is smaller, we simply adopt the value Q=-3.55(4) mbarn therefrom. Moreover, we find that the nuclear quadrupole contributions to the transition polarizabilities are suppressed compared to those due to the magnetic-dipole hyperfine interaction. For the same reason, we neglect even higher-rank nuclear multipoles, such as the poorly known magnetic octupole moment <cit.>, due to their diminishing role as compared to the magnetic-dipole contribution. To flesh out the tensorial structure of the transition polarizability resulting from the diagrams (<ref>), we use the same re-coupling angular momentum algebra technique as in our derivation of the second-order expressions <cit.>. Since the HFI is a scalar ITO, the resulting tensorial structure of the transition polarizability is indeed given by Eq. (<ref>). 
The hyperfine corrections to transition polarizabilities are therefore given by δα^F_i → F_f =-fT^(0)+B^(0)+C^(0)+R^(0)i/√(3(2F_f+1)) , δβ^F_i → F_f =-fT^(1)+B^(1)+C^(1)+R^(1)i/√(2)fσi , δγ^F_i → F_f =fT^(2)+B^(2)+C^(2)+R^(2)i/f{I⊗I}^(2)i . We remind the reader that the various transition polarizabilities entering Eq. (<ref>) are assembled as X^F_i → F_f = X^[2] + δ X^F_i → F_f , where the second-order term X^[2] is F-independent. We listed our recommended values <cit.> for the second-order transition polarizabilities in Eqs. (<ref>). The reduced matrix elements of individual diagrams entering Eqs. (<ref>) are given by fT^(Q)i =∑_NJ_aJ_b(-1)^F_f-F_i+J_a+J_i[F_f,F_i,Q]^1/2 ×F_fIJ_aNJ_fIQJ_iJ_aIF_fF_iQJ_iJ_aJ_b11 ×{S_T^(J_aJ_bN)[fi]+(-1)^QS^(J_aJ_bN)_T[ff]} , fB^(Q)i =∑_NJ_aJ_b(-1)^J_i+J_b[F_f,F_i,Q]^1/2 ×F_iIJ_bNJ_iIQJ_fJ_bIF_iF_fQJ_bJ_fJ_a11 ×{S_B^(J_aJ_bN)[ii]+(-1)^QS_B^(J_aJ_bN)[fi]} , fC^(Q)i = ∑_NJ_aJ_b(-1)^J_a-J_i+F_i-F_f+N+1 [F_f,F_i,Q]^1/2 ×∑_j [j] J_fJ_ijF_fF_iQIINJ_fJ_ij11QJ_aJ_bN ×{S_C^(J_aJ_bN)[ii]+(-1)^QS_C^(J_aJ_bN)[ff]} , fR^(Q)i =(-1)^2F_f-I+F_i+J_i+1[F_f,F_i,Q]^1/2 ×QJ_fJ_iIF_iF_f∑_J_aQJ_iJ_fJ_a11 ×{V[i]S_R^J_a[f]+(-1)^QV[f]S_R^J_a[i]} , which are expressed in terms of the reduced sums S_T^(J_aJ_bN)[αβ] =∑_n_an_bI𝒩^(N)In_fJ_f𝒯^(N)n_aJ_an_aJ_aDn_bJ_bn_bJ_bDn_iJ_i/ΔE_αaΔE_βb , S_B^(J_aJ_bN)[αβ] =∑_n_an_bn_fJ_fDn_aJ_an_aJ_aDn_bJ_bI𝒩^(N)In_bJ_b𝒯^(N)n_iJ_i/ΔE_αaΔE_βb , S_C^(J_aJ_bN)[αβ] =∑_n_an_bn_fJ_fDn_aJ_aI𝒩^(N)In_aJ_a𝒯^(N)n_bJ_bn_bJ_bDn_iJ_i/ΔE_αaΔE_βb , S_R^(J_a)[α] = ∑_n_an_fJ_fDn_aJ_an_aJ_aDn_iJ_i/(Δ E_α a)^2 , and the HFI diagonal matrix elements V[α]= (-1)^I+J_α+F_α∑_NF_αJ_αINIJ_α ×n_α J_α𝒯^(N)n_α J_αI𝒩^(N)I . § NUMERICAL RESULTS FOR HYPERFINE CORRECTIONS In Sec. <ref>, we have presented the formulation for the hyperfine corrections to the scalar, vector and tensor transition polarizabilities. In this section, we present our numerical results, which are compiled in Table <ref>. To arrive at these values, we employed relativistic many-body methods for computing atomic structure. A detailed discussion of these methods and their numerical implementation can be found in Refs. <cit.> and references therein. Simply put, we used the frozen-core V^N-1 Dirac-Hartree-Fock (DHF), Brueckner orbitals (BO), and random phase approximation (RPA). Among these approximations, the RPA(BO) approach is the most complete as it incorporates the core polarization and the core screening effects. The RPA(BO) results are listed in Table <ref>. In our calculations, we use a dual-kinetic balance B-spline DHF basis set <cit.> containing N=60 basis functions of order k=9 per partial wave generated in a cavity of radius R_max=250 a.u., the same as in Refs. <cit.>. To improve the accuracy of our calculations, we also employ a semi-empirical approach. To this end, we point out that there are three atomic properties entering the reduced sums: the energies, the E1 matrix elements, and the HFI matrix elements. We therefore replace a certain subset of ab initio RPA(BO) quantities with the experimental or other high-accuracy values. Determining this subset, however, requires some care. Indeed, although the low-n orbitals from the finite basis set closely resemble those obtained with the conventional finite-difference technique computed with practically infinite cavity radius, as n increases, the mapping of the basis states to physical states deteriorates. 
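Schematically, each of the reduced sums above is a sum over intermediate states of products of reduced matrix elements divided by energy denominators. The toy transcription below is our own sketch of that structure for the residual sum S_R; the state labels, energies and reduced E1 matrix elements are round-number placeholders, not the RPA(BO) or semi-empirical input of the actual calculation.

```python
# Schematic (toy) evaluation of the residual reduced sum
#   S_R^(J_a)[alpha] = sum_{n_a} <7S||D||n_a><n_a||D||6S> / (E_alpha - E_{n_a})^2 .
# All labels, energies and reduced E1 matrix elements below are placeholders.
energies = {"6S": -0.14, "7S": -0.06, "6P": -0.09, "7P": -0.04}     # a.u.
d_red = {("7S", "6P"): 4.2, ("6P", "6S"): 4.5,                      # <a||D||b>, a.u.
         ("7S", "7P"): 10.0, ("7P", "6S"): 0.3}

def S_R(intermediates, alpha, i="6S", f="7S"):
    """Direct sum over a truncated set of intermediate states of fixed J_a."""
    return sum(d_red[(f, a)] * d_red[(a, i)] / (energies[alpha] - energies[a]) ** 2
               for a in intermediates)

print(S_R(["6P", "7P"], alpha="6S"))    # enters the residual (R) diagram
```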
In our basis set, we find the boundary for the transition from physical to non-physical orbitals to be at the radial quantum number n_r=12, without loss of numerical accuracy for matrix elements and energies. Because of this, while evaluating the reduced sums, we use the NIST recommended <cit.> energies for the physical states, n_a, bP_J with n_a, b=6-12 and n_a, bD_J with n_a, b=5-11. For the same reasons, we replace the RPA(BO) E1 matrix elements for the 6S_1/2→ n_a, bP_J and 7S_1/2→ n_a, bP_J channels with their experimental values tabulated in Ref. <cit.> for n_a, b=6, 7 and with their high-accuracy relativistic coupled-cluster counterparts <cit.> for n_a, b=8-12. The semi-empirical matrix elements of the hyperfine interaction involve “physical” states with principal quantum numbers 6≤n_a, b≤12. We evaluate them as follows. The diagonal hyperfine matrix elements are extracted from the experimental values <cit.> of hyperfine constants A from the relation, A=⟨𝒯^(1)⟩_J⟨𝒩^(1)⟩_I/(IJ) , where ⟨𝒯^(1)⟩_J and ⟨𝒩^(1)⟩_I are the so-called stretched matrix elements expressed in terms of the reduced matrix elements ⟨𝒪^(N)⟩_J=[ J N J; -J 0 J ]γ J𝒪^(N)γ J . The off-diagonal HFI matrix elements between the S_1/2 states were evaluated as the geometric mean of the diagonal matrix elements <cit.> ⟨n'S_1/2|V^HFI|nS_1/2⟩ =⟨n'S_1/2|V^HFI|n'S_1/2⟩^1/2 ×⟨nS_1/2|V^HFI|nS_1/2⟩^1/2 , where the diagonal matrix elements come from the experimental values of the hyperfine constant A. The high accuracy of this approximation has been confirmed in Ref. <cit.>. The remaining off-diagonal magnetic-dipole HFS matrix elements between the “physical” states were determined using the relativistic coupled-cluster method, with the code described in <cit.>. As for the nuclear quadrupole HFI contributions, we found them to be suppressed. Thereby, we kept their RPA(BO) matrix elements. Our numerical results for the hyperfine corrections to transition polarizabilities are listed in Table <ref>. Overall, the corrections to the polarizabilities are below the 10^-2 a.u. level. The δα corrections are identically zero for the F_i ≠ F_f transitions due to the scalar nature of the underlying ITO. Otherwise, |δα^F_i→ F_f| ∼ 5 × 10^-3 is about 5 orders of magnitude smaller than |α^[2]| ≈ 3 × 10^2. As to the vector transition polarizability, |δβ^F_i→ F_f| is about 4-5 orders of magnitude smaller than |β^[2]| ≈ 3 × 10^1. The |δβ^F_i→ F_f| corrections to the F_i=F_f transitions are an order of magnitude larger than those for the F_i≠ F_f transitions. We observe that the tensor transition polarizability, γ^F_i → F_f =δγ^F_i → F_f, is of the order of 10^-5 a.u. The relative smallness of the numerical values of the tensor transition polarizabilities as compared to their scalar and vector counterparts is due, in part, to the large values, ∼ 3× 10^2, of the prefactors F_f{I⊗I}^(2)F_i in Eq. (<ref>). Further, γ^3 → 4 = γ^4 → 3 as can be proven by a direct examination of our analytical expressions. Finally, the difference between our RPA(BO) and semi-empirical estimates does not exceed 10%, which we take as the uncertainty of our results. § DISCUSSION We have presented the theoretical formulation and numerical estimate for the hyperfine corrections to the transition polarizabilities. In this section, we investigate the impact of neglecting the hyperfine-mediated tensor polarizability γ and the hyperfine-state dependence of the scalar α and vector β polarizabilities on the extraction of electroweak observables from APV experiments. 
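For orientation, the geometric-mean estimate of the off-diagonal 6S-7S hyperfine matrix element scales as the square root of the product of the two measured hyperfine constants A. A minimal sketch follows; the A values in the comments are indicative literature numbers quoted here only to set the scale.

```python
from math import sqrt

# Geometric-mean estimate of the off-diagonal 6S-7S HFI matrix element.
# For fixed I and J the diagonal matrix elements scale with the measured
# hyperfine constants A, so the off-diagonal estimate scales as sqrt(A6*A7).
A_6S = 2298.16   # 133Cs 6S_1/2 magnetic-dipole hyperfine constant (MHz)
A_7S = 545.90    # 133Cs 7S_1/2 hyperfine constant (MHz), indicative value

print(f"geometric-mean scale of the 6S-7S HFI coupling: {sqrt(A_6S * A_7S):.1f} MHz")
```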
In particular, we reanalyze two Boulder experiments <cit.> and compute corrections to their extracted value of the ^133Cs anapole moment and the ratio α/β of the scalar and vector transition polarizabilities. §.§ Reinterpretation of the Boulder parity violation measurement We start by reviewing the Boulder APV experiment <cit.> and the assumptions that went into its analysis. The experiment utilized the Stark interference technique to extract the ratio of the PNC amplitude to the vector transition polarizability, Im(E1_PNC)/β. Notice the use of β without specifying hyperfine components, as the hyperfine corrections were neglected. It is our goal to introduce F-dependent corrections to β here. The Boulder experiment used a spin-polarized ^133Cs beam subjected to a uniform and static electric field, with a laser driving the nominally E1-forbidden transition between various hyperfine components of the ground 6S_1/2 and the excited 7S_1/2 state. The DC electric field opens up an E1 transition channel between these states by mixing the S and P states. The total transition rate R is determined by a combination of the Stark-induced, parity-violating (PNC), and M1 transition amplitudes R=|A^Stark_i→f+A^PNC_i→f+A^M_1_i→f|^2 , where <cit.> A^Stark_i→f =αĚ_L·Ě_S δ_F_fF_iδ_M_fM_i +iβ(Ě_S×Ě_L)·⟨f|σ̌|i⟩, A^PNC_i→f =iIm(E1_ PNC)ℰ̌_S·⟨f|σ̌|i⟩, A^M1_i→f =(M1)_rad (ǩ̂_L×ℰ̌_S)·⟨f|σ̌|i⟩ . Here we have changed the notation of Ref. <cit.> to be consistent with that of the previous sections. In Eqs. (<ref>), Ě_L = ℰ_Lε̂̌̂ is the laser field driving the transition with ǩ̂_L being a unit vector in its propagation direction, ℰ̌_S=ℰ_Sě̂ is the DC electric field, and α and β are the scalar and vector transition polarizabilities, introduced in earlier sections. To set the stage, for now, as in Ref. <cit.>, we neglected the F-dependence of α and β and omitted the tensor (γ) contribution. The PNC amplitude E1_PNC includes both the nuclear spin-dependent and spin-independent effects and (M1)_rad stands for the radial integral of the 6S_1/2-7S_1/2 M1 matrix element <cit.>. We will neglect the A_i→f^M1 M1 amplitude for reasons discussed in Ref. <cit.>. The Stark interference technique amplifies the feeble PNC amplitude A^PNC_i→f with the help of the much stronger A^Stark_i→f amplitude: the interference between A^PNC_i→f and A^Stark_i→f manifests itself as a cross term when expanding the square in the rate expression, Eq. (<ref>). To access this Stark-PNC interference term, the experiment <cit.> involved measuring the change in the transition rate R, Eq. (<ref>), under various parity reversals, which included flipping the direction of the applied DC electric field, flipping the sign of the relevant component of the laser polarization, or changing the sign of the magnetic quantum numbers <cit.>. The PNC amplitude was extracted from two transition rates, R^+ and R^- measured under opposite parities. A parity reversal results in a sign flip of the PNC amplitude A^PNC_i→f, while leaving the sign of the Stark-induced amplitude A^Stark_i→f unaffected. The Stark-induced amplitude A^Stark_i→f in Eq. (<ref>) generally depends on both the scalar and vector polarizabilities. However, in the Boulder experiment, the transitions were driven between the states of different values of F (F_i≠ F_f), and thereby, only the vector polarizability contribution remained in Eq. (<ref>). Therefore, it is the vector polarizability β that enters the interference term with E1_PNC. 
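A toy numerical illustration of this amplification (arbitrary amplitudes of our own choosing, not the actual Cs values) shows how the normalized rate difference isolates the small parity-violating piece:

```python
# Toy illustration of the Stark-interference readout: the PNC amplitude flips
# sign under a parity reversal, the Stark-induced amplitude does not.
A_stark = 1.0e-5          # Stark-induced amplitude (arbitrary units)
A_pnc   = 1.0e-11         # parity-violating amplitude (arbitrary units)

R_plus  = (A_stark + A_pnc) ** 2
R_minus = (A_stark - A_pnc) ** 2

asym = (R_plus - R_minus) / (R_plus + R_minus)
print(asym, 2 * A_pnc / A_stark)   # both ~ 2e-6: asymmetry ~ 2*A_pnc/A_stark
```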
Explicitly, the PNC amplitude was extracted from the normalized difference in the two transition rates, R^+-R^-/R^++R^-∝Im(E1_PNC)/β . Next, we specify the geometry of the Boulder experiment <cit.>. In the setup of the Boulder experiment, a ^133Cs atomic beam travels along the z-axis and an externally-applied magnetic field is aligned along the beam propagation direction, defining the quantization axis. Before entering the excitation-laser interaction region, the Cs atoms are optically pumped into the “stretched” hyperfine sub-levels of the 6S_1/2 ground state, either F_i=3, M_i=±3 or F_i=4, M_i=±4. The transitions to the 7S_1/2 hyperfine manifold are driven by a standing wave laser with the cavity axis aligned along the y-axis. The excitation laser field E_L is elliptically polarized, E_L=ℰ_L^zž̂+iℰ_L^Ix̂̌̂. Finally, a static and uniform electric field Ě_S=ℰ_S^xx̂̌̂ is aligned along the x-axis. Having reviewed the Boulder experiment, we now examine the effect of our newly-introduced tensor transition polarizability γ, as well as the nuclear-spin-dependent corrections to α and β, and assess whether they affect the extraction of the PNC amplitude E1_PNC. To this end, we rewrite Eq. (<ref>) as A_i→f^Stark = α^F_i→ F_fE_L·E_Sδ_F_fF_iδ_M_fM_i +iβ^F_i→ F_f(E_L×E_S)·⟨f|σ̌|i⟩ +γ^F_i→ F_fw_2( ε̂̌̂,ě̂) ℰ_Lℰ_S f{I⊗I}^(2)i , where we have again used A_i→f^Stark = ℰ_Lℰ_S a_i→f. The reduced matrix element f{I⊗I}^(2)i is again given by Eq. (<ref>) and the polarization and state-dependent factor is, explicitly (c.f. Eq. (<ref>)) w_2( ε̂̌̂,ě̂) =∑_M_Q=-2^2(-1)^M_Q+F_f-M_f ×F_f2F_i-M_f-M_QM_i(ε̂̌̂⊗ě̂)^(2)_M_Q . The components of the rank-two compound tensor of electric field polarizations are (ε̂̌̂⊗ě̂)^(2)_M_Q = ∑_μ,λ=-1^1 C^2M_Q_1μ1λε̂̌̂_μě̂_λ . Note that the selection rules for the 3j-symbol fix M_Q = M_i - M_f in Eq. (<ref>). Moreover, since we are interested in transitions between stretched hyperfine states |F,M_F=± F⟩ with F_i = F_f±1, only terms with M_Q = ± 1 survive in Eq. (<ref>). For the Boulder experiment where ε̂̌̂ = ε_L^z ž̂ + i ε_L^I x̂̌̂ and ě̂ = x̂̌̂, we find the needed components of the second-rank tensor to be (ε̂̌̂⊗ě̂)^(2)_± 1 = ∓1/2ε_L^z. Then the Stark-induced amplitude for transitions between stretched states with F_i =F_f±1 can be simplified to A^Stark_i→f =β^F_i → F_fℰ_L^zℰ_S^xC^F_fM_f_F_iM_f±1 ±γ^F_i → F_fℰ^z_Lℰ^x_SU^F_fM_f_F_iM_f±1/2 , where ℰ_L^z and ℰ_S^x are the components of the laser and the applied DC electric fields, respectively. The coefficients C^F_fM_f_F_iM_f±1 are defined as C^F_fM_f_F_iM_f±1 =(-1)^I+S+F_i+1/2√(3) [F_f,F_i]^1/2 ×1/2F_fIF_i1/21F_f1F_i-M_f∓1M_f ±1 , and are tabulated in Ref. <cit.>. Here we introduced a similar coefficient, U^F_fM_f_F_i M_f±1 =(-1)^F_f-M_fF_f2F_i-M_f∓1M_f±1 ×⟨F_f||{I⊗I}^(2)||F_i⟩ , which specifies the dependence of the tensor contribution on the magnetic quantum numbers. The “±” signs appearing in the C^F_fM_f_F_iM_f±1 and U^F_f M_f_F_iM_f±1 factors indicate the values of the magnetic quantum numbers for the initial state, given a fixed final state value of M_f. The “±” sign preceding the γ term originates from the rank-two compound tensor of the electric fields (ε̂̌̂⊗ě̂)^(2)_M_Q when the value of M_Q is changed from +1 to -1. The values of the angular factors C^F_fM_f_F_iM_f±1 and U^F_fM_f_F_iM_f±1 relevant to our computation are, explicitly C^4-4_3-3 =C^44_33=-C^33_44=-C^3-3_4-4=√(7/8) , U^4-4_3-3 =-U^44_33=U^33_44=-U^3-3_4-4=-42√(3) . 
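As a cross-check of these angular factors, the rank-two polarization components and the value of U^44_33 quoted above can be reproduced with sympy. The script below is our own verification, not part of the original analysis; it takes the reduced matrix element 126√5 from the earlier section as input, and the phase (-1)^(F_f-M_f) equals one for F_f=M_f=4.

```python
from sympy import I as imag, sqrt, symbols, simplify
from sympy.physics.wigner import clebsch_gordan, wigner_3j

eLz, eLI = symbols("eLz eLI", real=True)

def spherical(Ax, Ay, Az):      # spherical (helicity) components of a vector
    return {0: Az, +1: -(Ax + imag*Ay)/sqrt(2), -1: (Ax - imag*Ay)/sqrt(2)}

eps = spherical(imag*eLI, 0, eLz)    # laser polarization eps = eLz*z_hat + i*eLI*x_hat
e   = spherical(1, 0, 0)             # static field along x_hat

def compound(Q, M):                  # (eps (x) e)^(Q)_M
    return sum(clebsch_gordan(1, 1, Q, mu, M - mu, M) * eps[mu] * e[M - mu]
               for mu in (-1, 0, 1) if abs(M - mu) <= 1)

assert simplify(compound(2, +1) + eLz/2) == 0    # equals -eLz/2
assert simplify(compound(2, -1) - eLz/2) == 0    # equals +eLz/2

# U^{44}_{33} for |3,3> -> |4,4>, using <4||{I (x) I}^(2)||3> = 126*sqrt(5)
U_44_33 = wigner_3j(4, 2, 3, -4, 1, 3) * 126*sqrt(5)
assert simplify(U_44_33 - 42*sqrt(3)) == 0
```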
It is clear that these factors satisfy the following identities C^F_fM_f_F_iM_f±1 =C^F_f-M_f_F_i-M_f∓1 , U^F_fM_f_F_iM_f±1 =-U^F_f-M_f_F_i-M_f∓1 . The measured quantities <cit.> are the transition rates R^±, Eqs. (<ref>), whose computation involves squaring out the sum of the Stark and PNC transition amplitudes. Our generalized Stark-induced amplitude is given by Eq. (<ref>). The simplified PNC amplitude (<ref>) reads <cit.> A^PNC_i→f =∓Im(E1_PNC)ℰ_L^IC^F_fM_f_F_iM_f±1δ_M_i, M_f±1 . Note that while A^Stark_i→f depends on the z component of the laser field, A^PNC_i→f depends on ℰ_L^I=|ℰ_L^x|. Then the generalized rates R^+ and R^- for the two transitions of opposite handedness are given by R^+ ≡ R(F_i,M_f-1→F_f,M_f) =β^2 (ℰ_S^x ℰ_L^z)^2 (C^F_fM_f_F_iM_f-1)^2 -βγ(ℰ_S^x ℰ_L^z)^2 C^F_fM_f_F_iM_f-1U^F_fM_f_F_iM_f-1 +2βIm(E1_ PNC)ℰ_S^xℰ_L^zℰ_L^I(C^F_fM_f_F_iM_f-1)^2 -γℰ_L^zℰ_S^xℰ_L^IIm(E1_ PNC)C_F_iM_f-1^F_fM_fU^F_fM_f_F_iM_f-1 , R^- ≡ R(F_i,M_f+1→F_f,M_f) = β^2(ℰ_S^xℰ_L^z)^2 (C^F_fM_f_F_iM_f+1)^2 +βγ(ℰ_S^x ℰ_L^z)^2 C^F_fM_f_F_iM_f+1U^F_fM_f_F_iM_f+1 -2β Im(E1_ PNC)ℰ_S^xℰ_L^zℰ_L^I(C^F_fM_f_F_iM_f+1)^2 -γℰ_L^zℰ_S^xℰ_L^IIm(E1_ PNC)C_F_iM_f+1^F_fM_fU^F_fM_f_F_iM_f+1 . In the above expressions, F_i and F_f remain fixed in R^±, while the sign of M_f flips when going from Eq. (<ref>) to Eq. (<ref>). We remind the reader that we focus on the transitions between stretched hyperfine states. For example, for the |3, 3⟩→|4, 4⟩ transition[Here we used the abbreviation |F_i, M_i⟩→|F_f, M_f⟩, suppressing the electronic term parts of the wave-functions.] one would use the R^+ expression, while the matching transition of opposite handedness would be |3, -3⟩→|4, -4⟩ with the R^- expression to be used. We will distinguish between four transition rates R^+_3→4, R^-_3→4, R^+_4→3, and R^-_4→3 referring to the transitions |3, 3⟩→|4, 4⟩, |3, -3⟩→|4, -4⟩, |4, -4⟩→|3, -3⟩, and |4, 4⟩→|3, 3⟩, respectively. For the sake of clarity, we have also suppressed the F_i → F_f superscripts in various polarizabilities. Following Ref. <cit.>, we are interested in the rate ratio r_F_i → F_f≡( R^+-R^-/R^++R^-)_F_i → F_f as it separates out the PNC amplitude. With the help of Eq. (<ref>) and the identities (<ref>), the rate ratio generalizes to r_F_i → F_f = 1+ γ^F_i → F_f/2β^F_i → F_f U^F_fM_f_F_iM_f+1/C^F_fM_f_F_iM_f+1/1+γ^F_i → F_f/β^F_i → F_f U^F_fM_f_F_iM_f+1/C^F_fM_f_F_iM_f+12Im(E1_PNC^F_i → F_f)ℰ_L^I/β^F_i → F_fℰ_L^zℰ_S^x , where we have emphasized the nuclear-spin dependence of the PNC amplitude by reintroducing the F_i → F_f superscript into β and γ. In the limit of vanishing tensor polarizabilities γ^F_i → F_f and β being independent of F_i and F_f, Eq. (<ref>) reproduces the Boulder experiment's expression <cit.> r_F_i → F_f^Boulder = 2 Im(E1_PNC^F_i → F_f)ℰ_L^I/β^[2]ℰ_L^zℰ_S^x . From Eq. (<ref>), the ratios r_F_i → F_f for the F_i=3→F_f=4 and F_i=4→F_f=3 transitions are, explicitly r_3→4 =2-12√(42)γ^3→4/β^3→4/1-12√(42)γ^3→4/β^3→ 4Im(E1_ PNC^3→4)ℰ^I_L/β^3→ 4ℰ_L^zℰ_S^x , r_4→3 =2+12√(42)γ^4→3/β^4→3/1+12√(42)γ^4→3/β^4→3Im(E1_ PNC^4→3)ℰ^I_L/β^4→3ℰ_L^zℰ_S^x , which can be further simplified to r_3→4 ≈ r_3 → 4^Boulder(1- δβ^3→ 4/β^[2] +6 √(42)γ^3→4/β^[2]) , r_4→3 ≈ r_4 → 3^Boulder(1 - δβ^4→ 3/β^[2] -6√(42)γ^4→3/β^[2]) , where the last two terms in the brackets are the F-dependent corrections to the Boulder expressions. With our results from Table <ref>, the corrections evaluate to -5× 10^-5 and 3× 10^-5 for the 3→4 and the 4→3 transitions, respectively. 
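The simplified forms follow from a first-order expansion of the exact rate ratios in the small quantities δβ and γ. A short sympy check of this expansion (our own illustration, with ε as a formal bookkeeping parameter for the first-order corrections):

```python
from sympy import symbols, sqrt, simplify

beta2, dbeta, gamma, eps = symbols("beta2 delta_beta gamma epsilon", positive=True)

beta  = beta2 + eps*dbeta                      # beta^{3->4} = beta^[2] + delta_beta
g     = eps*gamma                              # gamma^{3->4} is itself first order
exact = (2 - 12*sqrt(42)*g/beta) / (1 - 12*sqrt(42)*g/beta) / beta
lin   = (2/beta2) * (1 - eps*dbeta/beta2 + 6*sqrt(42)*eps*gamma/beta2)

# the two expressions agree through first order in the small corrections
assert simplify((exact - lin).series(eps, 0, 2).removeO()) == 0
```

The check confirms that the exact and linearized expressions agree through first order.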
These are smaller than the experimental uncertainties in the Im(E1_PNC^F_i→F_f) determination. The PNC amplitudes E1_PNC include both the nuclear-spin-independent and nuclear-spin-dependent contributions. The largest impact of our analysis is on the extraction of the nuclear-spin-dependent part (which includes the anapole moment contribution). If we neglect the hyperfine corrections to the transition polarizabilities, the anapole moment contribution is extracted as <cit.> Im(E1_PNC^anapole)^Boulder/β^[2]= (r_3→4^Boulder-r_4→3^Boulder)ℰ_S^xℰ_L^z/2ℰ^I_L , where the authors of Ref. <cit.> associated the measured rates with r_F_i→ F_f^Boulder. The measured rates r_F_i → F_f are, however, more accurately given as in Eq. (<ref>). To account for the nuclear-spin dependent effects on transition polarizabilities, we thus reexpress r^ Boulder_3→4 and r^ Boulder_4→3 in terms of r_3→4 and r_4→3 using Eqs. (<ref>) and use these “adjusted” Boulder rates in Eq. (<ref>). With our semi-empirical values from Table <ref>, we find r^ Boulder_3→4=1.00005 r_3→ 4 and r^ Boulder_4→3=0.99997 r_4→ 3, respectively, which cause the extracted value of Im(E1_PNC^4→3)/β^[2] to decrease by 3×10^-5 and Im(E1_PNC^3→4)/β^[2] to increase by 5×10^-5. Because both Im(E1_PNC^4→3)/β^[2] and Im(E1_PNC^3→4)/β^[2] were reported <cit.> at about 1.6 mV/cm, this means that the anapole contribution in our evaluation is slightly smaller, by about 1×10^-4 mV/cm. The reported <cit.> value of the anapole moment is 0.077(11) mV/cm, so our correction of 1×10^-4 mV/cm is below the uncertainty. This suggests that the impact due to the spin-dependent effects on polarizabilities is negligible at the current level of experimental uncertainty. §.§ The effect of hyperfine-mediated polarizabilities on the ratio analysis We now turn our attention to another Boulder experiment <cit.> which used the Stark-interference technique to determine the ratio of scalar and vector polarizabilities β/α in Cs[We note in passing that the authors of Ref. <cit.> refer to β as a “tensor” polarizability, while we call it “vector” to be consistent with the literature and to distinguish it from the true tensor γ contribution.]. This measured ratio is important in deducing the value of β through the more computationally reliable determination of α (see, e.g., Ref. <cit.> and the references therein). An accurate value of β is required for extracting the PNC amplitude from the APV measurement as described in Sec. <ref>. In the β/α experiment <cit.>, the ^133Cs atoms were spin-polarized by an external magnetic field aligned along the y-axis. This magnetic field defined the quantization axis that is different from that of the APV experiment described in Sec. <ref>. To simplify our analysis, we thus define a new coordinate system (x', y',z') obtained by a rotation from the (x, y, z) laboratory frame defined in Sec. <ref>. The unit vectors in this new system are related to those in the frame in Sec. <ref> as follows: ž̂'=ŷ̌̂, ŷ̌̂'=x̂̌̂, and x̂̌̂'=ž̂. This transformation aligns the quantization axis with ž̂' while preserving the handedness of the coordinate system. As a result, the electric fields in this new reference frame are given by ℰ̌_S'=ℰ_S^xŷ̌̂' and ℰ̌_L'=ℰ_L^zx̂̌̂'+iℰ^I_Lŷ̌̂'. The ^133Cs atoms in the α/β experiment <cit.> underwent transitions from the initial 6S_1/2, F_i=3, M_i=3 state to the final 7S_1/2, F_f=3, M_f=3 state. This particular choice of states guarantees a nonvanishing contribution of the scalar polarizability to the Stark-induced amplitude, Eq. 
(<ref>). Then for the described experimental geometry, one has A_i→f^Stark= iα^3→3 ℰ^I_Lℰ_S^x+iβ^3→3 ℰ_S^xℰ_L^zC^F_f, F_i_M_f,M_i + i K γ^3→3ℰ_S^xℰ^I_L , where K ≡ -i w_2( ε̂̌̂,ě̂) F_f=3{I⊗I^(2)}F_i=3. Explicitly, since F_f=3{I⊗I}^(2)F_i=3=-42√(35) (see Eq. (<ref>)) and w_2( ε̂̌̂,ě̂) =- (i/6)√(5/14), K = 35/√(2). Here, the angular coefficient C^F_f F_i_M_f M_i is defined as C^F_fF_i_M_fM_i=g_F⟨M_F⟩ with the gyromagnetic ratio g_F=-1/4 and ⟨M_F⟩ being a population average over all the possible magnetic quantum numbers <cit.>. In contrast to the APV experiment of Sec. <ref>, the parity reversal in the α/β experiment was effected by switching the laser polarization from the left to right elliptical polarization, which is equivalent to reversing the sign of the ℰ^I_L in Eq. (<ref>). This reversal flips the sign of the scalar and tensor contributions, while preserving the sign of the vector term in Eq. (<ref>). It is clear that the interference term extracted in the experiment contains the combination (α^3→3 +K γ^3→3) β^3→3. This means that we need to interpret α/β→(α/β)_eff = α^3→3 +K γ^3→3/β^3→3 , as being the ratio measured by Ref. <cit.>. To prove Eq. (<ref>), we recall that the experiment <cit.> employed a complementary modulation of the DC electric field strength synchronous with the elliptical polarization reversals. Two Stark-induced rates were measured, R^+ =|α^3→3ℰ^I_L+β^3→3ℰ_L^zC^F_f F_i_M_fM_i+K γ^3→3ℰ^I_L |^2 (ℰ_S,1^x)^2 , R^- =|α^3→3ℰ^I_L-β^3→3ℰ_L^zC^F_fF_i_M_fM_i+ K γ^3→3ℰ^I_L|^2 (ℰ_S,2^x)^2 , where ℰ_S,1^x and ℰ_S,2^x stand for the magnitudes of the two DC electric fields, whereas ℰ_L^z and ℰ_L^I≡Im(ℰ_L^x) are the magnitudes of the two components of the laser field driving the transition. The fields ℰ_S,1^x and ℰ_S,2^x were adjusted until there was no modulation of the rate signal under reversals of the laser field's polarization. This amounts to equating the two rates in Eqs. (<ref>), thus leading to ℰ_S,2^x-ℰ_S,1^x/ℰ_S,2^x+ℰ_S,1^x= β^3→3/α^3→3+ Kγ^3→3ℰ_L^z/ℰ^I_LC^F_fF_i_M_fM_i . The inverse of the first factor on the r.h.s. of Eq. (<ref>), was extracted using the above equation and identified as α/β in Ref. <cit.>. As mentioned above, the measured quantity is in fact (α/β)_eff=(α^3→3 +K γ^3→3) / β^3→3. To the best of our knowledge, all the previous literature has identified the measured <cit.> α/β ratio with α^[2]/β^[2], neglecting the hyperfine corrections to transition polarizabilities. We extract this ratio from the measured value <cit.> (α/β)_eff = -9.905(11) as α^[2]/β^[2]≈(α/β)_eff( 1 - δα^3→3/α^[2] - K γ^3→3/α^[2] + δβ^3→3/β^[2]) . With the recommended values of α^[2] and β^[2] as in Eqs. (<ref>) and the hyperfine corrections from Table <ref>, the corrective factor on the r.h.s of Eq. (<ref>) evaluates to (1+1.3 × 10^-4), equivalent to a ∼ 0.01% fractional correction to the value of α^[2]/β^[2]. The inclusion of the hyperfine correction thus modifies the last significant digit of the reported result, leading to α^[2]/β^[2] = -9.906(11) , but is below the 0.1% accuracy of the experiment <cit.>. § CONCLUSION In summary, we have introduced and evaluated the hyperfine corrections to the polarizabilities, which include the non-vanishing tensor transition polarizability γ. These HFI-mediated effects lead to a slightly smaller anapole moment extracted from the measurements of atomic parity violation by the Boulder group <cit.>. However, our computed correction is insufficient to resolve the tension with the nuclear physics interpretation and data. 
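Two of the numbers above are straightforward to verify independently: the constant K and the shifted ratio. The brief check below is our own, using the reduced matrix element and the w_2 value quoted in the preceding section.

```python
from sympy import I as imag, S, sqrt, simplify

# K = -i * w_2 * <3||{I (x) I}^(2)||3>, with w_2 = -(i/6)*sqrt(5/14)
# and the reduced matrix element -42*sqrt(35) quoted earlier.
K = -imag * (-(imag/6) * sqrt(S(5)/14)) * (-42*sqrt(35))
assert simplify(K - 35/sqrt(2)) == 0

# shift of the measured ratio: -9.905 times the corrective factor (1 + 1.3e-4)
print(-9.905 * (1 + 1.3e-4))     # ~ -9.906
```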
We also showed that the effects of the tensor transition polarizability γ and hyperfine corrections to the scalar, α, and vector, β, transition polarizabilities are minor but not negligible for the determination of the α/β ratio from the measurements <cit.>. As the accuracy of experiments improves, our analysis should prove useful for interpretation of future measurements. § ACKNOWLEDGEMENTS This work was supported in part by the U.S. National Science Foundation grants PHY-1912465 and PHY-2207546, by the Sara Louise Hartman endowed professorship in Physics, and by the Center for Fundamental Physics at Northwestern University. apsrev4-2
http://arxiv.org/abs/2307.04606v2
20230710145046
Well-Orderedness of the Bashicu Matrix System
[ "Samuel Vargovčík" ]
math.LO
[ "math.LO", "03E10" ]
Well-Orderedness of the Bashicu Matrix System Samuel Vargovčík ========================================================================= The Bashicu Matrix System is a recursive system of ordinal notations created by the user BashicuHyudora of the Japanese Googology Wiki. In this paper, we prove that the Bashicu Matrix System is well-ordered. § INTRODUCTION The Bashicu Matrix System (BMS) is a recursive system of ordinal notations with a large order type created by the user BashicuHyudora of the Japanese Googology Wiki <cit.>. Originally, it was defined informally in pseudocode based on the programming language BASIC, and the following is the agreed-upon formalization: An array is a sequence of equal-length sequences of natural numbers, i.e. an element of (ℕ^n)^m for some n,m∈ℕ. For every array A∈(ℕ^n)^m, the columns of A are its elements, and for each n'<n, the n'-th row of A is the sequence of length m such that for each m'<m, the m'-th element of the n'-th row is the n'-th element of the m'-th column. We will denote concatenation of sequences by +. Let A be any array and n be any natural number. For every m smaller than the length of A's columns and every i smaller than the length of A, the m-parent of the i-th column is the last column before it whose m-th element is smaller than the m-th element of the i-th column, and which is an (m-1)-ancestor of the i-th column if m>0, if such a column exists. If no such column exists, then the i-th column does not have an m-parent. The m-ancestors (also called strict m-ancestors) of a column are its m-parent and the m-ancestors of its parent. The non-strict m-ancestors of a column are the column itself and its m-ancestors. If A is empty, then the expansion of A at n is A[n]=A. Otherwise let C be the last element of A and let m_0 be maximal such that C has an m_0-parent, if such an m_0 exists, otherwise m_0 is undefined. Let arrays G,B_0,B_1,...,B_n be such that: * A=G+B_0+(C). * The first element of B_0 is the m_0-parent of C if m_0 is defined and otherwise B_0 is empty. * For each D in B_0 and m<m_0, if the first column in B_0 is D or an m-ancestor of D, then the m-th element of D is said to ascend. * B_i is a copy of B_0, but for each ascending element of each column in B_0, its copy in B_i is increased by i·((m-th element of C)-(m-th element of the first column in B_0)), where m is the index of the row in which that element is. Then the expansion A[n] of A at n is G+B_0+B_1+...+B_n, with all rows of zeroes at the bottom removed. BMS is the closure of {((0,0,...,0,0_n),(1,1,...,1,1_n)) : n∈ℕ} under expansion at each natural number, ordered by the ⊆-minimal partial order such that A[n]≤ A for each n∈ℕ and A∈ BMS. Here, a partial order ≤ is the set of pairs (x,y) such that x≤ y. This is the fourth official version of the system, which is why it is also referred to as BM4. The previous versions BM1, BM2 and BM3 were not well-founded, but as we prove below, BM4 is well-founded. There are also unofficial versions, of which BM2.3 is strongly believed to be equivalent to BM4 <cit.>, and BM3.3 is also notable for its similarity to BM4 and temporarily more predictable behavior. However, they are not the focus of this paper, so from now on, we will only refer to BM4. The question of whether BMS is well-ordered has been an open problem for almost 8 years, and it was among the most significant open problems in googology. 
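For readers who want to experiment with the system, the definition above translates almost line by line into code. The following Python sketch is our own transcription of the definition, not an optimized or official implementation; it computes m-parents, m-ancestors and the expansion A[n], including the removal of trailing zero rows. Arrays are represented as lists of equal-length tuples (the columns).

```python
def parent(A, i, m):
    """Index of the m-parent of column i in A, or None if it has none."""
    allowed = ancestors(A, i, m - 1) if m > 0 else set(range(i))
    for j in range(i - 1, -1, -1):
        if A[j][m] < A[i][m] and j in allowed:
            return j
    return None

def ancestors(A, i, m):
    """Indices of the strict m-ancestors of column i in A."""
    out, j = set(), parent(A, i, m)
    while j is not None:
        out.add(j)
        j = parent(A, j, m)
    return out

def expand(A, n):
    """The expansion A[n] of the array A at the natural number n."""
    if not A:
        return A
    c, rows = len(A) - 1, len(A[0])
    C = A[c]
    m0 = max((m for m in range(rows) if parent(A, c, m) is not None), default=None)
    if m0 is None:
        result = [list(col) for col in A[:-1]]        # B_0 is empty
    else:
        p = parent(A, c, m0)
        G, B0 = A[:p], A[p:c]
        # the m-th entry of B0[j] ascends iff m < m0 and the first column of B0
        # is that column or an m-ancestor of it (ancestry computed inside A)
        asc = [[m < m0 and (j == 0 or p in ancestors(A, p + j, m))
                for m in range(rows)] for j in range(len(B0))]
        result = [list(col) for col in G]
        for i in range(n + 1):                        # copies B_0, ..., B_n
            for j, col in enumerate(B0):
                result.append([col[m] + i * (C[m] - B0[0][m]) if asc[j][m]
                               else col[m] for m in range(rows)])
    while rows and result and all(col[rows - 1] == 0 for col in result):
        rows -= 1                                     # drop zero rows at the bottom
    return [tuple(col[:rows]) for col in result]

# sanity checks: ((0,0),(1,1))[1] = ((0),(1)) and (0)(1)(2)[2] = (0)(1)(1)(1)
assert expand([(0, 0), (1, 1)], 1) == [(0,), (1,)]
assert expand([(0,), (1,), (2,)], 2) == [(0,), (1,), (1,), (1,)]
```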
Although BMS is yet to be used outside of this field, its simplicity and large order type provide hope for future uses in proof theory and model theory. Before this paper, the research about BMS has brought the following results. BMS restricted to arrays with one row is also called the Primitive Sequence system (or PrSS), and has a simple isomorphism with the iterated base-ω Cantor normal form - intuitively, each column represents a single ω in the string, the element of the column is the "height" of the ω (the number of exponents it appears in), and distinct ωs with the same height are separated by a + at the same level, unless there is an ω between them with a lower height. ωs that do not have any ω in their exponent in the resulting string are exponentiated to 0. This isomorphism can be proven easily by transfinite induction on the Cantor normal form expression, thus the order type of PrSS is ε_0. BMS restricted to arrays with two rows is also called the Pair Sequence System (or PSS), and was proven well-founded in 2018 <cit.>, with its order type shown to be the proof-theoretic ordinal of Π^1_1-CA_0, i.e. the countable collapse of ω_ω using standard collapsing functions (such as Buchholz's function in this case). If we abbreviate ⟨ L_α,∈⟩≺_Σ_1⟨ L_β,∈⟩ as α<_0β, then informal estimates say that the order type of the set of arrays in BMS smaller than ((0,0,0),(1,1,1),[0](2,2,2)) is most likely the supremum of, for each n, a recursive collapse (using standard collapsing functions) of the smallest ordinal α_0 for which there exist α_1,α_2,...,α_n such that α_0<_0α_1<_0α_2<_0...<_0α_n. The order type of the entirety of BMS has not been carefully estimated in terms of ordinal functions yet, but is expected to be the supremum of, for each n, a recursive collapse of the smallest ordinal α for which there exists β with ⟨ L_α,∈⟩≺_Σ_n⟨ L_β,∈⟩, using collapsing functions that may be standard in the future. Subjectively, BMS is a very elegant way to represent large recursive ordinals. With enough formalization effort, it could give rise to a system of recursively large ordinals. This system would be similar to stability in structure and, as far as we know, similar to stability in scale too, but perhaps easier to understand or easier to use for some purposes such as ordinal analysis. We utilize this similarity to prove that BMS is well-ordered. Specifically, we first prove that BMS is totally ordered and the order is precisely the lexicographical order. We then prove that a certain reflection property holds for stable ordinals. We show that this property allows us to map elements of BMS to ordinals while preserving the order. Using this order-preserving function from BMS to Ord, any infinite descending sequence in BMS would be mapped to an infinite descending sequence in Ord, which cannot exist by definition, thus BMS is well-ordered. § THE PROOF Given that a property holds for every element of a set X, and that if it holds for x then it holds for f(x) for each f in some set F of functions, it is easy to see from the definition of closure that the property holds for all elements of the closure of X under the functions f∈ F. We consider this fact trivial enough to be used implicitly. It is clear that for A,A'∈ BMS, A'<A iff A is non-empty and A'=A[n_0][n_1]...[n_m] for some m,n_0,n_1,...,n_m∈ℕ. For all A∈ BMS and n∈ℕ, A[n] is lexicographically smaller than A (with the columns also compared lexicographically). 
Using the variable names from the definition of BMS, we have A=G+B_0+(C) and A[n]=G+B_0+B_1+...+B_n. Then A[n]<_lexA iff B_1+B_2+...+B_n<_lex(C), which is trivial if m_0 is undefined (the empty sequence is lexicographically smaller than all other sequences, including (C)), and otherwise equivalent to the first column in B_1 being lexicographically smaller than C. Let R_i be the first column in B_i. Since R_0 is the m_0-parent of C, it is an m-ancestor of C for each m≤ m_0, thus the m-th element of R_0 is less than the m-th element of C. By definition, R_1 is a copy of R_0, but for each m<m_0, the m-th element is increased either by 0 or by the difference between itself and the m-th element of C. Then it is less than or equal to the m-th element of C, so the sequence of the first m_0 elements of R_1 is pointwise smaller than or equal to the sequence of the first m_0 elements of C (in fact, it is equal, but that is not necessary for this proof). However, the m_0-th element of R_1 is necessarily equal to the m_0-th element of R_0 since m_0<m_0 is false, thus it is strictly smaller than the m_0-th element of C. Therefore R_1<_lexC, which implies B_1+B_2+...+B_n<_lex(C), and thus A[n]<_lexA. For all A,A'∈ BMS, A'<A implies A'<_lexA. BMS is totally ordered. For every non-empty A∈ BMS, A[0] is simply A without the last column, as it is equal to G+B_0, and thus A=A[0]+(C). Then it is trivial to prove by induction that for all A,A'∈ BMS, if A' is a subsequence of A, then A'=A[0][0]...[0][0]_n for some n∈ℕ, and thus A'≤ A. Together with A[n] being a subsequence of A[n+1] for all A∈ BMS and n∈ℕ, this also means that for all A,A'∈ BMS and n∈ℕ, A[n]≤ A[n+1], and if A[n]<A'≤ A[n+1], then A[n]≤ A'[0]. This implies that if some subset X of BMS is totally ordered, then X∪{A[n] : A∈ X n∈ℕ} is also totally ordered. By induction, it is clear that if X⊆ BMS is totally ordered, then X∪{A[n_0] : A∈ X n_0∈ℕ}∪{A[n_0][n_1] : A∈ X n_0,n_1∈ℕ}∪...∪{A[n_0][n_1]...[n_m] : A∈ X n_0,n_1,...,n_m∈ℕ} is totally ordered for each m∈ℕ. Let X_0={((0,0,...,0,0_n),(1,1,...,1,1_n)) : n∈ℕ}. Since each A∈ BMS is in {A”[n_0][n_1]...[n_m] : A”∈ X_0 n_0,n_1,...,n_m∈ℕ} for some m∈ℕ, it is obvious that if X_0 is totally ordered, then for all A,A'∈ BMS, there's some m∈ℕ such that A,A'∈ X_0∪{A”[n_0] : A”∈ X_0 n_0∈ℕ}∪{A”[n_0][n_1] : A”∈ X_0 n_0,n_1∈ℕ}∪...∪{A”[n_0][n_1]...[n_m] : A”∈ X_0 n_0,n_1,...,n_m∈ℕ}, which is totally ordered, and therefore A,A' are comparable. So if X_0 is totally ordered, then BMS is totally ordered. It is now sufficient to prove that X_0 is totally ordered. This is easy, since ((0,0,...,0,0_n+1),(1,1,...,1,1_n+1))[1] is trivially ((0,0,...,0,0_n),(1,1,...,1,1_n)), and thus by induction, for each n<m∈ℕ, ((0,0,...,0,0_n),(1,1,...,1,1_n))=((0,0,...,0,0_m),(1,1,...,1,1_m))[1][1]...[1][1]_m-n<((0,0,...,0,0_m),(1,1,...,1,1_m)), and all elements of X_0 are of this form, so all elements of X_0 are pairwise comparable. The ordering of BMS coincides with the lexicographical ordering with columns compared lexicographically. 
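Using the expand() sketch given after the definition, the lemma can also be spot-checked numerically; Python's ordering of lists of tuples is exactly the column-lexicographic order used here:

```python
# Spot check: each expansion is lexicographically smaller than its parent array.
A = [(0, 0), (1, 1)]
for n in (2, 1, 3, 0, 2):
    B = expand(A, n)
    assert B < A          # A[n] <_lex A
    A = B
```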
Let A be a non-empty array and n be a natural number, let G,B_0,B_1,...,B_n,m_0 be as in Definition <ref>, and let l_0,l_1 be the lengths of G,B_0.(i) For all i<l_0, j<l_1 and k∈ℕ, in A[n], the i-th column in G is a k-ancestor of the j-th column in B_0 iff it is a k-ancestor of the j-th column in B_n.(ii) For all i,j<l_1 and k∈ℕ, the i-th column in B_0 is a k-ancestor of the j-th column in B_0 iff the i-th column in B_n is a k-ancestor of the j-th column in B_n.(iii) If n>0, then for all i<l_1 and k<m_0, in A, the i-th column in B_0 is a k-ancestor of the last column of A iff in A[n], the i-th column in B_n-1 is a k-ancestor of the first column in B_n.(iv) For all 0<i<l_1 and k∈ℕ, in A[n], the k-parent of the i-th column in B_n is either in B_n or in G.(v) For all i,j<l_1 and k∈ℕ and n_0<n_1<n, in A[n], the i-th column in B_n_0 is a k-ancestor of the j-th column in B_n_1 iff it's a k-ancestor of the j-th column in B_n_1+1. We can prove this by induction on k. The proof is relatively straightforward, but tedious. The author recommends drawing the mentioned ancestry relations in order to see what is happening. Assume all 5 statements hold for all k'<k. For (ii), fix i and j. If j=0 then it is trivial, so we will only consider the case j>0. From the assumption, it follows that for all k'<k and i'<l_0, the i'-th column in B_0 is a k'-ancestor of the j-th column in B_0 iff the i'-th column in B_n is a k'-ancestor of the j-th column in B_n. Let I be the set of i' such that for all k'<k, the i'-th column in B_0 is a k'-ancestor of the j-th column in B_0. Since for all k'<k, k'-ancestry is a total order on the columns with indices in I, the k-parent of each such column is simply the last such column before it with a smaller k-th element. The k-th element of the j-th column in B_0 ascends iff the first column in B_0 is in I and is a k-ancestor of the j-th column in B_0, which is equivalent to the k-th element of the first column in B_0 being smaller than the k-th element of all columns between it and the j-th column in B_0, so it is also a k-ancestor of all other columns with indices in I. This means that either the k-th elements of all columns in B_0 with indices in I ascend or the k-th element of the j-th column in B_0 doesn't ascend. In the first case, the differences between the columns in B_n with indices in I are the same as in B_0, and since k'-ancestry relations between them are also the same as in B_0 for k'<k, k-ancestry must be the same too, because everything it depends on is the same. In the second case, since the j-th column doesn't ascend, in B_n, there trivially cannot be any k-ancestors of the j-th column that aren't copies of k-ancestors of the j-th column in B_0. Since this possibility requires that the first column in B_0 is not a k-ancestor of the j-th column, it is also not a k-ancestor of any k-ancestor of the j-th column, thus the k-th elements of the k-ancestors of the j-th column also don't ascend, and therefore the differences between them are the same, implying that the k-ancestry relations are preserved. Either way, (ii) holds for k. The above can trivially be extended to include the next copy of the first column in B_0, and then since the k-th element of the first column in B_1 is easily seen to be the same as C as long as k<m_0, (iii) holds for k. Then to prove (iv), we first observe that if for some k'<k, the k'-parent of the i-th column in B_n is in G, then all of its k'-ancestors are in G, and its k-parent must be one of its k'-ancestors so it is also in G. 
So we're left with the case that for all k'<k, the k'-parent of the i-th column in B_n is in B_n. If the first column of B_n is a k'-ancestor of the i-th column in B_n for all k'<k, and yet its k-parent is not in B_n or G, then the first column in B_n is not a k-ancestor of the i-th column in B_n. Therefore from (ii) for k, which we have already proven, we get that the k-parent of the i-th column in B_0 is not in B_0 (therefore it is in G), which also implies that the k-th element of the i-th column in B_0 does not ascend in the expansion of A, so it is equal to the k-th element of the i-th column in B_n. But from (ii) for all k'<k and the fact that the first column in B_n is a k'-ancestor of the i-th column in B_n for all k'<k, we get that the first column in B_0 is a k'-ancestor of the i-th column in B_0 for all k'<k. This, together with its k-parent being in G, means that for all columns in B_0 that are k'-ancestors of the i-th column in B_0 for all k'<k, their k-th element is at least as large as the k-th element of the i-th column in B_0, and therefore at least as large as the k-th element of the i-th column in B_n. This includes the first column in B_0, and since the k-th element of the first column in B_n is by definition at least as large as the k-th element of the first column in B_0, which is at least as large as the k-th element of the i-th column in B_n, which is by definition strictly larger than the k-th element of the k-parent of the i-th column in B_n, we get that the k-th element of the first column in B_n is strictly larger than the k-th element of the k-parent of the i-th column in B_n. With that, and due to the facts that k'-ancestry is a total order on the set of k'-ancestors of each column for each k', and that both the first column in B_n and the k-parent of the i-th column in B_n are k'-ancestors of the i-th column in B_n for every k'<k, and the latter is before the former, we get that the k-parent of the i-th column in B_n is also a k-ancestor of the first column in B_n. If k≥ m_0 (using variable names from Definition <ref>), then this is already a contradiction, because the m_0-parent of the first column in B_n is easily seen to be in G. Otherwise, let n'<n be the natural number such that the k-parent of the i-th column in B_n is in B_n'. From repeated applications of (iii) for k, which we have already proven, we get that the first column in B_n' is a k-ancestor of the first column in B_n, and therefore by k-ancestry being a total order on the set of k-ancestors of the first column in B_n, we get that the first column in B_n' is a k-ancestor of the k-parent of the i-th column in B_n, and thus is also a k-ancestor of the i-th column in B_n. This, however, by more repeated applications of (iii), implies that the first column in B_0 is a k-ancestor of the i-th column in B_n, which is in contradiction with the fact that the k-th element of the first column in B_0 is at least as large as the k-th element of the i-th column in B_n. Now, for (iv), we're left with the case that for some k'<k, the first column in B_n is not a k'-ancestor of the i-th column in B_n. However, if we choose a specific such k'<k, then by (iv) for k' we get that the k'-parent of every k'-ancestor in B_n of the i-th column in B_n is either in B_n or in G, from which it follows that all k'-ancestors of the i-th column in B_n are either in B_n or in G, and that includes the k-parent of the i-th column in B_n, proving (iv) for k. 
With (iv) proven for k, the proof of (ii) and (iii) for k can also be easily modified for relations between G and B_0 and between G and B_n, with all nontrivialities accounted for by (iv) for k: either the k-th element of the j-th column in B_0 ascends and the j-th column in B_n trivially has the first column in B_0 as a k-ancestor, thus the k-ancestors in G are simply the k-ancestors of that (by totality of k-ancestry on the set of k-ancestors of the j-th column in B_n), or the k-th element of the j-th column in B_0 doesn't ascend and B_n's copy C_n of the (j-th column in B_0)'s first non-strict k-ancestor C_0 in B_0 is easily seen to have the same k-parent as C_0, because the k-parents of C_0 and C_n are both in G, the k-th elements of C_0 and C_n are equal, and the sets of k'-ancestors of C_0 and of C_n are the same for every k'<k by (i) for k', thus the k-ancestors in G of both C_0 and C_n are that k-parent and its k-ancestors. Therefore (i) also holds for k. Finally, (v) can be proven for k by simply letting {n_2,n_3}={n_1,n_1+1} (the two options together give the proofs of both directions of (v)), and noticing that if the j-th column in B_n_2 has a k-ancestor in B_n_0, then the first column in B_n_2 must also be its k-ancestor (similarly to the reasoning near the end of the previous paragraph - using (iv) for the last k-ancestor in B_n_2 of the j-th column in B_n_2), and therefore by totality of k-ancestry on the set of k-ancestors of the j-th column in B_n_2, the i-th column in B_n_0 is a k-ancestor of the first column in B_n_2. Then if k≥ m_0, we get a contradiction, because the k-ancestors of the j-th column in B_n_2 are all in B_n_2 or G, as we've already proven, so it must be that k<m_0. In that case, by application of (iii) and either (depending on n_2-n_1) another application of the totality of k-ancestry on the set of k-ancestors of the j-th column in B_n_2 or an application of transitivity of k-ancestry, we get that the i-th column in B_n_0 is also a k-ancestor of the first column in B_n_3, and finally by an application of (ii) for k, we get that the first column in B_n_3 is a k-ancestor of the j-th column in B_n_3, so by transitivity of k-ancestry, the i-th column in B_n_0 is a k-ancestor of the j-th column in B_n_3, which concludes the proof of (v) for k. By induction, all 5 statements in the lemma always hold. We will abbreviate ⟨ L_α,∈⟩≼_Σ_n+1⟨ L_β,∈⟩ as α≤_nβ, and similarly for the strict versions of these relations. Here, L_α is the α-th level of the constructible hierarchy, and M≼_Σ_nN means that M is a Σ_n-elementary substructure of N. Let σ be the smallest ordinal α such that there exists an ordinal β with ∀ n∈ℕ(α<_nβ). For all α,β∈σ and n∈ℕ, if ω<α<_nβ, then for all finite X,Y⊆ Ord such that γ<α≤δ<β for all γ∈ X and δ∈ Y, there exists a finite Y'⊆ Ord and a bijection f: Y→ Y' such that for all γ∈ X, all δ_0,δ_1∈ Y, all k∈ℕ and all m<n: * γ<f(δ_0)<α * γ<_kδ_0⇒γ<_kf(δ_0) * δ_0<δ_1⇒ f(δ_0)<f(δ_1) * δ_0<_kδ_1⇒ f(δ_0)<_kf(δ_1) * δ_0<_mβ⇒ f(δ_0)<_mα We can prove this by constructing a Σ_n+1 formula that, when interpreted in L_β, asserts all the true instances of the statements on the left side of the implications, and when interpreted in L_α, asserts the corresponding instances of the statements on the right side of the implications. One small issue is the first assertion, which is unconditional. 
However, the f(δ_0)<α part is simply asserting that f(δ_0) exists in L_α, which will be done by existentially quantifying the variable, and since γ<α≤δ_0 is necessarily true, γ<f(δ_0) is equivalent to γ<δ_0⇒γ<f(δ_0), which is a conditional statement. We construct a formula with parameters γ_0,γ_1,...,γ_|X|-1, which are all the elements of X. Since they're ordinals smaller than α, they are in L_α, therefore we can use them as parameters in a formula that we want to reflect using the stability relation between α and β. Let φ_0(η,ξ) be a formula asserting η<ξ. Let φ_1(η,ξ,k) be a formula asserting η<_kξ. Let φ_2(η,k) be a formula asserting η<_kOrd, i.e. ⟨ L_η,∈⟩≺_Σ_k+1⟨ L,∈⟩. φ_0 is clearly Σ_0, as it is simply the atomic formula η∈ξ. This means it is Σ_n+1. φ_1 only needs to assert the existence of L_ξ, the defining characteristics of it (specifically that it is a level of L, which is simply V=L relativized to it, and that the ordinals in it are precisely the elements of ξ, which is trivially Σ_0), and then it needs to assert that φ_2(η,k) relativized to L_ξ holds. The relativization of a first-order formula to a set is trivially always Σ_0. Assuming φ_2 is first-order, the only unbounded quantifier in φ_1 is the one existentially quantifying L_ξ. Then φ_1 is Σ_1, which means it's also Σ_n+1. Finally, φ_2(η,k) is Π_k+1, as shown in <cit.> (Theorem 1.8), which means it is Σ_k+2, and therefore first-order. In all non-relativized uses of φ_2, we will require k<n, which means k+2≤ n+1, thus it is Σ_n+1. X and Y are finite, and all of their elements are smaller than σ so for each η,ξ∈ X∪ Y, there are only finitely many k for which φ_1(η,ξ,k) is true. Then there are finitely many instances of φ_0(γ_i,δ_j), φ_1(γ_i,δ_j,k), φ_0(δ_i,δ_j), φ_1(δ_i,δ_j,k) and φ_2(δ_i,m) with k∈ℕ and m<n, which are true when each δ_i is interpreted as the i-th element of Y. So their conjunction φ is a conjunction of finitely many Σ_n+1 formulae, therefore it is itself a Σ_n+1 formula. Then we only need a Σ_n+1 formula ψ asserting that all the δ_i are ordinals, which is trivial. Now, the formula ψφ is Σ_n+1, therefore the formula ∃δ_0,δ_1,...,δ_|Y|-1(ψφ) is also Σ_n+1. In L_β, the witnesses of that existential quantifier are the elements of Y, therefore the formula is true in L_β. Then by α<_nβ, it must be true in L_α, and since it encodes all the relations between elements of X, elements of Y and β that need to be reflected to relations between elements of X, elements of Y' and α, the witnesses of that formula in L_α form a set Y' that, together with the unique order isomorphism f: Y→ Y', satisfies the conditions in the lemma. Note that this reflection is similar to reflection in Patterns of Resemblance, and those could be used too. However, the author is not as experienced in working with Patterns of Resemblance, so it was easier to use stability. BMS is well-ordered. We will define a function o: BMS→ Ord in the following way. Consider an array A with length n. A stable representation of A is a function f: n→ Ord such that for all i,j<n, i<j⇒ f(i)<f(j) and for all m, if the i-th column of A is an m-ancestor of the j-th column of A, then f(i)<_mf(j). Let o(A) be the minimal α∈ Ord such that for some stable representation f of A, all outputs of f are smaller than α. 
This proof is similar to the proof of Lemma <ref> - we prove, by induction on the number of expansions needed to reach an array, that o is defined and order-preserving on all of BMS, by starting from X_0 and proving that if it holds for some Z, then it holds for Z∪{A[n] : A∈ Z n∈ℕ}, and using the fact that every pair A,A' of arrays is reached after finitely many applications of this induction step. Of course, o(A) is defined for A∈ X_0={((0,0,...,0,0_n),(1,1,...,1,1_n)) : n∈ℕ}, and it is easy to see that o(((0,0,...,0,0_n),(1,1,...,1,1_n)))<o(((0,0,...,0,0_m),(1,1,...,1,1_m))) iff n<m, thus o is also order-preserving in this set. Let Z be a set of arrays, on which o is order-preserving and defined for all of Z's elements. Let A∈ Z. If A is empty, then trivially for all n∈ℕ, o(A[n])=o(A)≤ o(A), and thus o(A[n]) is also defined and order is preserved. Otherwise, let f be a stable representation of A whose outputs are all smaller than o(A). We can then recursively define stable representations of A[n] in the following way. Let l_n be the length of A[n] for all n∈ℕ. A stable representation f_0 of A[0] is simply f restricted to l_0. Using variable names from the definition of BMS, this representation trivially maps indices (in A[0]) of columns in B_0 to the ordinals to which f maps indices (in A) of columns in B_0. Let f_n be a stable representation of A[n] that maps indices (in A[n]) of columns in B_n to the ordinals to which f maps indices (in A) of columns in B_0. Then using the reflection property from Lemma <ref> with α being the ordinal to which f_n maps the index of the first column in B_n, β being the ordinal to which f maps the index of the last column of A, X being the set of ordinals to which f_n maps indices of columns before B_n, and Y being the set of ordinals to which f_n maps indices of columns in B_n (or to which f maps indices of columns in B_0), we get a set Y' of ordinals due to α<_m_0β. We can then define f_n+1 by making it the same as f_n for indices of columns before B_n, mapping indices of columns in B_n to the elements of Y', and mapping indices (in A[n+1]) of columns in B_n+1 to the elements of Y. It follows from Lemma <ref> that f_n+1 is a stable representation of A[n+1]. Then it's trivially a stable representation of A[n+1] that maps indices of columns in B_n+1 to the ordinals to which f maps indices of columns in B_0, therefore by induction, for all m∈ℕ, there is a stable representation of A[m] that maps indices of columns in B_m to the ordinals to which f maps indices of columns in B_0. Since all these ordinals are smaller than β, o(A[m]) is defined and is at most β, and since β is an output of f, it is smaller than o(A), so o(A[m])<o(A), which means o is defined and order-preserving (due to the order being originally defined only by comparing an array with its expansions) on Z∪{A[m] : A∈ Z m∈ℕ}. Now, similarly to the proof of Lemma <ref>, with X_0={((0,0,...,0,0_n),(1,1,...,1,1_n)) : n∈ℕ}, we conclude that o is defined and order-preserving on X_0∪{A[n_0] : A∈ X_0 n_0∈ℕ}∪{A[n_0][n_1] : A∈ X_0 n_0,n_1∈ℕ}∪...∪{A[n_0][n_1]...[n_m] : A∈ X_0 n_0,n_1,...,n_m∈ℕ} for each m∈ℕ, and since all A,A'∈ BMS are also in this set for some m, o is defined for them and their order is preserved by o, so o is defined and order-preserving on all of BMS. Then if BMS were not well-ordered, there would be an infinite descending sequence in BMS, which would get mapped to an infinite descending sequence of ordinals by o, and that cannot exist by the definition of ordinals. Therefore BMS is well-ordered. 
§ FUTURE RESEARCH We hope to use BMS in ordinal analysis, first using it to rewrite analyses of theories that have already been analyzed by other means, and then analyzing even stronger theories, ideally up to full second-order arithmetic if the order type of BMS is large enough for that. Once this approach proves viable, we also plan to continue proving the well-orderedness of similar notation systems with larger order types, such as the Y sequence <cit.> and its extension, the ω-Y sequence <cit.>. Another relevant challenge is to find a "self-contained" proof of the well-orderedness of BMS (that is, a proof using only concepts that are directly related to BMS, which excludes stability and ordinal collapsing functions), as this would simplify the translation of the proof to theories that only deal with basic structures, such as third-order arithmetic. § ACKNOWLEDGEMENTS I would like to express my deepest gratitude to the Discord user C7X (also known as Convindix) for proofreading this paper, as well as for introducing me to the concept of stability years ago, which led to this paper's very existence. I also want to thank the googology and apeirology community for the doubt that motivated me to finish this paper. § REFERENCES
http://arxiv.org/abs/2307.04796v1
20230710180007
Towards $gg\to HH$ at next-to-next-to-leading order: light-fermionic three-loop corrections
[ "Joshua Davies", "Kay Schönwald", "Matthias Steinhauser" ]
hep-ph
[ "hep-ph" ]
-3cm14pt P3H-23-043, TTP23-024, ZU-TH 34/23 1.5cm Towards gg→ HH at next-to-next-to-leading order: light-fermionic three-loop corrections Joshua Davies^a, Kay Schönwald^b, Matthias Steinhauser^c (a) Department of Physics and Astronomy, University of Sussex, Brighton BN1 9QH, UK (b) Physik-Institut, Universität Zürich, Winterthurerstrasse 190, 8057 Zürich, Switzerland (c) Institut für Theoretische Teilchenphysik, Karlsruhe Institute of Technology (KIT), Wolfgang-Gaede Straße 1, 76128 Karlsruhe, Germany ============================================================================================================================================================================================================================================================================================================================================================================================================================ empty We consider light-fermion three-loop corrections to gg→ HH using forward scattering kinematics in the limit of a vanishing Higgs boson mass, which covers a large part of the physical phase space. We compute the form factors and discuss the technical challenges. The approach outlined in this letter can be used to obtain the full virtual corrections to gg→ HH at next-to-next-to-leading order. § INTRODUCTION The simultaneous production of two Higgs bosons is a promising process to obtain information about their self-coupling in the scalar sector of the Standard Model and beyond. Its study will be of primary importance after the high-luminosity upgrade of the Large Hadron Collider and thus it is important that there are precise predictions from the theory side. The cross section for Higgs boson pair production is dominated by the gluon-fusion process, which is loop-induced <cit.>. Thus, at next-to-leading (NLO) order the virtual corrections require the computation of two-loop four-point function with massive internal top quarks. There are numerical results which take into account the full dependence of all mass scales <cit.>. Furthermore, there are a number of analytic approximations which are valid in various limits, which cover different parts of the phase space. Particularly appealing approaches have been presented in Refs. <cit.> where the expansion around the forward-scattering kinematics has been combined with the high-energy expansion and it has been shown that the full phase space can be covered. Thus, these results are attractive alternatives to computationally expensive purely numerical approaches. Beyond NLO, current results are based on expansions for large top quark masses. Results in the infinite-mass limit are available at NNLO <cit.> and N^3LO <cit.> and finite 1/m_t corrections have been considered at NNLO in Refs. <cit.>. In Ref. <cit.> the renormalization scheme dependence on the top quark mass has been identified as a major source of uncertainty of the NLO predictions. In general, such uncertainties are reduced after including higher-order corrections, i.e., virtual corrections at NNLO including the exact dependence on the top quark mass. This requires the computation of 2→ 2 scattering amplitudes at three-loop order with massive internal quarks; this is a highly non-trivial problem. Current analytic and numerical methods are not sufficient to obtain results with full dependence on all kinematic variables, as is already the case at two loops. However, after an expansion in the Mandelstam variable t (see Refs. 
<cit.>) and the application of the “expand and match” <cit.> method to compute the master integrals, one obtains semi-analytic results which cover a large part of the phase space. Such a result allows the study of the renormalizations scheme dependence at three-loop order. In this letter we outline a path to the three-loop calculation and present first results for the light-fermionic corrections. Let us briefly introduce the kinematic variables describing the 2→ 2 process, with massless momenta q_1 and q_2 in the initial state and massive momenta q_3 and q_4 in the final state. It is convenient to introduce the Mandelstam variables as s = (q_1+q_2)^2 , t = (q_1+q_3)^2 , u = (q_1+q_4)^2 , where all momenta are incoming. For gg → HH we have q_1^2=q_2^2=0 , q_3^2=m_H^2 , q_4^2=m_H^2 , and the transverse momentum of the final-state particles is given by p_T^2 = u t-m_H^4/s . For Higgs boson pair production one can identify two linearly independent Lorentz structures A_1^μν = g^μν - 1/q_12q_1^ν q_2^μ , A_2^μν = g^μν + 1/p_T^2 q_12( q_33 q_1^ν q_2^μ - 2q_23 q_1^ν q_3^μ - 2q_13 q_3^ν q_2^μ + 2q_12 q_3^μ q_3^ν) , where q_ij = q_i· q_j, which allows us to introduce two form factors in the amplitude M^ab = ε_1,με_2,ν M^μν,ab = ε_1,με_2,νδ^ab X_0 s ( F_1 A_1^μν + F_2 A_2^μν) . Here a and b are adjoint colour indices and X_0 = G_F/2√(2)× T_F α_s(μ)/(2π) with T_F=1/2. G_F is Fermi's constant and α_s(μ) is the strong coupling constant evaluated at the renormalization scale μ. We write the perturbative expansion of the form factors as F = F^(0) + (α_s(μ)/π) F^(1) + (α_s(μ)/π)^2 F^(2) + ⋯ , and decompose F_1 and F_2 into “triangle” and “box” form factors F_1^(k) = 3 m_H^2/s-m_H^2 F^(k)_ tri+F^(k)_ box1 , F_2^(k) = F^(k)_ box2 . In this notation F^(k)_ box1 and F^(k)_ box2 contain both one-particle irreducible and reducible contributions. The latter appear for the first time at two-loop order; exact results for the so-called “double-triangle” contributions can be found in <cit.>. Analytic results for the leading-order form factors are available from <cit.> and the two-loop triangle form factor has been computed in Refs. <cit.>. The main focus of this letter is on the light-fermionic contribution to the three-loop quantities F^(2)_ box1 and F^(2)_ box2 for t=0 and m_H=0. Expansions around the large top quark mass limit of F^(2)_ tri, F^(2)_ box1 and F^(2)_ box2 can be found in Ref. <cit.> and results for F^(2)_ tri valid for all s/m_t^2 have been computed in Refs. <cit.>. We decompose the three-loop form factors as F^(2) = n_lT_F F^(2),n_l = n_lT_F (C_F F^FL + C_A F^AL) + … , where the ellipses stand for further colour factors which we do not consider here. Sample Feynman diagrams contributing to F^FL and F^AL are shown in Fig. <ref>. In this letter we consider t=0 and m_H=0, i.e. the leading term in an expansion around t→ 0 and m_H→ 0. This constitutes a crude approximation, however, in a large part of the phase space it contributes a major part of the corrections. For example, choosing t=0 and m_H=0 at two loops (NLO), at a transverse momentum of p_T=100 GeV the form factor F_ box1 deviates from its exact value by at most 30%, depending on the value of √(s) considered. This means that more than two thirds of the form factor value are covered by the t=0, m_H=0 approximation. Furthermore, we concentrate on the one-particle irreducible contributions. We note that F_ box2 vanishes for t=0. More details are given below in Section <ref>. 
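As a quick numerical illustration of the kinematics introduced above, the short script below constructs s, t and u for a 2→2 process with massless initial and equal-mass final states and checks the relations s+t+u = 2m_H^2 and p_T^2 = (ut - m_H^4)/s. The parametrization in terms of the scattering angle and the velocity β = √(1-4m_H^2/s) is a standard choice and not taken from the letter itself; the numerical values of √s and cosθ are arbitrary examples.

```python
import math

m_H = 125.1       # GeV, Higgs boson mass as used in the letter
sqrt_s = 500.0    # GeV, an example centre-of-mass energy
cos_theta = 0.3   # example scattering angle

s = sqrt_s**2
beta = math.sqrt(1.0 - 4.0 * m_H**2 / s)           # velocity of the final-state Higgs bosons
t = m_H**2 - s / 2.0 * (1.0 - beta * cos_theta)    # t = (q_1 + q_3)^2
u = m_H**2 - s / 2.0 * (1.0 + beta * cos_theta)    # u = (q_1 + q_4)^2

# momentum conservation for massless initial and equal-mass final states
assert abs(s + t + u - 2.0 * m_H**2) < 1e-6 * s

# transverse momentum of the final-state Higgs bosons
pT2 = (u * t - m_H**4) / s
assert abs(pT2 - s / 4.0 * beta**2 * (1.0 - cos_theta**2)) < 1e-6 * s

print(f"t = {t:.1f} GeV^2, u = {u:.1f} GeV^2, p_T = {math.sqrt(pT2):.1f} GeV")
```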
We present here results for the light-fermionic (“n_l”) terms and show that this approach can be used to obtain the three-loop virtual corrections to gg→ HH. The remaining contributions contain many more integral topologies and more complicated integrals, which have to be integration-by-parts (IBP) reduced to master integrals. In the next section we outline the techniques used for the calculations and discuss the results in Section <ref>. In Section <ref> we conclude and provide an outlook for the computation of the full corrections. § TECHNICAL DETAILS The basic philosophy of our calculation has already been outlined in Ref. <cit.>, where the two-loop amplitude for gg→ HH has been considered in the small-t and high-energy limit and it has been shown that the combination of both expansions covers the whole phase-space. The starting point for both expansions is the amplitude expressed in terms of the same master integrals which are obtained from a reduction problem which involves the dimensional variables s, t and m_t.[A Taylor expansion in m_H in a first step eliminates the Higgs boson mass from the reduction problem.] Using currently available tools such a reduction is not possible at three loops. To avoid such an IBP reduction, one can try to expand the unreduced amplitude in the respective limit. The high-energy expansion is obtained via a complicated asymptotic expansion which involves a large number of different regions. On the other hand, the limit t→ 0 leads to a simple Taylor expansion which can be easily realized at the level of the integrands. Furthermore, the expansion around forward-scattering kinematics covers a large part of the physically relevant phase space <cit.>. Our computation begins by generating the amplitude with qgraf <cit.>, and then using tapir <cit.> and exp <cit.> to map the diagrams onto integral topologies and convert the output to FORM <cit.> notation. The diagrams are then computed with the in-house “calc” setup, to produce an amplitude in terms of scalar Feynman integrals. These tools work together to provide a high degree of automation. We perform the calculation for general QCD gauge parameter which drops out once the amplitude is expressed in terms of master integrals. This is a welcome check for our calculation. The scalar integrals can be Taylor expanded in m_H at this point, as done at two loops in Refs. <cit.>, however at three loops in this letter we keep only the leading term in this expansion, i.e., set m_H=0. The next step is to expand the amplitude around the forward kinematics (t→ 0) at the integrand level. This is implemented in FORM by introducing q_δ = q_1+q_3 in the propagators and expanding in q_δ to the required order. Note that q_δ^2=t. After treating the tensor integrals, where q_δ appears contracted with a loop momentum, we need to perform a partial-fraction decomposition to eliminate linearly dependent propagators. The partial fractioning rules are produced automatically by tapir when run with the forward kinematics (q_3=-q_1) specified[In an alternative approach, we have also used LIMIT <cit.> to generate the partial fractioning rules.]. Note that although for the present publication we compute the “t=0 contribution”, we must properly expand in q_δ to produce the amplitude to order t^0 due to inverse powers of t appearing in the projectors. These inverse powers ultimately cancel in the final result. 
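At the integrand level, the expansion around forward kinematics amounts to a Taylor expansion of each propagator in the small momentum q_δ. The sympy snippet below illustrates the pattern on a single schematic propagator 1/((ℓ+q_δ)^2 - m_t^2), with all q_δ-dependent pieces collected into one small quantity; this is only a toy illustration of the expansion step, not the FORM implementation used for the actual amplitude.

```python
import sympy as sp

# D0 stands for the unexpanded denominator (l^2 - m_t^2); delta collects the
# q_delta-dependent terms 2 l.q_delta + q_delta^2 that are treated as small.
D0, delta = sp.symbols('D0 delta')

propagator = 1 / (D0 + delta)

# Taylor expansion in delta up to (and including) second order, mirroring the
# expansion of the integrand around forward kinematics (t -> 0).
expansion = sp.series(propagator, delta, 0, 3).removeO()
print(sp.simplify(expansion))   # 1/D0 - delta/D0**2 + delta**2/D0**3
```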
This procedure yields amplitudes for F_ box1 and F_ box2 in terms of scalar Feynman integrals which belong to topologies which depend only on s and m_t (and not on t). At this point the amplitudes are written in terms of 60 integral topologies, however these are not all independent; they can be reduced to a smaller set by making use of loop-momentum shifts and identification of common sub-sectors. In one approach we find these rules with the help of LiteRed <cit.>, which identifies a minimal set of 28 topologies. In a second approach we use Feynson <cit.> to generate these maps and end up with 53 topologies. The difference in the number of topologies is due to LiteRed mapping topology sub-sectors, while we used Feynson only at the top level. When considering the full amplitude, i.e., not just the light-fermionic corrections, only the Feynson approach is feasible for performance reasons. It is also possible to use Feynson to find sub-sector mappings, which we will also use when considering the full amplitude (which is written initially in terms of 522 integral topologies). The amplitude is now ready for a reduction to master integrals using Kira <cit.> and  <cit.>. The most complicated integral topology took about a week on a 16-core node, using around 500GB of memory. After minimizing the final set of master integrals across the topologies with Kira, we are left with 177 master integrals to compute. Comparing results obtained via the LiteRed and Feynson topology-mapping approaches reveals one additional relation within this set which is missed by Kira, however, we compute the set of 177 master integrals which was first identified. To compute the master integrals, we first establish a system of differential equations w.r.t. x=s/m_t^2. Boundary conditions are provided in the large-m_t (x→ 0) limit: we prepare the three-loop integrals in the forward kinematics, and pass them to exp which automates the asymptotic expansion in the limit that m_t^2 ≫ s. This leads to three-loop vacuum integrals, as well as products of one- and two-loop vacuum integrals with two- and one-loop massless s-channel 2→ 1 integrals, respectively. This expansion leads to tensor vacuum integrals, which our “calc” setup can compute up to rank 10. We compute the first two expansion terms in s/m_t^2 for each of the 177 master integrals. To fix the boundary constants for the differential equations we only need about half of the computed coefficients; the rest serve as consistency checks. The differential equations are then used to produce 100 expansion terms for the forward-kinematics master integrals in the large-m_t limit which we use to compute F_ box1. Since these results are analytic in the large-m_t limit we can compare with the results obtained in Ref. <cit.> in the limit t=0, and find agreement. The final step is to use the “expand and match” approach <cit.> to obtain “semi-analytic” results which cover the whole s range. Note that this approach properly takes into account the threshold effects at the point s = 4m_t^2. “Semi analytic” means that our final results consist of expansions around a set of x values, where the expansion coefficients are available only numerically. Starting from the (analytic) expansion around x=0, each expansion provides numeric boundary conditions to fix the coefficients of the subsequent expansion. Each expansion is only ever evaluated within its radius of convergence. 
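To illustrate the logic of the “expand and match” procedure in a self-contained way, the toy script below solves the simple differential equation y'(x) = y(x) by power series: the expansion around x=0 carries the analytic boundary condition, and its numerical value at an intermediate point fixes the boundary constant of a second expansion centred there. This mimics how successive expansions are chained together and evaluated only within their radius of convergence; it is of course unrelated to the actual three-loop system of master integrals.

```python
import math

def series_coeffs(y0, n_terms):
    """Power-series coefficients of the solution of y' = y with y(x0) = y0,
    expanded around an arbitrary point x0: c_{k+1} = c_k / (k + 1)."""
    coeffs = [y0]
    for k in range(n_terms - 1):
        coeffs.append(coeffs[-1] / (k + 1))
    return coeffs

def evaluate(coeffs, dx):
    """Evaluate the truncated series at a displacement dx from its expansion point."""
    return sum(c * dx**k for k, c in enumerate(coeffs))

# Expansion 1: around x0 = 0 with the analytic boundary condition y(0) = 1.
c1 = series_coeffs(1.0, 20)

# "Match": the numerical value of expansion 1 at x = 0.6 fixes the boundary
# constant of expansion 2, which is centred at x0 = 0.6.
y_at_06 = evaluate(c1, 0.6)
c2 = series_coeffs(y_at_06, 20)

# Expansion 2 is then used further away from the origin.
y_at_12 = evaluate(c2, 0.6)
print(y_at_12, math.exp(1.2))   # the two numbers agree to high accuracy
```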
§ THREE-LOOP LIGHT-FERMIONIC CONTRIBUTIONS TO F_ BOX1 In this section we present the light-fermionic three-loop corrections to the form factor F_ box1 for Higgs boson pair production. We note again that in our t=0, m_H=0 approximation, F_ box2 vanishes; we observe this after IBP reduction and writing the result in terms of the minimal set of master integrals. We obtain the renormalized form factors after the renormalization of the parameters α_s and m_t and the wave functions of the gluons in the initial state. We then express our results in terms of α_s^(5) and treat the remaining infrared divergences following Ref. <cit.>.[For more details see Section 4 of Ref. <cit.> where analytic large-m_t results for F_ box1 and F_ box2 have been computed at three-loop order.] This leads to finite results for F_ box1. In the following we present numerical results. For the top quark and Higgs boson masses, we use the values m_t = 173.21 GeV and m_H=125.1 GeV. Let us first discuss the one- and two-loop results. In Fig. <ref> we show the real part of F_ box1 for p_T=100 GeV. In red, we show the approximation that we use at three loops, i.e., t=0 and m_H=0. In black, we show curves with the full dependence on t and m_H. At one loop this is the fully exact result, but at two loops this is an expansion to order t^5 and m_H^4; we have shown in Ref. <cit.> that this provides an extremely good approximation of the (unknown) fully exact result. We observe that the t=0, m_H=0 curves approximate the “exact” results with an accuracy of about 30% in the region below about √(s)=500 GeV. For higher energies the approximation works better. In Fig. <ref> we also show blue curves which include expansion terms up to t^5, but still only the leading term in the m_H expansion. These curves lie very close to the red t=0, m_H=0 curves, which show that for p_T≈ 100 GeV it is more important to incorporate additional terms in the m_H expansion than in the t expansion. For higher values of p_T we expect that higher t expansion terms become more important. This can be seen in Fig. <ref> where results of the two-loop form factor are shown for various values of p_T. The panels also show that a large portion of the cross section is covered by the t=0 approximation, even for p_T=200 GeV where, for lower values of √(s), about 50% are captured by the red curve. In Fig. <ref> we show the new results obtained in this letter. The plots show both the real (in red) and imaginary (in green) parts of the light-fermionic part of F_ box1, both separated into the C_F and C_A colour factor contributions, and their combination. We observe a strong variation of the form factor around the top quark pair threshold region. This behaviour is not caused by a loss of precision of our semi-analytic expansions around this threshold; indeed F_ box1 is finite in the limit s → 4 m_t^2, however whereas at two loops we observe leading logarithmic contributions which go like v logv, where v = √(1-4m_t^2/s), at three loops we find an additional power of logv which is responsible for the large variation around this point. The numerical value of the light-fermionic contribution to F_ box1 at three-loops exceeds the size of the two-loop form factor by almost an order of magnitude. Although this is compensated by the additional factor of α_s/π, this hints at sizeable three-loop corrections. However, for a final conclusion, the remaining diagrams need to be computed. The full computation will also allow a study of the top quark mass scheme dependence. 
These issues will be addressed in a future publication. § CONCLUSIONS The computation of three-loop corrections to 2→2 scattering processes with massive internal particles is a technically challenging task. Currently-available techniques are most likely not sufficient to obtain analytic or numerical results without applying any approximation. In this letter we apply the ideas of Refs. <cit.> to gg→ HH and show that three-loop corrections can be obtained. We concentrate on the light-fermionic three-loop contributions which is a well-defined and gauge-invariant subset. The obtained results are valid for t=0 and m_H=0 which approximates the full result to 30% or better for p_T≈ 100 GeV. The approach outlined in this letter can also be used to compute the remaining colour factor contributions, which are needed to study the overall impact of the three-loop virtual corrections and also the top quark mass renormalization scheme dependence. In addition to the remaining colour factors, we ultimately aim to compute the t^1 m_H^2 approximation which would address the 30% error discussed in Section <ref>, improve the approximation for higher values of p_T, and provide a non-zero value for F_ box2. To compute these terms will require significantly more CPU time and, most likely, improvements to IBP reduction software in order to efficiently reduce the large numbers of integrals produced by the expansions. § ACKNOWLEDGEMENTS We would like to thank Go Mishima for many useful discussions and Fabian Lange for patiently answering our questions concerning Kira. This research was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under grant 396021762 — TRR 257 “Particle Physics Phenomenology after the Higgs Discovery” and has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme grant agreement 101019620 (ERC Advanced Grant TOPUP). The work of JD was supported by the Science and Technology Facilities Council (STFC) under the Consolidated Grant ST/T00102X/1. 99 Glover:1987nx E. W. N. Glover and J. J. van der Bij, Nucl. Phys. B 309 (1988), 282-294 Borowka:2016ehy S. Borowka, N. Greiner, G. Heinrich, S. P. Jones, M. Kerner, J. Schlenk, U. Schubert and T. Zirke, Phys. Rev. Lett. 117 (2016) no.1, 012001 [erratum: Phys. Rev. Lett. 117 (2016) no.7, 079901] [arXiv:1604.06447 [hep-ph]]. Borowka:2016ypz S. Borowka, N. Greiner, G. Heinrich, S. P. Jones, M. Kerner, J. Schlenk and T. Zirke, JHEP 10 (2016), 107 [arXiv:1608.04798 [hep-ph]]. Baglio:2018lrj J. Baglio, F. Campanario, S. Glaus, M. Mühlleitner, M. Spira and J. Streicher, Eur. Phys. J. C 79 (2019) no.6, 459 [arXiv:1811.05692 [hep-ph]]. Bellafronte:2022jmo L. Bellafronte, G. Degrassi, P. P. Giardino, R. Gröber and M. Vitti, JHEP 07 (2022), 069 doi:10.1007/JHEP07(2022)069 [arXiv:2202.12157 [hep-ph]]. Davies:2023vmj J. Davies, G. Mishima, K. Schönwald and M. Steinhauser, JHEP 06 (2023), 063 [arXiv:2302.01356 [hep-ph]]. deFlorian:2013jea D. de Florian and J. Mazzitelli, Phys. Rev. Lett. 111 (2013), 201801 [arXiv:1309.6594 [hep-ph]]. deFlorian:2013uza D. de Florian and J. Mazzitelli, Phys. Lett. B 724 (2013), 306-309 [arXiv:1305.5206 [hep-ph]]. Grigo:2014jma J. Grigo, K. Melnikov and M. Steinhauser, Nucl. Phys. B 888 (2014), 17-29 [arXiv:1408.2422 [hep-ph]]. Chen:2019lzz L. B. Chen, H. T. Li, H. S. Shao and J. Wang, Phys. Lett. B 803 (2020), 135292 [arXiv:1909.06808 [hep-ph]]. Chen:2019fhs L. B. Chen, H. T. Li, H. S. Shao and J. 
Wang, JHEP 03 (2020), 072 [arXiv:1912.13001 [hep-ph]]. Grigo:2015dia J. Grigo, J. Hoff and M. Steinhauser, Nucl. Phys. B 900 (2015), 412-430 [arXiv:1508.00909 [hep-ph]]. Davies:2019djw J. Davies and M. Steinhauser, JHEP 10 (2019), 166 [arXiv:1909.01361 [hep-ph]]. Davies:2021kex J. Davies, F. Herren, G. Mishima and M. Steinhauser, JHEP 01 (2022), 049 [arXiv:2110.03697 [hep-ph]]. Baglio:2020wgt J. Baglio, F. Campanario, S. Glaus, M. Mühlleitner, J. Ronca and M. Spira, Phys. Rev. D 103 (2021) no.5, 056002 [arXiv:2008.11626 [hep-ph]]. Degrassi:2022mro G. Degrassi, R. Gröber, M. Vitti and X. Zhao, JHEP 08 (2022), 009 doi:10.1007/JHEP08(2022)009 [arXiv:2205.02769 [hep-ph]]. Fael:2021xdp M. Fael, F. Lange, K. Schönwald and M. Steinhauser, SciPost Phys. Proc. 7 (2022), 041 [arXiv:2110.03699 [hep-ph]]. Fael:2022miw M. Fael, F. Lange, K. Schönwald and M. Steinhauser, Phys. Rev. D 106 (2022) no.3, 034029 [arXiv:2207.00027 [hep-ph]]. Degrassi:2016vss G. Degrassi, P. P. Giardino and R. Gröber, Eur. Phys. J. C 76 (2016) no.7, 411 [arXiv:1603.00385 [hep-ph]]. Plehn:1996wb T. Plehn, M. Spira and P. M. Zerwas, Nucl. Phys. B 479 (1996), 46-64 [erratum: Nucl. Phys. B 531 (1998), 655-655] [arXiv:hep-ph/9603205 [hep-ph]]. Harlander:2005rq R. Harlander and P. Kant, JHEP 12 (2005), 015 [arXiv:hep-ph/0509189 [hep-ph]]. Anastasiou:2006hc C. Anastasiou, S. Beerli, S. Bucherer, A. Daleo and Z. Kunszt, JHEP 01 (2007), 082 [arXiv:hep-ph/0611236 [hep-ph]]. Aglietti:2006tp U. Aglietti, R. Bonciani, G. Degrassi and A. Vicini, JHEP 01 (2007), 021 [arXiv:hep-ph/0611266 [hep-ph]]. Davies:2019nhm J. Davies, R. Gröber, A. Maier, T. Rauh and M. Steinhauser, Phys. Rev. D 100 (2019) no.3, 034017 [erratum: Phys. Rev. D 102 (2020) no.5, 059901] [arXiv:1906.00982 [hep-ph]]. Davies:2019roy J. Davies, R. Gröber, A. Maier, T. Rauh and M. Steinhauser, PoS RADCOR2019 (2019), 079 [arXiv:1912.04097 [hep-ph]]. Harlander:2019ioe R. V. Harlander, M. Prausa and J. Usovitsch, JHEP 10 (2019), 148 [erratum: JHEP 08 (2020), 101] [arXiv:1907.06957 [hep-ph]]. Czakon:2020vql M. L. Czakon and M. Niggetiedt, JHEP 05 (2020), 149 doi:10.1007/JHEP05(2020)149 [arXiv:2001.03008 [hep-ph]]. Nogueira:1991ex P. Nogueira, J. Comput. Phys. 105 (1993), 279-289; . Gerlach:2022qnc M. Gerlach, F. Herren and M. Lang, Comput. Phys. Commun. 282 (2023), 108544 [arXiv:2201.05618 [hep-ph]]. Harlander:1997zb R. Harlander, T. Seidensticker and M. Steinhauser, Phys. Lett. B 426 (1998) 125 [hep-ph/9712228]. Seidensticker:1999bb T. Seidensticker, hep-ph/9905298. Ruijl:2017dtg B. Ruijl, T. Ueda and J. Vermaseren, [arXiv:1707.06453 [hep-ph]]. Davies:2018ood J. Davies, G. Mishima, M. Steinhauser and D. Wellmann, JHEP 03 (2018), 048 [arXiv:1801.09696 [hep-ph]]. Davies:2018qvx J. Davies, G. Mishima, M. Steinhauser and D. Wellmann, JHEP 01 (2019), 176 [arXiv:1811.05489 [hep-ph]]. Herren:2020ccq F. Herren, “Precision Calculations for Higgs Boson Physics at the LHC - Four-Loop Corrections to Gluon-Fusion Processes and Higgs Boson Pair-Production at NNLO,” PhD thesis, 2020, KIT. Lee:2013mka R. N. Lee, J. Phys. Conf. Ser. 523 (2014), 012059 [arXiv:1310.1145 [hep-ph]]. Magerya:2022esf V. Magerya, “Semi- and Fully-Inclusive Phase-Space Integrals at Four Loops,” Dissertation, University of Hamburg, 2022. . Klappert:2020nbg J. Klappert, F. Lange, P. Maierhöfer and J. Usovitsch, Comput. Phys. Commun. 266 (2021), 108024 [arXiv:2008.06494 [hep-ph]]. Klappert:2020aqs J. Klappert, S. Y. Klein and F. Lange, Comput. Phys. Commun. 264 (2021), 107968 [arXiv:2004.01463 [cs.MS]]. Klappert:2019emp J. 
Klappert and F. Lange, Comput. Phys. Commun. 247 (2020), 106951 [arXiv:1904.00009 [cs.SC]]. Catani:1998bh S. Catani, Phys. Lett. B 427 (1998), 161-171 [arXiv:hep-ph/9802439 [hep-ph]].
http://arxiv.org/abs/2307.04937v1
20230710232803
Improving Fairness of Graph Neural Networks: A Graph Counterfactual Perspective
[ "Zhimeng Guo", "Jialiang Li", "Teng Xiao", "Yao Ma", "Suhang Wang" ]
cs.LG
[ "cs.LG" ]
The Pennsylvania State University United States [email protected] New Jersey Institute of Technology United States [email protected] The Pennsylvania State University United States [email protected] New Jersey Institute of Technology United States [email protected] The Pennsylvania State University United States [email protected] Graph neural networks (GNNs) have shown great ability in representation learning on graphs, facilitating various tasks. Despite their great performance in modeling graphs, recent works show that GNNs tend to inherit and amplify the bias from training data, raising concerns about the adoption of GNNs in high-stake scenarios. Hence, many efforts have been devoted to fairness-aware GNNs. However, most existing fair GNNs learn fair node representations by adopting statistical fairness notions, which may fail to alleviate bias in the presence of statistical anomalies. Motivated by causal theory, there have been several attempts to utilize graph counterfactual fairness to mitigate the root causes of unfairness. However, these methods suffer from non-realistic counterfactuals obtained by perturbation or generation. In this paper, we take a causal view of the fair graph learning problem. Guided by the causal analysis, we propose a novel framework that selects counterfactuals from the training data to avoid non-realistic counterfactuals, and adopts the selected counterfactuals to learn fair node representations for the node classification task. Extensive experiments on synthetic and real-world datasets show the effectiveness of the proposed framework. Improving Fairness of Graph Neural Networks: A Graph Counterfactual Perspective =============================================================================== § INTRODUCTION Graphs are pervasive in the real world, such as knowledge graphs <cit.>, social networks <cit.> and biological networks <cit.>. Recently, graph neural networks (GNNs) <cit.> have shown great ability in modeling graph-structured data. Generally, GNNs adopt the message passing mechanism, which updates a node's representation by iteratively aggregating its neighbors' representations. The resulting representation preserves both node attributes and local graph structure information, facilitating various downstream tasks such as node classification <cit.> and link prediction <cit.>. Despite their great performance, recent studies <cit.> show that GNNs tend to inherit bias from the training data, which may result in biased predictions towards sensitive attributes, such as age, gender and race. In addition, the message passing mechanism of GNNs and the graph structure could magnify the bias <cit.>. For example, in social networks, nodes of the same race are more likely to connect to each other. The message passing of GNNs would make the representations of linked nodes similar, resulting in a high correlation of the node representations with race and hence biased predictions.
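As a minimal illustration of the message-passing update described above, one propagation step with mean aggregation can be written as follows. This is a generic sketch, not tied to any particular GNN used later in the paper; the dense adjacency matrix and learnable weight W are placeholders.

```python
import torch

def mean_aggregation_step(A: torch.Tensor, H: torch.Tensor, W: torch.Tensor) -> torch.Tensor:
    """One message-passing layer: A is a dense [N, N] adjacency matrix, H the [N, d] node
    representations. Each node averages its neighbours' representations (including its own
    via a self-loop) and applies a learnable linear map W followed by a nonlinearity."""
    A_hat = A + torch.eye(A.size(0))        # add self-loops
    deg = A_hat.sum(dim=1, keepdim=True)    # node degrees (>= 1 because of the self-loop)
    return torch.relu((A_hat @ H / deg) @ W)
```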
The biased prediction has raised concerns from ethical and societal perspectives, which severely limits the adoption of GNNs in high-stake decision-making systems, such as job applicants ranking <cit.> and criminal prediction <cit.>. Hence, many efforts have been taken for fair GNNs <cit.>. However, most existing methods are based on statistical fairness notions, which aim to make statistically fair predictions for different sub-groups or individuals <cit.>. Several works have pointed out such fairness notions fail to detect discrimination in the presence of statistical anomalies <cit.>. Therefore, there has been a recent shift toward counterfactual fairness in graph modeling <cit.>. This approach aims to eradicate the root causes of bias by mapping the causal relationships among variables. The identified causal structure allows for the adjustment of sensitive data to generate counterfactuals, ensuring that the prediction remains unaltered by the sensitive information through the utilization of these counterfactuals. For example, NIFTY <cit.> perturbs sensitive attributes to obtain counterfactuals and maximizes the similarity between original representations and perturbed representations to make representations invariant to sensitive attributes. GEAR <cit.> adopts GraphVAE <cit.> to generate counterfactuals and minimizes the discrepancy between original representations and counterfactual representations to get rid of the influence of sensitive attributes. Despite their superior performance, existing graph counterfactual fairness works need to flip sensitive attributes or generate counterfactuals with GraphVAE, which can easily result in non-realistic counterfactuals Such non-realistic counterfactuals may disrupt the underlying latent semantic structure, thereby potentially undermining the model's performance. This is because simply flipping sensitive attributes cannot model the influence on other features or graph structure causally caused by sensitive attributes <cit.>, and the generative approach lacks supervision of real counterfactuals and could be over-complicated <cit.>. Motivated by the discussion above, in this paper, we investigate whether one can obtain counterfactuals within the training data. For example, if a female applicant was rejected by a college, we aim to find another male applicant who has a similar background as the counterfactual applicant. Thus, we can get realistic counterfactuals and avoid the ill-supervised generation process. To achieve our goal, we are faced with several challenges: (i) Graph data is quite complex, thus it is infeasible to directly find counterfactuals in the original data space. Besides, some guidance or rules are needed to find the counterfactuals. (ii) To achieve graph counterfactual fairness, learned representation should be invariant to sensitive attributes and information causally influenced by sensitive attributes. It is critical to design proper supervision to help models get rid of sensitive information. To tackle the aforementioned challenges, we propose a casual view of the graph, label and sensitive attribute. The causal interpretation guides us to find counterfactuals and learn disentangled representations, where the disentangled content representations are informative to the labels and invariant to the sensitive attributes. 
Guided by the causal analysis, we propose a novel framework, Counterfactual Augmented Fair GNN (), to simultaneously learn fair node representations for graph counterfactual fairness and keep the performance on node classification tasks. Specifically, based on the causal interpretation, we derive several constraints to enforce the learned representations being invariant across different sensitive attributes. To obtain proper counterfactuals to guide representation learning, we utilize labels and sensitive attributes as guidance to filter out potential counterfactuals in representation space. The main contributions of our work can be summarized as: * We provide a causal formulation of the fair graph learning process and fair node representation learning task. * We propose a novel framework to learn node representations for graph counterfactual fairness. Specifically, we find counterfactuals in representation space and design novel constraints to learn the content representations. * We conduct extensive experiments on real-world datasets and synthetic dataset to show the effectiveness of our model on the fairness-prediction trade-off. § RELATED WORKS In this section, we review related works, including graph neural networks and fairness-aware GNNs. §.§ Graph Neural Networks Graph neural networks (GNNs) have dominated various tasks on graph-structured data, such as node classification <cit.>, graph classification <cit.> and link prediction <cit.>. Existing GNNs can be categorized into spatial-based GNNs and spectral-based GNNs. Spatial-based GNNs leverage the graph structure directly, focusing on the relationships between nodes and their immediate neighbors to inform feature learning. On the other hand, spectral-based GNNs operate in the spectral domain defined by the graph Laplacian and its eigenvectors, making them better suited to capture global properties of the graph. The superior performance of GNNs has greatly extended their application scenarios <cit.>. For example, banks may leverage GNNs to process transaction networks to detect the abnormal behavior of users <cit.>. The applications in critical decision-making systems place higher requirements for GNNs, such as being fair and interpretable <cit.>. Despite their extensive utility and efficacy, recent studies <cit.> show that GNNs can harbor implicit biases on different groups, which can lead to skewed or unfair outcomes. This bias issue is particularly critical when GNNs are deployed in high-stake scenarios, making it necessary to ensure fairness in the modeling process <cit.>. Thus, mitigating bias and promoting fairness in GNNs are active and necessary research areas <cit.>. The source of bias in Graph Neural Networks (GNNs) primarily originates from two areas. First, it comes from the inherent bias in the input data, which may contain unequal representation or prejudiced information about nodes or connections in the graph. Second, the bias can stem from the algorithmic design of the GNN itself, which may unintentionally emphasize certain features or connections over others during the learning process. Therefore, there is a trend for the research community to design fairer GNN models to deal with graph-based tasks <cit.>. §.§ Fairness in GNNs Fairness is a widely-existed issue of machine learning systems <cit.>. Researchers evaluate the fairness of models with many kinds of fairness notions, including group fairness <cit.>, individual fairness <cit.> and counterfactual fairness <cit.>. 
The metrics can also be used to measure the fairness performance of GNNs <cit.>. The commonly used fairness notions in GNNs are statistical parity <cit.> and equal opportunity <cit.>. FairGNN <cit.> utilizes adversarial training to establish fairness in graph-based models, refining its representation through an adversary tasked with predicting sensitive attributes. EDITS <cit.>, on the other hand, is a pre-processing technique that focuses on ensuring fairness in graph learning. It aims to eliminate sensitive information from the graph data by correcting any inherent biases present within the input network. However, these methods and their metrics are developed based on correlation <cit.>, which has been found to be unable to deal with statistical anomalies, such as Simpson's paradox <cit.>. Based on the causal theory, counterfactual fairness can model the causal relationships and gets rid of the correlation-induced abnormal behavior <cit.>. There is an increasing interest to apply counterfactual fairness on graphs to design fairer GNNs <cit.>. NIFTY <cit.> perturbs sensitive attributes for each node to obtain counterfactuals and omits the causal relationships among variables. GEAR <cit.> uses GraphVAE <cit.> to generate the graph structure and node features causally caused by the sensitive attributes. However, the encoder-decoder-encoder scheme is over-complex and may suffer from information loss. Our paper is inherently different from existing work: (i) Unlike existing works that might generate unrealistic counterfactuals, our work avoids the generation process and selects counterfactuals with sensitive attributes and labels as guidance; and (ii) We propose a causal view to understand the source of bias. Based on the causal interpretation, we also design several constraints to help our model learn the fair node representations. § PRELIMINARIES In this section, we start by introducing the necessary notation and defining the problem at hand. Following this, we employ the Structural Causal Model to frame the issue, which will then motivate our solution - the disentangled fair representation learning method. §.§ Notations and Problem Definition Throughout the paper, we use italicized uppercase letters to represent random variables (e.g., S, E) and use italicized lowercase letters to denote the specific value of scalars (e.g., s, y_i). Non-italicized bold lowercase and uppercase letters are used to denote specific values of vectors (e.g., 𝐱_i) and matrices (e.g., 𝐗), respectively. Let 𝒢=(𝒱, ℰ, 𝐗) denote an attributed graph, where 𝒱={v_1, ..., v_N} is the set of N nodes, ℰ⊆𝒱×𝒱 is the set of edges, 𝐗∈ℝ^N × D is the node attribute matrix. The i-th row of 𝐗, i.e., 𝐱_i is the feature vector of node v_i. 𝐀∈{0,1}^N × N is the adjacency matrix of the graph 𝒢, where 𝐀_ij=1 if nodes v_i and v_j are connected; otherwise 𝐀_ij=0. We use 𝐬∈{0,1}^N × 1 to denote the sensitive attributes, where s_i is the sensitive attribute of v_i. Following <cit.>, we only consider binary sensitive attributes and leave the extension of multi-category sensitive attributes as future work. We use 𝐲∈{1,...,c}^N× 1 to denote the ground-truth node labels, where y_i is the label of v_i. In this paper, we assume that both target labels and sensitive attributes are binary variables for convenience. For the semi-supervised node classification task, only part of nodes 𝒱_L ∈𝒱 are labeled for training and the remaining nodes 𝒱_U=𝒱\𝒱_L are unlabeled. 
The goal is to train a classifier f to predict the labels of unlabeled nodes, which has satisfied node classification performance and fairness performance simultaneously. Given 𝐗, 𝐀 and 𝐘_L, the goal of semi-supervised node classification is to learn a mapping function f to predict the labels of unlabeled nodes, i.e., f: (𝐀, 𝐗) →𝒴_U, where 𝒴_U the set of predicted labels of unlabeled nodes 𝒱_U. §.§ The Desiderata for Fair Graph Learning GNNs have shown remarkable capabilities in the realm of semi-supervised node classification. However, they are not immune to bias issues, primarily stemming from imbalanced or prejudiced input data, and potentially from the structural design of the GNNs themselves, which may inadvertently prioritize certain features or connections. Therefore, substantial efforts have been directed towards developing fairness-aware methodologies within GNNs. The majority of these methods strive to ensure correlation-based fairness notions, such as demographic parity or equality of opportunity. However, these correlation-based fairness notions can be inherently flawed, particularly in the presence of statistical anomalies, which calls for more nuanced and robust approaches to achieve fairness in GNNs. Recent advance <cit.> shows that causal-based fairness notions can help resolve this issue. Thus, to help design a fair GNN classifier, we take a deep causal look under the observed graph. Without loss of generality, in this work, we focus on the node classification task and construct a Structural Causal Model <cit.> in Figure <ref>. It presents the causal relationships among five variables: sensitive attribute S, ground-truth label Y, environment feature E, content feature C and ego-graph G for each node. Each link denotes a deterministic causal relationship between two variables. We list the following explanations for the SCM: * S → E. The variable E denotes latent environment features that are determined by the sensitive attribute S. For example, people of different genders will have different heights or other physical characteristics, where S is the sensitive attribute of genders and E is physical characteristics that are causally determined by the sensitive attribute. This relationship will lead to bias in latent feature space, which we will explain shortly. * C → Y. The variable C denotes the content feature that determines ground-truth label Y. Taking the credit scoring as an example, ideally, we assign credit scores using personal information not related to the sensitive attribute, i.e., we use content feature C instead of E to assign credit score Y. * E → G ← C. The ego-graph G is determined by the content feature C and the environment feature E, which are two disjoint parts. E and C are latent features and G is the observed ego-graph. Considering a one-hop ego-graph, it contains the social connections of center node and the observed feature of center node. The causal relationship indicates environment feature E and content feature C can determine one's social connections and personal features (node attributes). The SCM paves us a way to understand the source of bias and how to design a fair GNN classifier. Next, we give details about source of bias and disentangled learning. Our objective is to approximate the content feature C with a content representation denoted as Ĉ, and similarly, approximate the environment feature E with an environment representation denoted as Ê. 
To streamline our discussion, we will slightly abuse notation by also employing the symbols C and E to signify the corresponding content and environment representations throughout the remainder of the paper. §.§.§ Source of Bias From the causal graph, we can observe that the sensitive variable S and the label variable Y are independent with each other, i.e., the only path from S to Y, S→ E → G ← C ← Y is blocked by the collider G. However, it is worthy noting that S and Y are dependent conditioned on G, i.e., P(Y,S|G) P(Y|G) P(S|G) The conditional dependency of Y and S on G is one major reason that leads to biased prediction. If we directly learn a GNN model that aims to predict Y based on G, as Y and S are dependent given G, the learned label Y will have correlation with S, resulting in the biased prediction on sensitive attribute S. Alternatively, we can understand the bias by treating existing GNNs as composed of a feature extractor g and a classifier c. The feature extractor g takes the subgraph centered at a node as input and learns node representation as 𝐳 = g(G). Then the classifier c uses the representation 𝐳 to predict the label as ŷ = c(𝐳). As G is dependent on E and C, the learned representation 𝐳 is likely to contain mixed information of both E and C. Hence, the predicted label ŷ is also likely to have correlation with S. §.§.§ Disentangled Fair Representation Learning From the above analysis motivates, we can observe that in order to have fair prediction, we need to learn disentangled representation E and C to block the path from S to Y conditioned on G, and only use the content information C to predict Y, i.e., P(Y|C). As C determines Y, it contains all the label information to predict Y. Meanwhile, observing E and C can block the conditional path from S to Y, i.e., P(Y,S|E,C,G)=P(Y|C,E,G)P(S|C,E,G). Note that observing C blocks the path from E to Y and the path from G to Y. Hence, we have P(Y|C,E,G) = P(Y|C). Meanwhile, observing E blocks the path from S to G and the path from S to C, thus, we have P(S|C,E,G)=P(S|E). This gives us P(Y,S|E,C,G)=P(Y|C)P(S|E) The above equation shows that observing E and C would make Y and S independent and P(Y|C) is unbiased. Hence, if we can learn disentangled latent representation E and C, we would be able to use C for fair classification. However, the main challenge is we do not have ground-truth E and C to help us train a model that can learn disentangled representation. With a slight abuse of notation, we also use C to denote the learned content representation and use E to denote the learned environment representation. Fortunately, we can use the SCM to derive several properties of the optimal representation, which would be used to help learn the latent representation of C and E: * Invariance: C E. This property can be understood in two perspectives. That is, the content representations should be independent to the sensitive attributes and the environment representation induced by the sensitive attribute. Meanwhile, the environment representations should be independent to the labels and the content representation which is informative to the labels. * Sufficiency: (C, E) → G. The combined representation can used to reconstruct the observed graph. * Informativeness: C → Y. The content representations should have the capacity to give accurate predictions of labels Y. 
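The conditional dependence discussed above, namely that S and Y become dependent once the collider G is observed, can be seen in a small simulation that follows the SCM: S and Y are drawn independently, E depends on S, C depends on Y, and a scalar stand-in for G is built from E and C. The snippet is only a hedged illustration of the collider effect; the variable names mirror the SCM, but the particular distributions are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

S = rng.binomial(1, 0.5, n)           # sensitive attribute
Y = rng.binomial(1, 0.5, n)           # label, independent of S
E = S + rng.normal(0.0, 1.0, n)       # environment feature, caused by S
C = Y + rng.normal(0.0, 1.0, n)       # content feature, caused by Y
G = E + C + rng.normal(0.0, 0.1, n)   # scalar stand-in for the observed ego-graph

# Unconditionally, S and Y are (nearly) uncorrelated ...
print("corr(S, Y)       =", np.corrcoef(S, Y)[0, 1])

# ... but conditioning on the collider G induces a spurious correlation.
mask = np.abs(G - 1.0) < 0.1          # condition on G lying in a narrow bin
print("corr(S, Y | G~1) =", np.corrcoef(S[mask], Y[mask])[0, 1])
```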
§ METHODOLOGY The causal view suggests us to learn disentangled representation 𝐜 and 𝐞 for node v, with 𝐜 capturing the content information that is useful for label prediction and irrelevant to sensitive attributes, and 𝐞 capturing the environment information depends on sensitive attribute only. With the disentanglement, 𝐜 can be used to give fair predictions. However, how to effectively disentangle 𝐜 and 𝐞 remains a question given that we do not have ground-truth of disentangled representation. Intuitively, for a node v with sensitive attribute s, its content representation 𝐜 should remain the same when the sensitive attribute is flipped to 1-s while its environment representation 𝐞 should change correspondingly. Hence, if we know the counterfactual of node v, we will be able to utilize the counterfactual to help learn disentangled representation for fair classification; while the counterfactual is not observed. To address the challenges, we propose a novel framework as shown in Figure <ref> (a), which is composed of: (i) a GNN encoder that takes ego-graph 𝒢 of node v to learn disentangled representation 𝐜 and 𝐞; (ii) the counterfactual augmentation module, which aims to discover counterfactual for each factual observation and utilize the counterfactual to help learn disentangled representation; (iii) a fair classifier which takes 𝐜 as input for fair classification. Next, we give the details of each component. §.§ Disentangled Representation Learning For each node v_i, the content representation 𝐜_i should capture the important node attribute and neighborhood information for predicting the label while the environment representation 𝐞_i should capture all important information relevant to sensitive attribute. As GNNs have shown great ability in modeling graph structured data, we adopt GNNs to learn 𝐜_i and 𝐞_i. Instead of adopting two GNNs to learn 𝐜_i and 𝐞_i separately, to reduce the number of parameters, we adopt one GNN to learn 𝐜_i and 𝐞_i. We empirically found that using two GNNs and one GNN have similar performance due to constraints we designed to disentangle 𝐜_i and 𝐞_i, which will be introduced later. Specifically, the GNN f_θ parameterized by θ takes 𝒢 as input and learns representation as: [𝐂, 𝐄] = 𝐇 = f_θ(𝐀, 𝐗) where 𝐇∈ℝ^N × d is the learned representation matrix with the i-th row, i.e., 𝐡_i, as the representation of node v_i. We treat the first d_c columns as the content representation matrix 𝐂 and use the next d_e columns as the environment representation matrix 𝐄. Note that d = d_c+d_e. In our implementation, we set d_c = d_e. 𝐂∈ℝ^N × d_c is the content representation matrix with the i-th row, i.e., 𝐜_i, as the content representation of node v_i. Similarly, 𝐄∈ℝ^N × d_e is the environment representation matrix with the i-the row, i.e., 𝐞_i as the environment representation of node v_i. f_θ is flexible to be various GNNs such as GCN <cit.> and GraphSAGE <cit.>. To make sure 𝐜_i captures the content information for fair label prediction, and 𝐞_i and 𝐜_i are disentangled, based on the causal analysis in Section <ref>, we add following constraints: Informativeness Constraint. First, the content representation 𝐜_i should be informative to the downstream tasks, i.e., C → Y. Hence, for node v_i, we should be able to get accurate label prediction from 𝐜_i. Hence, we introduce a classifier f_ϕ with model parameter ϕ. 
It takes 𝐜_i as input and predicts the class distribution of v_i as: 𝐲̂_i=f_ϕ(𝐜_i), The loss function for training the classifier is given as: ℒ_pred = 1/|𝒱_L|∑_v_i ∈𝒱_Lℓ(𝐲̂_i, 𝐲_i) where 𝐲_i is the one-hot encoding of ground-truth label of v_i. ℓ(𝐲̂_i, 𝐲_i) denotes the cross entropy between 𝐲̂_i and 𝐲_i. Sufficiency Constraint. As shown in our causal view, the representation (𝐜_i and 𝐞_i) should be sufficient to reconstruct the observed factual graph 𝒢_i. In disentangled representation learning research, the reconstruction supervision is usually adopted to guide the learning process <cit.>. However, existing graph counterfactual fairness approaches <cit.> fail to provide supervision to preserve graph information in the representations. Thus, they put their models at a risk of being stuck in trivial solutions to merely get spurious information in the representations, which contradicts the SCM and is not sufficient to reconstruct the observed graph 𝒢_i . In our model, we formalize the sufficiency constraint as a reconstruction of the graph structure. Specifically, for a pair of nodes (v_i,v_j), we predict the link existence probability as p_ij = σ(𝐡_i 𝐡_j^T), where 𝐡_i = [𝐜_i, 𝐞_i] is the node representation of node v_i. The sufficiency constraint is ℒ_suf = 1/|ℰ|+|ℰ^-|∑_(v_i,v_j) ∈ℰ∪ℰ^- -e_ijlog p_ij - (1 - e_ij) log p_ij where ℰ^- is the set of sampled negative edges. e_ij=1 if node v_i and v_j are connected; otherwise e_ij=0. Orthogonal Constraint. The above model can help to learn 𝐜_i that captures graph information for label prediction, however, it doesn't guarantee that 𝐜_i doesn't contain sensitive attribute information. To make sure that 𝐜_i and 𝐞_i are disentangled, i.e., 𝐜_i doesn't contain any environment information relevant to sensitive attribute, we further impose the orthogonal constraint, i.e., 𝐜_i^T 𝐞_i = 0. §.§ Counterfactual Augmented Learning As we do not have ground-truth of 𝐜_i and 𝐞_i, we used several constraints to implicitly supervise the learning of 𝐜_i and 𝐞_i. To fully learn disentangled 𝐜_i and 𝐞_i, we propose to learn better 𝐞_i and 𝐜_i that follows the counterfactual constraints. As shown in Figure <ref> (b), generally, for a node v_i with observe the factual sensitive attribute s_i and label y_i, its content representation 𝐜_i should remain similar when the sensitive attribute is flipped to 1-s_i but its environment representation 𝐞_i should change correspondingly, which forms the counterfactual subgraph 𝒢_i^e. Similarly, when flip label y_i but keep the sensitive attribute s_i unchanged, then v_i's environment representation 𝐞_i remain the same, while its content representation should change accordingly, leading to the counterfactual subgraph 𝒢_i^c. Thus, if we know 𝒢_i^e and 𝒢_i^c, we would be able to use these counterfactual graphs together with factual graph 𝒢_i to guide the learning of 𝐜_i and 𝐞_i. However, in real-world, we can only observe factual graphs. To solve this challenge, we propose to find potential candidate counterfactuals with the observed factual graphs. The sensitive attribute and label are used to find counterfactuals in our model. Considering the fair credit scoring problem, when someone was assigned a low score, straightforward thinking is to know the results of people who have a similar background to her but of a different gender. For example, Sarah, a female, got a low credit score. Then she may ask, what if I were a male, what will my credit score be? 
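The three constraints above translate directly into loss terms. The PyTorch sketch below shows one way the prediction loss, the link-reconstruction (sufficiency) loss and the orthogonality penalty could be computed from a representation matrix H = [C | E]; it is a schematic reading of the equations, not the authors' implementation. The tensors `H`, `labels`, `train_mask`, `pos_edges`, `neg_edges` and the module `classifier` are assumed to be provided, and the trade-off weights are applied elsewhere.

```python
import torch
import torch.nn.functional as F

def fairness_losses(H, d_c, labels, train_mask, pos_edges, neg_edges, classifier):
    """H: [N, d] node representations; the first d_c columns are the content part C,
    the rest the environment part E (assumes d_c = d_e, as in the text).
    pos_edges / neg_edges: [2, E] index tensors of observed and sampled negative edges."""
    C, E = H[:, :d_c], H[:, d_c:]

    # Informativeness: cross-entropy of the classifier f_phi applied to the content part only.
    logits = classifier(C)
    loss_pred = F.cross_entropy(logits[train_mask], labels[train_mask])

    # Sufficiency: reconstruct the graph structure from the full representation h_i = [c_i, e_i],
    # using standard binary cross-entropy over observed and sampled negative edges.
    edges = torch.cat([pos_edges, neg_edges], dim=1)
    targets = torch.cat([torch.ones(pos_edges.size(1), device=H.device),
                         torch.zeros(neg_edges.size(1), device=H.device)])
    scores = (H[edges[0]] * H[edges[1]]).sum(dim=1)   # h_i . h_j, passed through a sigmoid inside the loss
    loss_suf = F.binary_cross_entropy_with_logits(scores, targets)

    # Orthogonality: push |cos(c_i, e_i)| towards zero so that C and E store different information.
    loss_orth = F.cosine_similarity(C, E, dim=1).abs().mean()

    return loss_pred, loss_suf, loss_orth
```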
This thinking inspires us to directly find counterfactuals from the observed node samples instead of performing perturbation or generating <cit.>. The advantages of selecting counterfactuals from the observed node samples are twofold: (1) It avoids making assumptions about the graph generation process with sensitive attributes. (2) It does not need additional supervision signal. Compared with GEAR <cit.>, we do not need additional supervision to guide counterfactual selection. Another problem comes: selecting counterfactuals from the original data space is also challenging due to the complexity of graph distance calculation. To get counterfactual 𝒢^e_i, we need to find some nodes which have different sensitive attribute and the same label. Similarly, we find some nodes with the same sensitive attribute and different labels as counterfactual 𝒢^c_i. The task can be formalized as: 𝒢^c_i = _𝒢_j ∈𝔾{m(𝒢_i, 𝒢_j) | y_i ≠ y_j, s_i = s_j } 𝒢^e_i = _𝒢_j ∈𝔾{m(𝒢_i, 𝒢_j) | y_i = y_j, s_i ≠ s_j } where 𝔾={𝒢_i | v_i ∈𝒱)} and m(·, ·) is a metric of measuring the distance between a pair of subgraphs. Nevertheless, the problem of computing the distance of pairs of graphs is inefficient and infeasible due to the complex graph structure and large search space <cit.>. As we already have node representations 𝐡_i = [𝐜_i,𝐞_i] that capture the graph structure and node attribute information, we propose to measure the distance in the latent space, which can greatly reduce the computation burden. Then the counterfactual graph searching problem in Eq. (<ref>) and Eq. (<ref>) is converted to the problem below: 𝐡^c_i = _𝐡_j ∈ℍ{𝐡_i - 𝐡_j_2^2 | y_i ≠ y_j, s_i = s_j } 𝐡^e_i = _𝐡_j ∈ℍ{𝐡_i - 𝐡_j_2^2 | y_i = y_j, s_i ≠ s_j } where ℍ = {𝐡_i | v_i ∈𝒱} and we use L2 distance to find counterfactuals. A problem is that we only have limited labels in the training set. So we first pre-train the backbone model. With pre-trained model, we can obtain the prediction for unlabeled nodes as pseudo-labels. The pseudo-labels work as the guidance of the counterfactual searching problem. Note that for each factual input we can also get multiple counterfactuals by selecting a set of counterfactuals in Eq. (<ref>) and Eq. (<ref>) instead of one. Thus, the counterfactual 𝒢^c_i can be naturally extended to a set of K counterfactuals {𝒢^c_k_i|k=1,...,K} and 𝒢^e_i can be extended to {𝒢^e_k_i|k=1,...,K}. We fix K to 10 in our implementation. We can utilize the counterfactuals to supervise the disentanglement of 𝐜_i and 𝐞_i. Specifically, as shown in Figure <ref> (b), counterfactual 𝒢^e_k_i shares the same content information with factual graph 𝒢_i and has different environment information. Without supervision, the factual content representation 𝐜_i and the counterfactual content representation 𝐜^e_k_i may contain both the content information and environment information. When we minimize the discrepancy of the learned representations with dis(𝐜_i,𝐜^e_k_i), f_θ will tend to merely keep the content information and squeeze the sensitive information out of learned representations. In a similar manner, we can use dis(𝐞_i, 𝐞^c_k_i) to make the environment representation 𝐞_i be invariant to the content information stored in 𝐜_i. Also, we put the orthogonal constraint here to encourage 𝐜_i and 𝐞_i to store different information in representation space. 
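Before formalizing the invariance constraint, note that the two selection problems above reduce to a masked nearest-neighbour search in representation space. The sketch below shows one way to pick the K counterfactuals 𝐡^e and 𝐡^c for every node in plain PyTorch; `H`, `labels` (pseudo-labels for unlabeled nodes) and `sens` are assumed given, and ties or candidate sets with fewer than K members are not handled.

```python
import torch

def select_counterfactuals(H, labels, sens, K=10):
    """H: [N, d] node representations; labels: [N] (pseudo-)labels; sens: [N] binary sensitive attributes.
    Returns index tensors of shape [N, K]:
      cf_e[i]: nearest nodes with the same label but a different sensitive attribute (counterfactuals h^e),
      cf_c[i]: nearest nodes with a different label but the same sensitive attribute (counterfactuals h^c)."""
    dist = torch.cdist(H, H)                                 # pairwise L2 distances, [N, N]
    same_label = labels.unsqueeze(0) == labels.unsqueeze(1)
    same_sens = sens.unsqueeze(0) == sens.unsqueeze(1)

    inf = torch.full_like(dist, float('inf'))
    dist_e = torch.where(same_label & ~same_sens, dist, inf)  # candidates for h^e
    dist_c = torch.where(~same_label & same_sens, dist, inf)  # candidates for h^c

    cf_e = dist_e.topk(K, dim=1, largest=False).indices
    cf_c = dist_c.topk(K, dim=1, largest=False).indices
    return cf_e, cf_c
```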
The invariance constraint is given as: ℒ_inv=1/|𝒱| · K∑_v_i ∈𝒱∑_k = 1^K [ dis(𝐜_i, 𝐜^e_k_i) + dis(𝐞_i, 𝐞^c_k_i) + γ K· |cos(𝐜_i, 𝐞_i)| ] where dis(·, ·) is a distance metric, such as the cosine distance and L2 distance in our implementation. |cos(·, ·)| is the absolute value of cosine similarity and we optimize this term to approximate 𝐜_i^T 𝐞_i=0. γ is the hyper-parameter to control the orthogonal constraint. §.§ Final Objective Function of Putting the disentangled representation learning module and the counterfactual selection module together, the final objective function of the proposed framework is min_θ,ϕℒ= ℒ_pred + αℒ_inv + βℒ_suf , where θ and ϕ are parameters for the GNN encoder and the prediction head, respectively. α and β are hyper-parameters controlling the invariance constraint and the sufficiency constraint. §.§ Training Algorithm The whole process of is summarized in Algorithm <ref>. Our method relies on the counterfactuals in the representation space to guide the disentanglement. However, the randomly initialized representation at the first several epochs may degrade the performance of our model. Therefore, we first pre-train a plain node representation learning model 𝐘=g_Θ,Φ(𝐀, 𝐗) only with ℒ_pred. Then we use the optimized parameters Θ^*, Φ^*=min_Θ, Φℒ_pred to initialize the parameters θ and ϕ of our model and use the aforementioned framework to get the desired disentangled representations. We do not necessarily update the counterfactuals for each epoch. We update the counterfactuals once for t epochs and t=10 in our implementation. As shown in Algorithm <ref>, we first pre-train g_Θ, Φ and use the optimized parameter to initialize f_θ and ϕ from line 1 to line 2. Then we iteratively optimize f_θ and ϕ from line 3 to line 10. In each iteration, we first perform forward propagation to get node representations in line 4. And then for each t epoch we update the selected counterfactuals once from line 5 to line 7. Afterwards, we compute the overall objective and perform backpropagation to optimize the parameters θ and ϕ from line 8 to line 9. After training, we obtain the desired fair model f_θ and f_ϕ in line 11. § EXPERIMENTS In this section, we conduct experiments to evaluate the effectiveness of the proposed method and compare it with state-of-the-art fair GNNs. Specifically, we aim to answer the following questions: * (RQ 1) How effective is the proposed for fair node classification task on both synthetic datasets and real-world datasets? * (RQ 2) Can the proposed find appropriate counterfactuals? * (RQ 3) How do the proposed modules work? How can each regularization term affect the model performance? §.§ Experiment Settings §.§.§ Real-World Datasets We conduct experiments on three widely used real-world datasets, namely German Credit <cit.>, Credit Defaulter <cit.>, Bail <cit.>. The statistics of the datasets can be found in Table <ref>. These datasets contain sensitive attributes so that they can be used to evaluate fairness performance. The details of the datasets are as follows: * German Credit <cit.>: the nodes in the dataset are clients and two nodes are connected if they have high similarity of the credit accounts. The task is to classify the credit risk level as high or low with the sensitive attribute “gender”. * Credit Defaulter <cit.>: the nodes in the dataset are used to represent the credit card users and the edges are formed based on the similarity of the purchases and payments information. 
The task is to classify the default payment method with sensitive attribute “age”. * Bail <cit.>: these datasets contain defendants released on bail during 1990-2009 as nodes. The edges between two nodes are connected based on the similarity of past criminal records and demographics. The task is to classify whether defendants are on bail or not with the sensitive attribute "race". §.§.§ Synthetic Dataset Real-world datasets do not offer ground-truth counterfactuals, prompting us to construct a synthetic dataset based on the Structural Causal Model (SCM) as depicted in Figure <ref>. The primary advantage of a synthetic dataset is that it provides us with ground-truth counterfactuals for each node, which enables us to assess the quality of the obtained counterfactuals. In our approach, we consider settings with binary sensitive attributes and binary labels. A graph with 2000 nodes is sampled in our implementation. To generate the desired counterfactuals, we maintain the same sampled value of noise variables and use consistent causal relationships for each node. The sensitive attributes and labels are sampled from two different Bernoulli distributions, with s_i ∼ℬ(p) and y_i ∼ℬ(q), respectively. This results in generating vectors 𝐬_i = [(s_i)_× N] and 𝐲_i = [(y_i)_× N]. Next, environment and content features, 𝐞_i and 𝐜_i, are sampled from normal distributions 𝐞_i ∼𝒩(𝐬_i, 𝐈) and 𝐜_i ∼𝒩(𝐲_i, 𝐈), respectively. These features are combined to form the overall latent feature 𝐳_i = [𝐜_i , 𝐞_i]. The observed feature for each node v_i, denoted as 𝐱_i, is computed as 𝐱_i = 𝐖𝐳_i + 𝐛_i, where 𝐖_ij∼𝒩(1, 1), and 𝐖∈ℝ^d_2 × 2d_1, with 𝐛_i∼𝒩(0,𝐈) ∈ℝ^d_2. The adjacency matrix 𝐀 is defined such that 𝐀_ij = 1 if σ(cos(𝐳_i, 𝐳_j) + ϵ_ij) ⩾α and i ≠ j, with ϵ_ij∼𝒩(0,1), and 𝐀_ij = 0 otherwise. Here, σ(·) denotes the Sigmoid function, and the threshold α controls the edge number. We have the freedom to set sensitive attribute probability p, label probability q, latent feature dimension 2d_1, observed feature dimension d_2, node number N, and threshold α to control the biased graph generation process. Note that in the SCM we have C → Y instead of Y → C, thus a better way is to first generate content features and then assign labels to the features. Intuitively, we argue that when using an optimal classifier to deal with content features with different means will assign the same label in our generation process. Therefore, to simplify the generation process, we use C → Y in our dataset design. The synthetic dataset comes with notable advantages. Firstly, it gives us access to exact counterfactuals. After generating the initial graph, we keep all noise variables and unrelated variables unchanged, then adjust the sensitive attribute s_i or label y_i to calculate the precise counterfactual through the same graph generation procedure. Secondly, the synthetic dataset enables adjustable bias levels, providing us control over the extent of bias in our models. This adaptability allows us to match diverse real-world situations and robustly test our model's capability to manage various bias levels. As a result, we can undertake a comprehensive and detailed evaluation of our model's fairness and prediction quality. 
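As a reference, the synthetic-graph construction described above can be written as a short NumPy sketch; N, p, q, d_1, d_2, and α correspond to the symbols in the text, while the concrete default values and details such as the non-symmetrized noise matrix are our own illustrative choices.

import numpy as np

def generate_synthetic_graph(N=2000, d1=16, d2=32, p=0.5, q=0.5, alpha=0.9, seed=0):
    rng = np.random.default_rng(seed)
    s = rng.binomial(1, p, size=N)                       # sensitive attribute s_i ~ B(p)
    y = rng.binomial(1, q, size=N)                       # label y_i ~ B(q)
    e = rng.normal(np.repeat(s[:, None], d1, axis=1), 1.0)   # e_i ~ N(s_i, I)
    c = rng.normal(np.repeat(y[:, None], d1, axis=1), 1.0)   # c_i ~ N(y_i, I)
    z = np.concatenate([c, e], axis=1)                   # latent feature z_i = [c_i, e_i]
    W = rng.normal(1.0, 1.0, size=(d2, 2 * d1))          # W_ij ~ N(1, 1)
    b = rng.normal(0.0, 1.0, size=(N, d2))               # b_i ~ N(0, I)
    X = z @ W.T + b                                      # observed features x_i = W z_i + b_i
    # adjacency: A_ij = 1 iff sigmoid(cos(z_i, z_j) + eps_ij) >= alpha and i != j
    z_norm = z / np.linalg.norm(z, axis=1, keepdims=True)
    cos = z_norm @ z_norm.T
    eps = rng.normal(0.0, 1.0, size=(N, N))              # symmetrization omitted for brevity
    A = (1.0 / (1.0 + np.exp(-(cos + eps))) >= alpha).astype(int)
    np.fill_diagonal(A, 0)
    return X, A, y, s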
§.§.§ Baselines To evaluate the effectiveness of , we include representative and state-of-the-art methods, which can be categorized into three categories: (1) plain node classification methods: GCN <cit.>, GraphSAGE <cit.> and GIN <cit.>; (2) fair node classification methods: FairGNN <cit.>, EDITS <cit.>; (3) graph counterfactual fairness methods: NIFTY <cit.> and GEAR <cit.>. Unless otherwise specified, we use GraphSAGE as the model backbone except for baseline GCN and GIN. We use SAGE to denote GraphSAGE. The detailed descriptions about the datasets are as follows: * GCN <cit.>: GCN is a popular spectral GNN, which adopts a localized first-order approximation of spectral graph convolutions. * GraphSAGE <cit.>: GraphSAGE is a method for inductive learning that leverages node feature information to generate unsupervised embeddings for nodes in large graphs, even if they were not included in the initial training. * GIN <cit.>: Graph Isomorphism Network (GIN) is a graph-based neural network model that can capture different topological structures by injecting the node's identity into its aggregation function. * FairGNN <cit.>: FairGNN uses adversarial training to achieve fairness on graphs. It trains the learned representation via an adversary which is optimized to predict the sensitive attribute. * EDITS <cit.>: EDITS is a pre-processing method for fair graph learning. It aims to debias the input network to remove the sensitive information in the graph data. * NIFTY <cit.>: It simply performs a flipping on the sensitive attributes to get counterfactual data. It regularizes the model to be invariant to both factual and counterfactual data samples. * GEAR <cit.>: GEAR is a method for counterfactual fairness on graphs. It utilizes a variational auto-encoder to synthesize counterfactual samples to achieve counterfactual fairness for graphs. §.§.§ Evaluation Metrics We evaluate the model performance from three perspectives: classification performance, group fairness and counterfactual fairness. (i) For classification performance, we use AUC and the F1 score to measure node classification performance. (ii) For fairness, following <cit.>, we adopt two commonly used group fairness metrics, i.e., statistical parity (SP) Δ_S P and equal opportunity (EO) Δ_E O, which are computed as Δ_S P=|P(ŷ_u=1 | s=0)-P(ŷ_u=1 | s=1)| and Δ_E O=|P(ŷ_u=1 | y_u=1, s=0)-P(ŷ_u=1 | y_u=1, s=1)|. The smaller Δ_E O and Δ_E O are, the fairer the model is. (iii) For counterfactual fairness, as we have the ground-truth counterfactuals on the synthetic dataset, Following <cit.>, we use the counterfactual fairness metric δ_C F, i.e., δ_C F=|P((ŷ_i)_S ← s|𝐗, 𝐀)-P((ŷ_i)_S ← s^'|𝐗, 𝐀)|, where s, s^'∈{0,1}^N are the sensitive attributes and s^' = 1 - s. (ŷ_i)_S ← s^' is the computed ground-truth counterfactual label with the same data generation process as shown in Figure <ref>. We use subscript S ← s^' to denote counterfactual computation <cit.>, i.e., keeping the same data generation process and values of random noise variable. Counterfactual fairness of the graph is only measured on synthetic dataset. §.§.§ Setup For German Credit, Credit Defaulter and Bail , we follow train/valid/test split in <cit.>. For the constructed synthetic dataset, we use a 50/25/25 split for training/validation/testing data. We randomly initialize the parameters. 
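For completeness, the group-fairness and counterfactual-fairness metrics defined above can be computed as in the following sketch, which assumes binary predictions, labels, and sensitive attributes stored as NumPy arrays; the interpretation of δ_CF as the fraction of flipped predictions is one common instantiation.

import numpy as np

def statistical_parity(y_hat, s):
    # Delta_SP = | P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1) |
    return abs(y_hat[s == 0].mean() - y_hat[s == 1].mean())

def equal_opportunity(y_hat, y, s):
    # Delta_EO = | P(y_hat = 1 | y = 1, s = 0) - P(y_hat = 1 | y = 1, s = 1) |
    pos = y == 1
    return abs(y_hat[pos & (s == 0)].mean() - y_hat[pos & (s == 1)].mean())

def counterfactual_fairness(y_hat_factual, y_hat_counterfactual):
    # delta_CF: fraction of nodes whose prediction changes when s is flipped
    return np.mean(y_hat_factual != y_hat_counterfactual)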
For each combination of the hyper-parameters configuration, we run the experiments with 10 random seeds and grid search for the best configuration based on the performance on the validation set. Adam optimizer is used in our implementation <cit.>. §.§ Performance Comparison To answer RQ1, we conduct experiments on real-world datasets and synthetic dataset with comparison to baselines. §.§.§ Performance on Real-World Datasets Table <ref> shows the average performance with standard deviation of ten runs on real-world datasets. The best results are highlighted in bold and the runner-up results are underlined. From Table <ref>, we observe: * can improve the group fairness performance. Across three datasets, Table <ref> shows can make fairer predictions than other baseline methods. beats all the baselines with respect to the group fairness metrics. * There exists a trade-off between group fairness and prediction performance. Plain node classification methods, such as GCN, GraphSAGE and GIN, tend to have better prediction performance and worse group fairness performance. Fair node classification methods, including Fairness, EDITS, NIFTY, GEAR and , tend to suffer from a prediction performance drop and the group fairness performance is better. That shows the fairness methods tend to use less information. * achieves best performance on the prediction-fairness trade-off. We use the average rank of two prediction metrics and two group fairness metrics to know the performance of the trade-off. Our model ranks 1.75 and the runner-up model ranks 3.83. Our model outperforms the state-of-the-art node representation learning methods, which shows the effectiveness of our model. * Graph counterfactual fairness methods, such as NIFTY, GEAR and , achieved better performance than other baselines. Correlation-based counterfactual notions can capture the causal relationships and help to boost the group fairness performance. §.§.§ Performance on Synthetic Dataset Figure <ref> reports the performance on the synthetic dataset. On the synthetic dataset, we have the desired ground-truth counterfactuals, which can be used to measure the performance of graph counterfactual fairness. We compare our model with plain node classification models and counterfactual fairness models. The observations are as follows: * beats all the models with respect to the prediction, group fairness and counterfactual fairness metrics. We argue that in our assumed biased generation process, our model can effectively find invariant, sufficient and informative presentations to make accurate and fair predictions. * Other graph counterfactual fairness-based methods, including NIFTY and GEAR, cannot consistently outperform other methods. These methods design their model without considering meaningful causal relationships. NIFTY simply perturbs the sensitive attribute and omits the further influence on features and graph structure. GEAR adopts an ill-supervised GraphVAE to help model the causal relationships, which may fail to generate meaningful counterfactuals. §.§ Flexibility of for Various Backbones To show the flexibility of in improving the fairness of various backbones while maintaining high classification accuracy, other than GraphSAGE, we also plug our model in GCN and GIN. Figure <ref> shows the classification performance and fairness performance on Bail and Credit. From Figure <ref>, we observe that compared with the backbones, can significantly improve the fairness with no or marginal decrease in classification performance. 
For example, on Bail dataset, the prediction performance of our model with GIN backbone drops by 0.54% on AUROC but the Δ_SP drops by 1.37% and the Δ_EO drops by 0.86%, which is an improvement on fairness performance. This demonstrates the flexibility of in benefiting various backbones. §.§ Quality of Counterfactuals To answer RQ2, we compare the counterfactuals obtained by with ground-truth counterfactuals to investigate whether we can obtain the desired counterfactuals. We conduct experiments on the synthetic dataset which has ground-truth counterfactuals. We first use to obtain counterfactuals. To measure the discrepancy of the obtained counterfactuals with respect to the feature and structure in the ego graph, we compare the learned counterfactual representations and the ground-truth counterfactual representations. We compare our model with two graph counterfactual fairness baselines, i.e., NIFTY <cit.> and GEAR <cit.>. NIFTY simply flips the sensitive attributes to get their counterfactuals. GEAR uses a GraphVAE to generate the counterfactuals based on self-perturbation and neighbor-perturbation. Figure <ref> shows the average result for all the nodes on the synthetic dataset. We show that can find better counterfactuals than other graph counterfactual fairness models, i.e., smaller discrepancy to ground-truth counterfactuals. The result also shows there is still space for existing methods to improve the performance of getting appropriate counterfactuals. §.§ Ablation Study In our model, the pre-trained model can provide pseudo-labels for the nodes in the unlabeled set. Thus, we can select counterfactuals from the entire dataset. The model trained from scratch, without any pre-training, is denoted as -NP. Without pseudo-labels, we can only select counterfactuals from the training set, which is denoted as the variant -NS. We evaluate the performance on synthetic dataset. The results are reported in Table <ref>. We find the model -NS performs worse than the but better than -NP. The result shows the pseudo-labels can also boost the performance of our model. Usually, the training set is small and the model may not obtain desired counterfactuals from the limited data points. Although pseudo-labels may contain some noisy information, they can also help improve the our model performance. We further delve into how the constraints impact performance. When merely setting α=0 or β=0, we denote the model as -NA and -NB, respectively. The models -NA and -NB outperform SAGE, yet fall short when compared to . This indicates that both the sufficiency and invariance constraints collectively contribute to the superior performance of our model. §.§ Hyper-Parameter Sensitivity Analysis There are two important hyperparameters in , i.e., α and β. α controls the contribution of the invariance regularization ℒ_inv and β controls the contribution of the sufficiency regularization. To understand the impact of α and β on , we fix β as 5 and vary α as {0, 1,…, 18}. Similarly, we fix α as 1 and vary β as {0, 1,…, 18}. We report the result on German dataset in Figure <ref>. From Figure <ref>, we have the following observations: there exists a trade-off between prediction performance and fairness performance. The trend is that when we increase the α and β, we will get worse prediction performance and better fairness performance. We argue that without these regularizations, the model may rely on sensitive information to make the prediction. 
When we tighten the regularization (i.e., increase α and β), the model better disentangles the content representations and squeezes the sensitive information out. Thus, the prediction performance gets worse while the fairness performance gets better. § CONCLUSION AND FUTURE WORK In this paper, we study the problem of learning fair node representations with GNNs. We first formalize the biased graph generation process with an SCM. Motivated by causal theory, we propose a novel framework to learn fair node representations which meet the graph counterfactual fairness criteria and can achieve good prediction-fairness performance. Specifically, we align the model design with the data generation process and convert the problem to learning content representations in the causal graph. We derive several properties of the optimal content representation from the causal graph, i.e., invariance, sufficiency and informativeness. To get appropriate supervision for the invariance regularization, we design a counterfactual selection module. Extensive experiments demonstrate that our framework can achieve state-of-the-art performance on both synthetic and real-world datasets with respect to the prediction-fairness trade-off. There are several interesting directions worth exploring. First, in this paper we mainly focus on binary classification and binary sensitive attributes; we will extend the work to multi-class classification and multi-category sensitive attributes. Second, in this paper we focus on static graphs, while there are many different kinds of graphs in the real world. Thus, we aim to extend our model to more complex graph learning settings, such as dynamic graphs and multi-value sensitive attributes and labels.
FILM: How can Few-Shot Image Classification Benefit from Pre-Trained Language Models?
Zihao Jiang, Yunkai Dang, Dong Pang, Huishuai Zhang, Weiran Huang
Few-shot learning aims to train models that can be generalized to novel classes with only a few samples. Recently, a line of works has been proposed to enhance few-shot learning with accessible semantic information from class names. However, these works focus on improving existing modules such as visual prototypes and feature extractors of the standard few-shot learning framework. This limits the full potential use of semantic information. In this paper, we propose a novel few-shot learning framework that uses pre-trained language models based on contrastive learning. To address the challenge of alignment between visual features and textual embeddings obtained from a text-based pre-trained language model, we carefully design the textual branch of our framework and introduce a metric module to generalize the cosine similarity. For better transferability, we let the metric module adapt to different few-shot tasks and adopt MAML to train the model via bi-level optimization. Moreover, we conduct extensive experiments on multiple benchmarks to demonstrate the effectiveness of our method. § INTRODUCTION Deep neural networks <cit.> have achieved remarkable success in many fields. However, training deep neural networks requires a large amount of labeled data, which can be expensive and time-consuming to obtain. For instance, in medical imaging, obtaining labeled data requires expert radiologists to annotate images. This limits the application of deep learning models in real-world scenarios. In contrast, humans possess the ability to recognize and classify objects of unseen categories with only a few examples. This highlights the potential value of few-shot learning <cit.>, where models are trained on base classes and can be generalized well to novel classes with a limited number of samples. Previous works mainly focus on image classification tasks, and most of them adopt the meta-learning paradigm <cit.>. Recent works consider leveraging additional information from other modalities such as text to enhance the performance of few-shot learning. In particular, some methods <cit.> adopt static word embedding models (e.g., GloVe <cit.>) to extract textual representations of class names and use them to adjust visual prototypes or classifiers. With the appearance of general language models such as BERT <cit.> and GPT <cit.>, another line of works <cit.> adopts public pre-trained language models (PLMs) to extract more comprehensive semantic information from class names. However, these works still focus on improving existing modules of the standard few-shot learning framework (e.g., visual prototypes and feature extractors), which confines the full utilization of powerful PLMs in few-shot learning. Inspired by the success of vision-language models <cit.> trained by contrastive learning, we explore the idea of aligning visual features and textual embeddings for few-shot image classification in this paper, where textual embeddings are extracted by a public PLM from class names following the setting of <cit.>. However, there are two main factors making this alignment challenging.
Firstly, unlike vision-language models that have sufficient pairs of image and textual descriptions available for model training, we only have the class name of each image instead of a rich description. Secondly, in contrast to vision-language models where both visual and textual encoders are learnable to align embeddings, our textual encoder inherits from a puublic PLM trained on uni-modal text data. This leads to totally different structures of textual embedding spaces and thus makes the alignment between visual and textual features difficult. For instance, if we directly align visual features and textual embeddings, the probability[Here probabilities mean the elements outputted by softmax function.] of a sample image being assigned to its true label is extremely low (see blue bars in Figure <ref>). This indicates that the visual feature of an image is hard to approach the corresponding text embedding of its true label. In this paper, we propose a novel framework (Figure <ref>) to boost few-shot learning by means of public PLMs. To bridge the gap between visual and textual modalities, we carefully design a textual branch of our framework and introduce a metric module to measure the similarity between visual and textual embeddings. The textual branch first incorporates class labels into our hand-crafted prompt template containing a [MASK] token and then inputs the filled sentence to a PLM. The PLM transforms the input sentence into a hidden vector sequence and the final textual embedding is extracted from the vector corresponding to the [MASK] token. Meanwhile, the visual feature is obtained by a standard visual encoder. After that, we compute the similarities between visual features and textual embeddings through the proposed metric module, and send them into the contrastive loss. For better transferability on novel classes, we let the metric module adapt to different few-shot tasks and adopt Model-Agnostic Meta-Learning (MAML) <cit.> to train the model via bi-level optimization. Moreover, we conduct extensive experiments on multiple benchmarks to demonstrate that the proposed method significantly outperforms the state-of-the-art few-shot learning methods based on PLMs. The main contributions of this paper can be summarized as follows. * We propose a novel few-shot learning framework that leverages semantic information extracted by a pre-trained language model based on contrastive learning. * We carefully design a textual branch of the framework and introduce a metric module to generalize the similarity measure. * The metric module is designed to be adaptive to different few-shot tasks for better transferability, and MAML is adopted to train the model via bi-level optimization. * We conduct extensive experiments on multiple benchmarks with different domains to demonstrate the effectiveness of our method. § RELATED WORK Few-shot Learning. In general, few-shot learning methods are mainly divided into two categories: metric-based methods and optimization-based methods. Metric-based methods aim to map samples into an appropriate embedding space on the basis of certain distance metrics. Most previous methods use task-agnostic distance metrics, e.g., cosine similarity distance <cit.>, Euclidean distance <cit.>, CNN relation module <cit.>, and Earth Mover’s Distance <cit.>. Additionally, several methods <cit.> involve learning task-specific distance metrics, which can be adjusted for different tasks. 
Optimization-based methods <cit.> aims at learning optimal initial model parameters on base classes and quickly fine-tune them on novel classes with a few support examples. Our paper generalizes the similarity measure by the proposed metric module, and uses MAML <cit.> to train the model. Few-shot Learning with Semantic Information. Recent works on few-shot learning start to utilize semantic information from class labels to enhance few-shot learning. AM3 <cit.> proposes an adaptive modality mixture mechanism to model prototype representation as a combination of visual features and language semantic features. KTN <cit.> learns classifiers by fusing visual information and knowledge information acquired from a knowledge graph and word embeddings with a semantic-visual mapping network based on Graph Convolutional Network <cit.>. VS-Alignment <cit.> introduces a contrastive alignment between visual and semantic features as an additional objective. Semantic Prompt <cit.> considers semantic information as prompts to tune the ViT <cit.> feature extractor. All these methods leverage semantic features as auxiliary information to adjust visual prototypes, classifiers, or feature extractors. In contrast, we propose a new few-shot learning framework to directly align visual and textual embeddings via contrastive learning. Contrastive Learning. Contrastive learning is a popular method in self-supervised representation learning. It learns representations by pulling positive samples close and driving negative samples away from them in the latent embedding space with a contrastive loss. A set of previous works have shown the excellent performance of contrastive learning in computer vision <cit.> and natural language processing <cit.> tasks. Furthermore, recent works <cit.> apply contrastive learning to multi-modal settings by aligning image-text pairs in the embedding space. Our work introduces contrastive learning to few-shot learning, and proposes a learnable metric module to make aligning visual features and textual embeddings possible. § PROBLEM DEFINITION Few-shot learning involves two disjoint class sets: a base class set 𝒞_base classes and a novel class set 𝒞_novel classes. Sufficient labeled samples are provided for each base class, while abundant unlabeled samples and only a few labeled samples are provided for each novel class. Few-shot learning targets at classifying unlabeled samples from novel classes through training on all the given labeled samples. Previous works usually formulate the few-shot learning problem as N-way K-shot classification, which denotes a classification task among N classes with K labeled samples available for each class. In addition, given a fixed pre-trained language model, we use bimodal contrastive learning to leverage the semantic information extracted by it. Concretely, for each embedded sample image z and N embedded class labels {t_1,t_2,…,t_N} in a N-way K-shot classification task, contrastive learning adjusts the embedding space through the following widely-used contrastive loss <cit.> (using cosine similarity as an example): ℒ = -logexp(z· t_+/τ)/∑^N_i=1exp(z· t_i/τ), where t_+ is the embedded true label of the sample image and τ is a temperature hyper-parameter. Meta-learning paradigm <cit.> is commonly used to solve the few-shot learning problem, which trains and evaluates the model with the episodic mechanism. The standard meta-learning paradigm contains two stages: meta-training and meta-testing. 
In each episode of the meta-training stage, a N-way K-shot M-query classification task 𝒯=(𝒮,𝒬) is constructed with samples from the base classes. We first randomly select N classes from 𝒞_base as 𝒞_𝒯. For each class, we randomly sample K support images and M query images. Then we form the support set 𝒮={(x_i,y_i)|y_i∈𝒞_𝒯,i=1,2,…,N× K} and the query set 𝒬={(x_i,y_i)|y_i∈𝒞_𝒯,i=1,2,…,N× M} with the support images and the query images respectively, where x_i is the i-th sample image and y_i is the class label of x_i. To learn an appropriate embedding space, bi-level optimization is performed on 𝒮 and 𝒬 respectively, utilizing a contrastive loss. In each episode of the meta-testing stage, a classification task is built on the novel classes in a similar way. The support set is formed with a few label samples, while the query set is sampled from the unlabeled samples. After adapting to the novel classes by minimizing the contrastive loss on the support set, the model is used to predict class labels for the sample images in the query set. § METHOD We introduce our method of Few-shot Image classification with pre-trained Language Models (FILM) in this section. The overall framework is illustrated in Figure <ref>, which consists of three modules: a textual branch, a visual branch, and a metric module. For each episode, the textual branch extracts textual embeddings from class labels, while the visual branch extracts visual embeddings from support and query images. Moreover, the metric module computes the similarity score matrix between textual and visual embeddings from these two branches. In addition, we utilize a training strategy based on MAML algorithm to train the model via bi-level optimization. §.§ Textual Branch In this section, we explain how we design the textual branch to get textual embeddings from class labels. The textual branch comprises a text-based pre-trained language model (PLM) and a language model head. During meta-training and meta-testing, the PLM is frozen while the language model head is tuned for the downstream classification tasks. In our study, we mainly use the masked language model as the PLM. Notice that PLMs mainly take sentences rather than single words or phrases as input during the pre-training stage. Therefore, to bridge the gap between the pre-training and downstream tasks, for each class label y_i, we insert it into a hand-crafted prompt template and get y_i^prompt as the input of the PLM. The token sequence of y_i^prompt is first converted to a token embedding sequence through a token vocabulary. The input embedding sequence is calculated by summing the corresponding token embeddings and positional embeddings. Then PLM transforms the input embeddings into a sequence of hidden vectors. Two straightforward ways to get the textual embedding from the output hidden vector sequence are respectively: (1) taking the average vector of the output vector sequence as the textual embedding; (2) taking the hidden vector of the [CLS] token as the textual embedding. To make textual embeddings more relevant to the visual descriptive information of the corresponding categories, we design a prompt template with one [MASK] token as y_i^prompt = [CLS] The appearance ofy_i is [MASK] . [SEP] and extract the textual embedding by sending the hidden vector of the [MASK] token to the language model head. 
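To make the textual branch concrete, the prompt construction and [MASK]-based extraction can be sketched with the Hugging Face transformers library as follows (assuming a RoBERTa masked language model, as used in the experiments); the 640-dimensional projection and the exact handling of the language model head are illustrative.

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
plm = AutoModel.from_pretrained("roberta-base")           # frozen pre-trained language model
plm.eval()
lm_head = torch.nn.Linear(plm.config.hidden_size, 640)    # tunable language model head

def textual_embedding(class_name):
    # "The appearance of {label} is <mask> ." -- the [MASK] position carries the embedding
    prompt = f"The appearance of {class_name} is {tokenizer.mask_token} ."
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        hidden = plm(**inputs).last_hidden_state           # (1, L, hidden_size)
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    return lm_head(hidden[0, mask_pos])                    # (640,) textual embedding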
In this way, the extraction of textual embeddings is treated as a masked language modeling task, which makes downstream classification tasks more consistent with the pre-training of the PLM. The comparison among different designs of textual branches will be shown in Table <ref> later. §.§ Metric Module Inspired by vision-language models trained by contrastive learning, we explore aligning visual and textual modalities for few-shot image classification. However, directly aligning visual features and textual embeddings extracted by text-based PLM with cosine similarity has a poor effect in few-shot setting. The blue bars in Figure <ref> show that the probability of a sample image being assigned to its true label is extremely low if we directly align the visual and textual embeddings. In this paper, we introduce a metric module to generalize the similarity measure between visual features and textual embeddings. Moreover, we let the metric module adapt to different few-shot tasks for better transferability on novel classes. Specifically, we define f_θ_I as the image encoder with learnable parameters θ_I to transform each sample image x_i into a feature map z_i = f_θ_I(x_i). Textual branch f_θ_T with learnable parameters θ_T is used to extract the textual embedding t_y_i = f_θ_T(y_i) from each class label y_i. We generalize the similarity measure between visual embeddings z and textual embeddings t as a learnable function M(z, t) called metric module, whose parameters are denoted as θ_M. For example, the metric module could be a bilinear function M(z, t)=z^⊤θ_Mt (degenerating to the cosine similarity if θ_M is the identity matrix) or a neural network, e.g., M(z, t)=MLP_θ_M([z,t]). During meta-testing, we first fine-tune the task-specific parameters θ_M on the support set 𝒮. Then we use the similarity score matrix computed by the metric module as a reference to infer labels for sample images in the query set 𝒬. As is shown in Figure <ref>, the correct classification probabilities of our method are significantly higher than that of direct alignment, which means that our metric module can effectively align the visual features and textual embeddings. §.§ Loss Function We formulate the learning objective as a contrastive loss (Eq (<ref>)), which pulls together images and corresponding class labels while pushing away unmatched pairs in the embedding space. Moreover, we aim to train a model to maximize the similarity between visual features and textual embeddings for matching (image, text) pairs while reducing the similarity for non-matching pairs. Specifically, for a classification task 𝒯=(𝒮,𝒬), we calculate the contrastive loss on the support set 𝒮 and the query set 𝒬 respectively. On the support set, the contrastive loss ℒ_𝒮 is computed with all the support samples, which has a formulation as: ℒ_𝒮 = -1/|𝒮|∑_x_i∈𝒮logexp( M(z_i, t_y_i) /τ )/∑_c∈𝒞_𝒯exp(M(z_i, t_c)/τ ), where z_i is the visual embedding of the i^th support image x_i, t_y_i is the textual embedding of the true label y_i corresponding to x_i, t_c is the textual embedding of the class label c, and M(·, ·) is the similarity measure. On the query set, the contrastive loss ℒ_𝒬 has almost the same formulation as ℒ_𝒮, except it is computed with all the query samples of 𝒬. §.§ Training Strategy In this work, we incorporate the Model-Agnostic Meta-Learning (MAML) <cit.> algorithm to train the model via bi-level optimization as our training strategy. 
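A minimal sketch of the bilinear metric module and the contrastive loss it feeds into (covering both ℒ_𝒮 and ℒ_𝒬, which differ only in the samples used) is given below; parameterizing θ_M as a single learnable matrix initialized to the identity is an assumption consistent with the bilinear form described above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearMetric(nn.Module):
    # M(z, t) = z^T theta_M t ; reduces to a plain dot product when theta_M = I
    def __init__(self, dim):
        super().__init__()
        self.theta = nn.Parameter(torch.eye(dim))

    def forward(self, z, t):
        # z: (B, d) visual embeddings, t: (N, d) textual embeddings -> (B, N) scores
        return z @ self.theta @ t.T

def contrastive_loss(metric, z, t, targets, tau=1.0):
    # cross entropy over similarity scores between images and the N class embeddings,
    # i.e. -log softmax at the true class, as in the equation for L_S / L_Q
    logits = metric(z, t) / tau
    return F.cross_entropy(logits, targets)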
Our training strategy aims to learn a good model initialization (through the outer-loop optimization), which can be quickly adapted to novel tasks given a few examples (through the inner-loop optimization). The whole algorithm for our training strategy is outlined in Algorithm <ref>. First, we randomly initialize the parameters of image encoder θ_I, language model head θ_T, and metric module θ_M. For each task instance 𝒯_j from the distribution p(𝒯), we divide 𝒯_j into a support set 𝒮_j and a query set 𝒬_j. To let the metric module task-specific, we create copies of θ_M as the adapted parameters θ_M^'. In the inner loop, we adapt the model to the current task 𝒯_j by updating θ_M^' with a number of gradient descent steps on the support set while keeping θ_I, θ_T and θ_M fixed. In the outer loop, θ_M^' are utilized to evaluate the performance of the adapted model on the query set. Specifically, we compute loss on the query set with θ_I, θ_T, θ_M^' and perform gradient descent with respect to all the model parameters θ = {θ_I, θ_T, θ_M}. The optimization objective of the meta-training stage is to learn a good initialization across tasks. For example, when using one gradient update in the inner loop, the optimization objective can be formulated as follows: min_θ∑_𝒯_j ∼ p(𝒯)ℒ_𝒬_j (θ_I, θ_T, θ_M -α∇_θ_Mℒ_𝒮_j(θ_I, θ_T, θ_M)), where ℒ_𝒮_j and ℒ_𝒬_j denote the loss functions that evaluate the performance on support and query set respectively, and α is the learning rate of the inner loop. § EXPERIMENTS §.§ Setup Datasets. We experiment on three general object recognition datasets, i.e., miniImageNet, tieredImageNet and CIFAR-FS, and one fine-grained categorization image classification dataset, i.e., CUB-200-2011. The miniImageNet dataset is proposed in <cit.> as a benchmark for few-shot image classification tasks. It contains a subset of 100 classes in the ImageNet <cit.> dataset, where 64 classes are used for training, 16 classes for validation, and 20 classes for testing. The tieredImageNet dataset <cit.>, which is also derived from the ImageNet <cit.> dataset, contains 351 classes for training, 97 classes for validation, and 160 classes for testing. The CIFAR-FS dataset is built upon CIFAR-100 <cit.> dataset. Following the recent work of <cit.>, we use the same training/validation/testing splits consisting of 64/16/20 classes respectively. CUB-200-2011 (CUB) <cit.> is a dataset for fine-grained bird species classification tasks consisting of 100/50/50 classes for training/validation/testing splits respectively. We also evaluate the domain transferability of our method by training on miniImageNet dataset and then testing on CUB dataset. Architecture. For the visual branch, following previous works <cit.>, we use ResNet-12 as our image encoder of the visual branch, which consists of four residual blocks. Each block contains three 3×3 convolutional layers and a 2×2 max-pooling layer. Similar to <cit.>, we adopt Dropblock as the regularizer and set the number of filters to (64, 160, 320, 640). We apply a global average pooling layer after the last residual block. The backbone network takes images with a spatial size of 84×84 as input and outputs 640-dim support and query visual embeddings. To extract comprehensive semantic information from class names, we adopt RoBERTa-base <cit.> as our text-based pre-trained language model, which is trained on large-scale corpora and available for public use. 
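The bi-level optimization can be illustrated with the following simplified, first-order sketch that reuses the contrastive_loss helper above; in the full algorithm the inner loop may carry higher-order gradients back to θ_M, which is omitted here for brevity, and the inner-loop learning rate and number of steps follow the implementation details reported later.

import copy
import torch

def meta_train_step(encoder, metric, episode, inner_lr=0.5, inner_steps=25):
    # episode: one N-way K-shot task with support/query images, their episode-level
    # class indices, and the N textual embeddings t (one per class name)
    support_x, support_y, query_x, query_y, t = episode
    # ---- inner loop: adapt a copy of the metric module on the support set ----
    adapted = copy.deepcopy(metric)
    inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    with torch.no_grad():
        z_s = encoder(support_x)            # keep the inner loop local to theta_M'
        t_s = t.clone()
    for _ in range(inner_steps):
        inner_opt.zero_grad()
        contrastive_loss(adapted, z_s, t_s, support_y).backward()
        inner_opt.step()
    # ---- outer objective: evaluate the adapted metric on the query set ----
    loss_q = contrastive_loss(adapted, encoder(query_x), t, query_y)
    return loss_q   # the caller backpropagates this into the encoder and LM head

An outer optimizer over the image encoder, language model head, and metric parameters would then call backward on the returned query loss and take a gradient step, as in the algorithm described above.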
The language model is a linear layer, which transforms 768-dim hidden vectors into 640-dim textual embeddings. In addition, we use the bilinear form of our metric module. Implementation Details. Following <cit.>, we first pre-train the image encoder for 200 epochs on miniImageNet, CIFAR-FS and CUB dataset, and 100 epochs on tieredImageNet dataset. Then we adopt the episodic training procedure under 5-way 1-shot and 5-shot settings. In each episode, 16 unlabeled query images per class are used for the meta-training and meta-testing phases. We use SGD optimizer with a momentum of 0.9 and a weight decay of 5e-4. The outer-loop learning rate is initialized as 1e-3 on miniImageNet, CIFAR-FS, CUB datasets and 1e-4 on tieredImageNet dataset. The inner-loop learning rate is initialized as 0.5 on four datasets. The number of inner-loop update steps is set to 25. Our model is meta-trained for 80 epochs on all datasets. The hyper-parameter τ is set as 1 for 1-shot setting, 0.2 for 5-shot setting in the inner loop, and 0.1 in the outer loop. To ensure the stability of the evaluation results, we test 1,000 episodes and report the average performance with 95% confidence intervals. We conduct experiments with an NVIDIA GeForce RTX 4090 GPU. §.§ Comparison with State-of-The-Art General Object Recognition and Fine-Grained Categorization. For fair comparisons, we compare with other methods using the same backbone or similar methods in both 5-way 1-shot and 5-way 5-shot settings on miniImageNet, tieredImageNet, CIFAR-FS and CUB datasets. As is shown in Table <ref>, our method is superior to existing methods and achieves the best performance. Compared with previous methods that leverage semantic information from class names, such as KTN <cit.>, AM3 <cit.>, TRAML <cit.> and Vs-Alignment <cit.>, our method improves 1-shot accuracy by 2.42% and 5-shot accuracy by 4.41% on miniImageNet. Furthermore, our method outperforms AM3 <cit.> by 3.88% and 4.41% at 1-shot and 5-shot settings on tieredImageNet respectively. According to Table <ref>, our method outperforms MetaOptNet <cit.> by 4.99% and 3.06% at 1-shot and 5-shot settings respectively on the CIFAR-FS dataset. In addition, on the CUB dataset, our method surpasses all the competitors, including RE-Net <cit.>, which previously achieved the best result. One observation worth highlighting is that our method not only outperforms traditional methods based on meta-learning but also is superior to methods using textual information on four benchmark datasets. These results validate the effectiveness of our proposed few-shot learning framework, which can leverage semantic information well in few-shot image classification tasks. Evaluation on Cross Domain and Larger Shots. To evaluate the cross-domain transferability of different few-shot learning methods, we train them on the source domain miniImageNet dataset and test them on the target domain CUB dataset. This setting is challenging due to the domain gap between the training and testing datasets. The results are reported in Table <ref>, showing that our method has competitive performance and obtains consistent improvements in the cross-domain setting. This indicates the transferability of our method in a situation where the meta-testing tasks are entirely different from the meta-training tasks. Furthermore, we evaluate the performance when the number of shots increases (e.g., 10-shot, 30-shot, and 50-shot) in Table <ref>. 
This shows that our method would be more effective when there are more (image, text) pairs available for novel classes. These comparisons demonstrate that our method has a more robust transferability, which means it can work well in cross-domain and larger shots scenarios. §.§ Ablation Study In this subsection, we empirically show the effectiveness of each component. To investigate the effects of our designed textual branch, we try to use different extraction methods and prompt templates. Moreover, we conduct extensive ablation studies to verify the effectiveness in the absence of the metric module and visualize our method on miniImageNet and tieredImageNet dataset. Analyze of Textual Branch. To evaluate the effect of our textual branch, we test different extraction methods (i.e., “Avg”, “[CLS]”, and “[MASK]”) and prompt templates in our framework with 5-way 1-shot setting on miniImageNet. As shown in Table <ref>, our “[MASK]” extraction method with “[CLS] The appearance ofy_i is [MASK] . [SEP]” prompt template outperforms the “[CLS]” extraction method by 5.39% and the “Avg” extraction method by 3.94%. Our proposed hand-crafted prompt template treats the extraction of textual embeddings as a masked language modeling task, which makes the textual embeddings more relevant to the visual description of object categories. The results demonstrate that the carefully designed textual branch is effective for aligning visual and textual embeddings for downstream few-shot classification tasks. Analyze of Metric Module. As is shown in Table <ref>, we design a new model without using the support set to update the parameters in the inner-loop optimization and directly compute the similarity score matrix between the query visual embeddings and textual embeddings with cosine similarity in the outer loop. The results show a significant decrease in performance on four widely-used few-shot image classification datasets, demonstrating the importance of the task-specific metric module. By leveraging the metric module to generalize the cosine similarity, our model can adaptively measure the similarity between visual features and textual embeddings for different few-shot tasks. Visualization. To qualitatively evaluate our method, we apply t-SNE <cit.> to visualize the results, which represent the visual features of five categories. We randomly sample 300 examples for each class in 5-way 5-shot setting on miniImageNet and tieredImageNet dataset. As shown in Figure <ref>, the t-SNE visualization results indicate that our method can learn more compact and separate clusters, which means that the learned representations are more discriminative. § CONCLUSION In this paper, we propose a novel few-shot learning framework with text-based pre-trained language model to boost few-shot learning. Furthermore, we introduce a task-specific metric module to enable the alignment between visual features and textual embeddings. Extensive experiments on miniImageNet, tieredImageNet and CIFAR-FS demonstrate the effectiveness of our method. unsrtnat Supplementary Materials § ADDITIONAL EXPERIMENTS Influence of Inner-Loop Temperature. To study the influence of inner-loop temperature hyper-parameter, we conduct experiments on four widely-used few-shot datasets with different inner-loop temperature values in our method. The rest settings are consistent with Section <ref>. Table <ref> shows the results in 5-way 5-shot setting. We find that 0.2 is an appropriate inner-loop temperature value for this setting on all these four datasets. 
Effect of the Number of Inner-Loop Update Steps. To find a suitable number of inner-loop update steps, we keep the experimental setup in Section <ref> and update the model 10, 15, 20, 25 and 30 steps in the inner loop respectively. Table <ref> shows the results in 5-way 5-shot setting on miniImageNet and tieredImageNet. Following the results, we set the number of inner-loop update steps to 25 in our experiments. Visualization of Grad-CAM. In Figure <ref>, we visualize the gradient-weighted class activation mapping from the pre-trained model and our method under a ResNet-12 feature extractor. It is observed that our method makes the model pay more attention to the discriminative part of the target object than the pre-trained model. For example, we find that for dog samples, the pre-trained model pays more attention to the body and background parts while our model focuses on the head part.
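For readers who wish to reproduce such visualizations, a self-contained Grad-CAM sketch is given below; it assumes a model whose forward pass returns per-class scores (for our framework, the metric-module similarities) and a chosen convolutional feature layer (e.g., the last residual block of ResNet-12), and it is not tied to any particular Grad-CAM library.

import torch
import torch.nn.functional as F

def grad_cam(model, feature_layer, image, class_idx):
    # Grad-CAM: weight the feature maps of `feature_layer` by the spatially averaged
    # gradients of the chosen class score, apply ReLU, and upsample to the input size.
    store = {}
    def fwd_hook(module, inputs, output):
        store["act"] = output
    def bwd_hook(module, grad_input, grad_output):
        store["grad"] = grad_output[0]
    h1 = feature_layer.register_forward_hook(fwd_hook)
    h2 = feature_layer.register_full_backward_hook(bwd_hook)
    logits = model(image.unsqueeze(0))           # (1, num_classes) scores
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()
    act, grad = store["act"], store["grad"]      # both (1, C, H, W)
    weights = grad.mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * act).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)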
Emotion-Guided Music Accompaniment Generation Based on Variational Autoencoder Qi Wang, Shubing Zhang , Li Zhou 1 China University of Geosciences(Wuhan) {wangqi233,zhouli}@cug.edu.cn * Corresponding author 1 This research was funded by the Chinese Regular Projects of the Humanities and Social Sciences Fund of the Ministry of Education of Grant No.16YJAZH080. August 12, 2023 =================================================================================================================================================================================================================================================================================================== Music accompaniment generation is a crucial aspect in the composition process. Deep neural networks have made significant strides in this field, but it remains a challenge for AI to effectively incorporate human emotions to create beautiful accompaniments. Existing models struggle to effectively characterize human emotions within neural network models while composing music. To address this issue, we propose the use of an easy-to-represent emotion flow model, the Valence/Arousal Curve, which allows for the compatibility of emotional information within the model through data transformation and enhances interpretability of emotional factors by utilizing a Variational Autoencoder as the model structure. Further, we used relative self-attention to maintain the structure of the music at music phrase level and to generate a richer accompaniment when combined with the rules of music theory. Our experimental results indicate that the emotional flow of the music generated by our model has a strong correlation with the input emotion, demonstrating the model's strong interpretability and control of emotional flow. The generated music is also well-structured, diverse, and dynamic, outperforming the baseline models. Music Accompaniment Generation, Emotional Flow, Variational Autoencoder, Rule constraints § INTRODUCTION Music evokes emotions in listeners, making it a powerful and intuitive medium for understanding. It also serves as a driving force for musicians to create. One important aspect of composing is incorporating emotional expression into the music. Composers use their emotions along with their technical skills and knowledge to craft their compositions. Current AI methods fall short of replicating a composer's approach. Neural networks primarily focus on combining and utilizing pre-existing knowledge of compositions, rather than incorporating emotions as high-level information. Our research aims to overcome this limitation by developing a model for generating accompaniment that takes emotions into account. The way emotions are processed impacts every aspect of music composition and, as a result, every aspect of deep neural networks <cit.>. This puts a significant emphasis on the need for network control. While autoregressive models can effectively capture key elements of music, they lack transparency and do not guarantee internal control and interpretability of musical information. Adversarial networks <cit.> can separate elements like pitch, rhythm, and texture, but they struggle with capturing emotional information and prioritize interpretability over musicality and structure. Additionally, many music generation models <cit.> primarily focus on identifying and evaluating the emotional aspects of music, rather than using them as a controllable variable. 
Therefore, instead of using subjective and limited emotional labels<cit.>, such as "relaxed" or "nervous," we have adopted Thayer's continuous emotion model<cit.>. This model takes into account two quantitative and controllable factors: valence, which measures the level of positivity or negativity, and arousal, which measures the level of excitement or calmness. This approach provides a controlled understanding of human emotions. Thus, we designed a system based on Variational Autoencoder, a controllable deep learning model, which incorporates emotional factors into the neural network's learning process. The user inputs valence and arousal trends, which are then encoded using our Valence Encoder and Arousal Encoder. The model then decodes and reconstructs this information to generate 2-bar piano accompaniments that match the emotional flow of the user's input. To compose a dynamic piece of music, we take into account two key elements: tonality<cit.>, which enhances the beat and rhythm of the music by incorporating rule-based constraints in the model's decoder, and structural organization<cit.>, which improves the storytelling aspect of the music and preserves the internal structure of the piece through a self-attention mechanism. Our data, code, and samples have been made publicly available [<https://github.com/Duoluoluos/Emotion-Guided-Music-Accompaniment-Generation>]online. Our main contributions include: * Emotion-Guided Composition, where the user inputs an Emotion-Flow Curve and the model generates music that closely matches the input emotions. * Enhanced accompaniment generation, incorporating global tonality, music phrases, and local texture for a more realistic and dynamic improvised accompaniment. * Integration of rules and deep learning, combining the creative capabilities of deep networks with the constraints of music theory to improve the transparency of the music creation process. § RELATED WORKS §.§ Accompaniment Generation Generating musical accompaniment is essentially a specific type of music generation problem<cit.>, where the melody is used as a constraint, and the accompaniment is the generated music. In the past, accompaniment generation was approached in the same way as music generation, treating pitch and temporal values as simple data. Algorithms such as Hidden Markov Chain (HMC)<cit.>, Random Forest (RF), Support Vector Machine (SVM)<cit.> <cit.>, etc. were used to approach the problem from a regression perspective. However, with the advancement of deep learning, more accurate prediction models have been developed. DeepBach<cit.>, a well-known music generation network based on RNN/LSTM<cit.> networks, represents Bach choral as voice lists with metadata lists and embedding representation to RNN for prediction. However, RNN/LSTM networks alone may not be sufficient for achieving the required level of long-range coherence in accompaniment. Hybrid models, such as the RNN-LSTM model in paper <cit.> and the RNN-RBM model in paper <cit.>, have been proposed to address this issue. The RNN-LSTM model learns different models in stages, while the RNN-RBM model uses several Restricted Boltman Machines (RBMs) and samples the output of the RBMs as input for the RNN, training local information and then makes autoregression for each information. 
In 2018, the Music Transformer <cit.> was introduced, which shifted the focus from regression problems and note prediction to natural language processing (NLP) techniques for recognizing relationships between different segments of music and evaluating the logicality of musical phrases, similar to how NLP tasks analyze relationships and coherence in language. The Transformer model uses attention mechanisms, positional coding, and other techniques to ensure long-range coherence, making it useful for various accompaniment generation tasks such as drum and piano accompaniment. The model is similar to text completion in NLP, using a priori melodic data and key information such as drum beats to "fill in" missing features. Papers <cit.> have expanded upon this data representation and the MuMidi proposed in paper <cit.> can solve harmonic problems in a long-term context by integrating pitch, time value, and tempo. However, the generation process is not always interpretable or controllable and the randomness of notes can increase over time, resulting in non-sequential music. To improve control over the music generation process, various methods have been employed. MuseBert <cit.> uses data corruption and fine-tuning during the inference learning process, while Music VAE <cit.> <cit.> uses decoupled feature representations such as pitch, chord, and texture, and employs interpolation, back-and-forth sampling, and temperature factors to increase accompaniment diversity. MuseGAN <cit.> treats music data as images and can generate multi-track accompaniments, but the structure of each track is not well-constrained by composition rules and the resulting music may not be as listenable. It is worth noting that the "hidden space" of the Variational Autoencoder(VAE) is better suited to the music generation problem than the image representation method used in the generative adversarial network. Unlike pass-through data, notes are affected by pitch, time, and velocity and have a high dimensionality of information. The VAE <cit.> normalizes this information to the hidden space for posterior estimation and reconstruction using an Encoder-Decoder architecture, which can be combined with a "learning from scratch" strategy and improve the model's ability to migrate and transfer. Therefore, we chose to use VAE as a controllable accompaniment generation model. Our model can generate well-structured accompaniments that conform to certain composition rules and follow an Emotion Flow. §.§ Emotional Flow Guided Composition Valence and Arousal are commonly used as quantitative measures of musical emotion in research. Studies<cit.> have shown that the rhythmic density of music, determined by the duration of notes in each measure, can affect a person's arousal levels independently of note velocity. Additionally, the melodic and harmonic direction of a song can affect the overall emotional direction <cit.>, referred to as valence. These factors can have a significant impact on the emotional response to a piece of music. The objective of our research is to extract features from Emotion Flow, specifically the Valence Curve and Arousal Curve <cit.>, and then systematically associate those features with the generated accompaniment. Previous research, as shown in the paper <cit.>, used dynamic programming and template-matching methods to complete the Emotion-Flow Guided Accompaniment Generation. However, these methods can ensure the audibility of the music but do not guarantee the diversity of the accompaniment. 
In contrast, deep neural networks can achieve accompaniment diversity through large-scale learning, but they struggle to maintain the structure of the music compared to methods such as template matching <cit.>. Although self-similarity <cit.> can maintain some of the structure, neural network methods have difficulty ensuring it because musical structure is strongly governed by music phrases. Therefore, decoding music segments into "phrase" units is key to maintaining musical structure. In this paper, we propose using a VAE that makes full use of the structured features of the music to improve the overall structure and diversity of the accompaniment. § METHODS §.§ Data Preparation The POP909 Dataset <cit.> comprises 909 popular music tracks, which are piano-based and have a total running time of 60 hours. Each track is stored in MIDI file format and includes three separate components - melody, bridge, and piano. The bridge and piano tracks serve as the accompaniment. Additionally, the dataset includes chord and bar annotations for each song. The POP909 dataset includes melodies that are broken down into 2-bar, 4-bar, and 6-bar fragments. The bar annotations in the dataset provide information about the structure of these fragments. The chord annotations, on the other hand, provide information about the harmony of each bar in the melodies. To handle musical structure consistently, we observed that the majority of the music is composed of 2-bar segments. We therefore carried out data cleaning, filtering the data down to 2- and 4-bar segments as well as 2- and 4-bar segments preceded by 6-bar introductory fragments. The training and testing sets were then split in an 8:2 ratio. As sample data, we selected a subset of the Nottingham Dataset <cit.>. This dataset comprises over 1000 European and American folk songs, all of which have chord annotations. For validation purposes, we chose 2-bar and 4-bar segments from the dataset. The collated data information is presented in Table <ref>. (If the user-supplied music lacks chord annotations, unlike the sample data, we use a Bi-LSTM harmonizer <cit.> to generate them.) To showcase the capabilities of our model, we chose two representative songs, one with high valence and the other with low valence, from the 20 songs we used. These songs were made available on a web page for users to evaluate and [<https://soundcloud.com/ko9isjyplxrb/sets/demos-of-emotion-guided-generated-accompaniment>]enjoy. §.§ Models §.§.§ The Conversion of Valence and Arousal The overall architecture is illustrated in Figure <ref>. The initial music data is represented by piano rolls. Each row of the piano roll matrix corresponds to one of the 128 pitch values and each column corresponds to a unit of time, with the duration of a 16th note used as the unit of time. The accompaniment tracks were merged and transformed to produce the accompaniment piano roll p_T^ACC, where T represents the duration of the accompaniment fragment. Similarly, the rhythm piano roll is represented as p_T^RHY, and the labeled chord progression is represented as c_T. Following twelve-tone equal temperament <cit.>, c_T is a 12 × T matrix, where 12 is the number of pitch classes in an octave. Valence_T = V(c̄_T), where V(·) is the valence mapping and c̄_T is the chord data after normalizing the root note of c_T to C3. This ensures that valence is always computed in the same key, and we set T = 8 here.
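To make the valence mapping concrete, the following is a minimal Python sketch of V(c̄_T), assuming the labeled chords arrive as a 12 × T binary chroma matrix together with the root pitch class of each bar; the chroma weight vector W is a placeholder for illustration, not the weights used in our system.

```python
import numpy as np

# Placeholder chroma weights (one per pitch class, relative to the chord root).
W = np.array([0.9, -0.5, 0.3, -0.7, 0.8, 0.4, -0.6, 0.9, -0.4, 0.5, -0.3, 0.2])

def normalize_to_c(chroma, roots):
    """Rotate every chord column so that its root lands on pitch class C (c_bar_T)."""
    out = np.zeros_like(chroma)
    for t in range(chroma.shape[1]):
        out[:, t] = np.roll(chroma[:, t], -roots[t])
    return out

def valence(chroma, roots):
    """Chroma-weighted valence of a T-bar chord progression after root normalization."""
    c_bar = normalize_to_c(chroma, roots)
    return float(np.sum(W @ c_bar))

# Toy example: T = 8 bars alternating a C major triad (root 0) and an A minor triad (root 9).
T = 8
c_maj = np.zeros(12); c_maj[[0, 4, 7]] = 1
a_min = np.zeros(12); a_min[[9, 0, 4]] = 1
chroma = np.stack([c_maj, a_min] * (T // 2), axis=1)
roots = [0, 9] * (T // 2)
print(valence(chroma, roots))
```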
Similarly, denoting the arousal mapping by A(·), we have Arousal_T = A(p_T^ACC + p_T^RHY). The mapping A transforms the multitrack music data into a tree structure <cit.>, whose nodes characterize the density distribution of notes more clearly. Arousal is a four-dimensional tensor of size 128 × T × 16 × 8, whose axes correspond to the pitch-duration-density grouping. Denoting the quantization operation of Arousal and Valence by | · |, |Arousal|_T = (1/(5T)) ∑_T ∑_pitch A(p_T^ACC + p_T^RHY) and |Valence|_T = ∑_T W · V(c̄_T). The value W here refers to the chroma weights of each chord and serves as a measure of the valence, or emotional assessment, of each chord. Through this quantization-transformation operation, the emotional content of the music can be translated into a format that the composition model can understand, allowing the user's desired Emotion Flow to be incorporated into the final output. §.§.§ Valence/Arousal Encoder Both the Arousal Encoder and the Valence Encoder use an LSTM as the backbone network. The Arousal Encoder extracts pitch-time-value features through a CNN with a (4,12) kernel in the convolutional layer and a (1,4) kernel in the max-pooling layer. After the features are extracted by the convolutional network, the arousal information is more concise and refined [38], so that the decoder can learn better emotional features. Both LSTM networks have a single bidirectional layer. The Arousal Encoder has an input dimension of 256 and an output dimension of 1024; the Valence Encoder has an input dimension of 32 and an output dimension of 1024. Each encoder computes the mean and variance of a probability distribution, from which a 256-dimensional latent variable z_Arousal or z_Valence is sampled. §.§.§ Decoder We introduce the Valence Decoder first. Its LSTM is roughly the same as the encoder's, except that the input is fused with z_Valence and its dimension becomes 292. The reconstructed valence is estimated from the computed mean and variance and fed back to the LSTM as a token, completing the decoding part of the model. The probability distribution of valence is a 12-dimensional Bernoulli distribution. The PianoTree Decoder, on the other hand, follows the design of <cit.>, whose model we use as a baseline. The original model is divided into two main stages: time-domain decoding and decoding the notes for each pitch. Since different notes may be concatenated into fragments and exhibit autocorrelation in the musical structure, forming music phrases, we perform a note summary operation after the time-domain decoding and introduce a self-attention mechanism, which we explain in detail in the next subsection. The first PianoTree-LSTM in Figure <ref> decodes the 512-dimensional latent-space vectors, which encode the hidden-space changes of the notes; an LSTM (hidden size = 1024) summarizes these changes along the temporal dimension, and we call the summarized result the note summary, of size (1,512). After relative self-attention is applied, the result is decoded along the pitch dimension by the second LSTM and mapped to 128 pitches through a fully connected layer.
For each note (or class of notes), the corresponding time values are then decoded by an LSTM (hidden size = 16) to obtain the reconstructed emotional stream/music sequence. §.§.§ Relative Self-Attention In order to maintain the structural organization of the music sequences, we introduce a self-attention mechanism. The inspiration comes from the paper <cit.>, which compares a template music fragment with a training music fragment and obtains the correlation of the relative positions in the two sequences by one-dimensional/two-dimensional convolution; the resulting correlation data is called self-similarity. In this paper, self-similarity is not computed by convolution, since we have no template fragments, but from the note summary, a tensor that stacks pitch and emotion information in the time domain. Since self-attention captures the autocorrelation inside its input through soft addressing, it can obtain the autocorrelation of the note summary in the time domain and thus maintain the structural organization of the music fragments as estimated "music phrases". Since there is some time invariance in the relative positions of the sequences <cit.>, we also introduce offsets. Each fragment is not very informative, and to optimize the efficiency of the algorithm, we use a single-head attention mechanism. The query, key, and value tensors of relative attention are written as Q, K, and V, respectively. S^rel represents the offset matrix, with matrix elements r = NS_k - NS_q, where NS_q and NS_k are the position codes of the note-summary query and key; the formula for relative self-attention (abbreviated as Att) is then Att = Softmax((QK^T+S^rel)/√(D))V. As for the parameter settings, we set the weight dimension of Q to 1024 and the weight dimension of K, V to D=128. §.§.§ Rules-based Constraint Two rules are very common in the realm of improvised accompaniment, enriching the player's accompaniment performance by changing tonality. The first principle is to add variety to the chords by making small adjustments to the chord voicing. The second technique is to add a sense of layering between the different voices by shifting the tonality of the chords significantly at the same time. Either way, chord arrangement is the most important element. If we want to use these rules in our accompaniment generator, we need to grasp the key information and model it. Whether it is chord transposition or pitch shifting, both essentially shift pitch. So instead of inferring it from the model, we can use the chord arrangement and transposition information directly to shift the pitch and change the generated accompaniment. To obtain the chord transposition information, a mathematical evaluation is required. We denote the originally labeled chords of the input melody by C^pre and the chords generated by PianoTree decoding by C^gene; each chord is represented in twelve-tone equal temperament as a 12-dimensional chroma vector. The two are compared and the maximum similarity is used as the criterion for transposition. Denoting the current bar number by i, the pitch shift Δ C is given by: Δ C = argmax( C^pre_i (C^gene_i)^T / (|| C^pre_i || · || C^gene_i ||) ), where ^T here denotes vector transposition (not the fragment duration), so the numerator is the inner product of the two chroma vectors. Each bar thus has a best chord-transposition choice, and a number of bars with large Δ C are selected for pitch shifting, so that tonality adjustment is achieved by means of rules and mathematical modeling.
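The following is a minimal numpy sketch of the single-head relative self-attention above, Att = Softmax((QK^T + S^rel)/√D)V. The projection weights, the relative-offset embedding, and the toy dimensions are illustrative placeholders rather than the trained parameters described in this section.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relative_self_attention(ns, d=128, seed=0):
    """ns: note summary of shape (T, d_model). Returns a (T, d) attended tensor."""
    rng = np.random.default_rng(seed)
    T, d_model = ns.shape
    Wq = rng.normal(size=(d_model, d))   # placeholder projection weights
    Wk = rng.normal(size=(d_model, d))
    Wv = rng.normal(size=(d_model, d))
    Q, K, V = ns @ Wq, ns @ Wk, ns @ Wv
    # S_rel is built from the relative offsets r = NS_k - NS_q via a
    # (here random) lookup table standing in for a learned embedding.
    rel_emb = rng.normal(size=(2 * T - 1,))
    offsets = np.arange(T)[None, :] - np.arange(T)[:, None]
    S_rel = rel_emb[offsets + T - 1]                     # (T, T) offset bias
    return softmax((Q @ K.T + S_rel) / np.sqrt(d)) @ V

note_summary = np.random.default_rng(1).normal(size=(8, 512))
print(relative_self_attention(note_summary).shape)      # (8, 128)
```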
§.§ Training Objective The training objective follows the standard VAE formulation <cit.>: the loss function consists of a regularization term and a reconstruction term. To shorten the formulation, we abbreviate Valence and Arousal as V and A. For the regularization term, we denote the Gaussian priors of valence and arousal by p(z_V) and p(z_A), and the posterior distributions produced by the encoders by p(z_V|V) and p(z_A|A), respectively. The regularization loss between the two probability distributions is measured by the KL divergence [40], denoted here as KL(·). For the reconstruction term, we denote the probability distribution of the Valence Decoder output by p(V|z_V) and that of the PianoTree Decoder output by p(A|z_A, z_V); the reconstruction loss is the negative expected log-probability. In summary, the loss function Loss(V, A) of the model is Loss(V, A) = -E_p[log p(V|z_V) + log p(A|z_V,z_A)] + KL(p(z_V|V) || p(z_V)) + KL(p(z_A|A) || p(z_A)) § EXPERIMENTS §.§ Training Details of Our Proposed Model The experiment was run on a host with a 12th Gen Intel(R) Core(TM) i7-12700H and a single NVIDIA GeForce RTX3060 6GB. In Section <ref> we described the dataset; the MIDI files are converted into a piano-roll representation and a 12-dimensional chord representation, respectively. We set the batch size to 128, and the model is trained with a time length of 32 for each arousal fragment and 8 for each valence fragment. When training our VAE model, we train for 6 epochs with a learning rate of 10^-3, exponentially decayed by a factor of 0.999 down to a minimum of 10^-5. To speed up training and reduce the possibility of divergence, we use a teacher-forcing strategy. The teacher-forcing ratio is set to 0.6 for the Encoder-PianoTree Decoder and to 0.5 for the Encoder-Valence Decoder. §.§ Baseline Models Our baseline models are Poly-Dis and M-GPT, taken from the papers <cit.> <cit.>. Poly-Dis, the state-of-the-art disentanglement-learning-based model, decouples the characterization of harmony and texture. Unlike our rule constraint and modeling, this model adjusts the generated accompaniment by learning prior and posterior sampling. M-GPT is the state-of-the-art piano music generation model and can harmonize the melody using auto-regression principles. §.§ Emotional Flow Comparison Test The experiment aims to compare the correlation between the Emotional Flow entered by the user, used as a guide, and the Emotional Flow finally generated by the system. This is an important indicator of the effectiveness of the system's control over the input emotional factors. We evaluate the correlation by comparing the Pearson coefficients between the two sequences, following the evaluation metrics in the paper <cit.>, so as to avoid misevaluation due to misalignment of the Emotional Flow. There are two constraints on the user-input guide Emotional Flow. The first is that there cannot be more than five extreme points per flow curve, except for the start and end points. This is because the melodic data of the samples does not exceed 90s in length, and too many extreme points mean too many melodic ups and downs, which is not in accordance with the rules of music composition. The second is that each flow curve must have a certain amount of ebb and flow, because an overly flat curve makes the correlation measure uninformative.
Specifically, V̄ and Ā are the mean values of the valence and arousal curves, and the duration of the melody is set to T. (1/T)∫_0^T (V - V̄)^2 dt > 0.15 and (1/T)∫_0^T (A - Ā)^2 dt > 0.15. The data for the experiment were obtained from the "Samples" mentioned in Section <ref>, with 20 pieces of music to be validated. Four typical cases were selected to visualize the results. The criteria we chose follow the idea of controlled variables: the correlation of the Arousal Flow in the low-arousal and high-arousal cases, and the correlation of the Valence Flow in the low-valence and high-valence cases, respectively. We calculated the average valence and arousal correlation values over the 20 music samples. For statistical convenience, high arousal/valence is denoted as High Input Basis (HIB) and low arousal/valence is denoted as Low Input Basis (LIB). The visualization in Figure <ref>, a combination of a heat map and a box plot, presents a comparison of the input and output Emotional Flow. The heat map illustrates the specifics of the Emotional Flow, while the box plot offers a broader statistical comparison. The results reveal that the mean values and quartiles of the Emotional Flow are similar for both the user input and the system output. This suggests that the system-generated Emotional Flow aligns with the user input statistically, regardless of the Emotional Flow's baseline. We also compared the correlation values of the baseline model and our VAE model, as shown in Table <ref>, where the baseline model is abbreviated as Poly-Dis and our model is called VA-VAE. It can be seen that the average correlation of our model outperforms the baseline models for both valence flow and arousal flow. The correlation of our VA-VAE also outperforms the baseline model under both HIB and LIB. §.§ Subjective Musicality Test The subjective musicality assessment was mainly a professional assessment by music experts. A total of 44 junior and senior music majors and graduate students were invited. Each expert was randomly assigned two of the eight sample groups, and each group contained two pieces of music, one with the accompaniment generated by the baseline Transformer model and the other with the accompaniment generated by the VA-VAE model. The two pieces of music were not distinguished by name; in other words, the evaluation was completely blind. The music experts evaluated the level of the accompaniment from four angles: 1) whether the overall layout of the composition was appropriate; 2) whether the chords were harmoniously chosen and connected; 3) whether the rhythmic density (articulation points) was specific to the melody; and 4) whether there was a sub-melody or passing phrase that accentuated the melody. Each evaluation angle is rated quantitatively on a score of 1 to 5. The above four perspectives are abbreviated as Q1, Q2, Q3 and Q4. The experimental results are shown below, and the final score for each assessment perspective is a weighted average. From the experimental results shown in Fig <ref>, we can see that the weighted average score of our VA-VAE model is stronger than that of the baseline models in terms of the overall layout of the texture (Q1), chord selection and connection (Q2), melodic counterpoint (Q3), and melodic underscoring (Q4).
The overall arrangement of the accompaniment generated by our model is more reasonable, the chord selection and connections are more carefully considered, and the rhythmic relationship between accompaniment and melody is more organized and regular, which better supports the melody. The accompaniment generated by our model therefore has a more artistic character. Refer to Figure <ref> for a visual representation of the music's attention structure. The darker the color of the music phrases, the greater the attention weight. The different "music phrases" grouped by the attention mechanism are separated by dotted lines, showing that the music as a whole is well organized. §.§ Ablation Study For the ablation study, we abbreviated the control group without relative self-attention and Rule Constraint (RC) as CG, the model after adding relative self-attention as CG+NS, and the model after further adding the Rule Constraint as CG+NSR. The quality of the accompaniment in the ablation experiment is assessed quantitatively. Quantitative metrics such as pass/fail ratios, null ratios, etc. are less applicable in our piano improvisation accompaniment generation task. The key criteria for evaluating the accompaniment task are the texture of the accompaniment, the harmony of the accompaniment with the melody, the contribution to the melody, etc. This evaluation closely resembles that of a translation task: the harmony of the accompaniment is analogous to the fluency of a translated utterance, the texture arrangement is analogous to its wording, and the contribution to the melody is analogous to how well the translation synthesizes and preserves information. Therefore, we chose the MUTE evaluation index from the paper <cit.>, which is analogous to the F-Score in the translation task, to accurately and quantitatively assess the level of the accompaniment arrangement. In MUTE, the F1 Score (FS) evaluates the "translation accuracy" of the accompaniment over the 128 pitches and is suitable for evaluating texture, while the F1 Score Pitch Class (FSPC) normalizes the pitches to 12 pitch classes and is therefore suitable for evaluating harmony. As seen in Table <ref>, the model incorporating relative self-attention and RC outperformed the CG and CG+NS control groups in both FS and FSPC metrics. In terms of both harmony and texture, the added relative self-attention mechanism and rule constraint lead to better designed and orchestrated, higher quality accompaniment. Further, we visualized the comparison test of the rule constraints, as shown in Figure <ref>, and found that the rule constraints did indeed shift the range of the accompaniment to better harmonize with the melody. § CONCLUSION In this study, we investigate the generation of musical accompaniment guided by emotional flow. We focus on two key aspects of the problem. First, we establish a mechanism for converting emotional streams into music information data and a VAE network architecture tailored to emotion-quantization data, allowing us to control the network model with emotional factors. Second, we optimize the structural planning of accompaniment generation by introducing self-similarity and a relative self-attention mechanism. By using rule constraints, we further improve the local and global tonality of the music.
This approach of progressing from the whole to the local, layer by layer, allows us to create an automatic accompaniment system that has excellent emotional flow control and high-quality music generation. In the future, we plan to further improve our research. Currently, the accompaniment is generated by a single instrument, and we intend to extend it to include multiple instruments to create an automated orchestra. Additionally, the representation of emotional flow is not yet clear, and we will research better visualization methods to make the AI technology more user-friendly. § ACKNOWLEDGMENT This research was funded by the Regular Projects of the Humanities and Social Sciences Fund of the Ministry of Education under Grant No. 16YJAZH080. b1 Wu, Yi-Chan, and Homer H. Chen. "Emotion-flow guided music accompaniment generation." 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016. b2 Thayer, Robert E. The biopsychology of mood and arousal. Oxford University Press, Oxford, UK, 1990, ch. 2-5. b3 Boulanger-Lewandowski, Nicolas, Yoshua Bengio, and Pascal Vincent. "Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription." arXiv preprint arXiv:1206.6392 (2012). b4 Choi, Keunwoo, George Fazekas, and Mark Sandler. "Text-based LSTM networks for automatic music composition." arXiv preprint arXiv:1604.05358 (2016). b5 Dua, Mohit, et al. "An improved RNN-LSTM based novel approach for sheet music generation." Procedia Computer Science 171 (2020): 465-474. b6 Lyu, Qi, et al. "Modelling high-dimensional sequences with lstm-rtrbm: Application to polyphonic music generation." Twenty-Fourth International Joint Conference on Artificial Intelligence. 2015. b7 Yang, Li-Chia, Szu-Yu Chou, and Yi-Hsuan Yang. "MidiNet: A convolutional generative adversarial network for symbolic-domain music generation." arXiv preprint arXiv:1703.10847 (2017). b8 Luo, Jing, et al. "MG-VAE: deep Chinese folk songs generation with specific regional styles." Proceedings of the 7th Conference on Sound and Music Technology (CSMT). Springer, Singapore, 2020. b9 Lattner, Stefan, Maarten Grachten, and Gerhard Widmer. "Imposing higher-level structure in polyphonic music generation using convolutional restricted boltzmann machines and constraints." Journal of Creative Music Systems 2 (2018): 1-31. b10 Zhao, Jingwei, and Gus Xia. "AccoMontage: Accompaniment Arrangement via Phrase Selection and Style Transfer." arXiv preprint arXiv:2108.11213 (2021). b11 Hadjeres, Gaëtan, François Pachet, and Frank Nielsen. "Deepbach: a steerable model for bach chorales generation." International Conference on Machine Learning. PMLR, 2017. b12 Huang, Cheng-Zhi Anna, et al. "Music transformer." arXiv preprint arXiv:1809.04281 (2018). b13 Huang, Yu-Siang, and Yi-Hsuan Yang. "Pop music transformer: Beat-based modeling and generation of expressive pop piano compositions." Proceedings of the 28th ACM International Conference on Multimedia. 2020. b14 Wu, Shih-Lun, and Yi-Hsuan Yang. "The Jazz Transformer on the front line: Exploring the shortcomings of AI-composed music through quantitative measures." arXiv preprint arXiv:2008.01307 (2020). b15 Jin, Cong, et al. "A transformer generative adversarial network for multi-track music generation." CAAI Transactions on Intelligence Technology 7.3 (2022): 369-380. b16 Wang, Ziyu, and Gus Xia. "MuseBERT: Pre-training Music Representation for Music Understanding and Controllable Generation." ISMIR. 2021.
b17 Jiang, Junyan, et al. "Transformer VAE: A hierarchical model for structure-aware and interpretable music representation learning." ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020. b18 Tanaka, Keitaro, et al. "Pitch-Timbre Disentanglement Of Musical Instrument Sounds Based On Vae-Based Metric Learning." ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. b19 Yang, Ruihan, et al. "Deep music analogy via latent representation disentanglement." arXiv preprint arXiv:1906.03626 (2019). b20 Song, Kai, Xia Liang, and Junmin Wu. "ViT-based VQ-VAE Generative Network for Accompaniment Generation." 2021 4th International Conference on Algorithms, Computing and Artificial Intelligence. 2021. b21 Liu, Weiming. "Literature survey of multi-track music generation model based on generative confrontation network in intelligent composition." The Journal of Supercomputing (2022): 1-23. b22 Wu, Yi-Chan, and Homer H. Chen. "Emotion-flow guided music accompaniment generation." 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2016. b23 Wallis, Isaac, et al. "A rule-based generative music system controlled by desired valence and arousal." Proceedings of 8th international sound and music computing conference (SMC). 2011. b24 Morreale, Fabio, and Antonella De Angeli. "Collaborating with an autonomous agent to generate affective music." Computers in Entertainment (CIE) 14.3 (2016): 1-21. b25 Miyamoto, Kana, Hiroki Tanaka, and Satoshi Nakamura. "Online EEG-Based Emotion Prediction and Music Generation for Inducing Affective States." IEICE TRANSACTIONS on Information and Systems 105.5 (2022): 1050-1063. b26 Kaliakatsos-Papakostas, Maximos, Andreas Floros, and Michael N. Vrahatis. "Artificial intelligence methods for music generation: a review and future perspectives." Nature-Inspired Computation and Swarm Intelligence (2020): 217-245. b27 Boulesteix, Anne‐Laure, et al. "Overview of random forest methodology and practical guidance with emphasis on computational biology and bioinformatics." Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 2.6 (2012): 493-507. b28 Eddy, Sean R. "What is a hidden Markov model?." Nature biotechnology 22.10 (2004): 1315-1316. b29 Hearst, Marti A., et al. "Support vector machines." IEEE Intelligent Systems and their applications 13.4 (1998): 18-28. b30 Boulanger-Lewandowski, Nicolas, Yoshua Bengio, and Pascal Vincent. "Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription." arXiv preprint arXiv:1206.6392 (2012). b31 Dahale, Rishabh, et al. "Generating Coherent Drum Accompaniment With Fills And Improvisations." arXiv preprint arXiv:2209.00291 (2022). b32 Vaswani, Ashish, et al. "Attention is all you need." Advances in neural information processing systems 30 (2017). b33 Ren, Yi, et al. "Popmag: Pop music accompaniment generation." Proceedings of the 28th ACM International Conference on Multimedia. 2020. b34 Temperley, David. The cognition of basic musical structures. MIT press, 2004: 10-20. b35 Wang, Ziyu, et al. "Pop909: A pop-song dataset for music arrangement generation." arXiv preprint arXiv:2008.07142 (2020). b36 Medeot, Gabriele, et al. "StructureNet: Inducing Structure in Generated Melodies." ISMIR. 2018. b37 Wang, Ziyu, et al. "Pianotree vae: Structured representation learning for polyphonic music." 
arXiv preprint arXiv:2008.07118 (2020). b38 Sharif Razavian, Ali, et al. "CNN features off-the-shelf: an astounding baseline for recognition." Proceedings of the IEEE conference on computer vision and pattern recognition workshops. 2014. b39 Wang, Ziyu, et al. "Pianotree vae: Structured representation learning for polyphonic music." arXiv preprint arXiv:2008.07118 (2020). b40 An, Jinwon, and Sungzoon Cho. "Variational autoencoder based anomaly detection using reconstruction probability." Special Lecture on IE 2.1 (2015): 1-18. b41 Wang, Ziyu, et al. "Learning interpretable representation for controllable polyphonic music generation." arXiv preprint arXiv:2008.07122 (2020). b42 Lim, Hyungui, Seungyeon Rhyu, and Kyogu Lee. "Chord generation from symbolic melody using BLSTM networks." arXiv preprint arXiv:1712.01011 (2017). b43 Gover, Matan, and Oded Zewi. "Music Translation: Generating Piano Arrangements in Different Playing Levels." Ismir 2022 Hybrid Conference. 2022.
http://arxiv.org/abs/2307.05054v1
20230711070613
Resilient Information Aggregation
[ "Itai Arieli", "Ivan Geffner", "Moshe Tennenholtz" ]
econ.TH
[ "econ.TH", "cs.GT", "cs.MA" ]
In an information aggregation game, a set of senders interact with a receiver through a mediator. Each sender observes the state of the world and communicates a message to the mediator, who recommends an action to the receiver based on the messages received. The payoffs of the senders and of the receiver depend on both the state of the world and the action selected by the receiver. This setting extends the celebrated cheap talk model in two aspects: there are many senders (as opposed to just one) and there is a mediator. From a practical perspective, this setting captures platforms in which strategic experts' advice is aggregated in service of action recommendations to the user. We aim at finding an optimal mediator/platform that maximizes the users' welfare given highly resilient incentive compatibility requirements on the equilibrium selected: we want the platform to be incentive compatible for the receiver/user when selecting the recommended action, and we want it to be resilient against group deviations by the senders/experts. We provide highly positive answers to this challenge, manifested through efficient algorithms. § INTRODUCTION Expert-opinion aggregation platforms are crucial for web monetization. Indeed, in sites such as Reddit or Google, comments and reviews are aggregated as an answer to a user query about items observed or studied by others. We refer to these reviewers as experts. The platform can aggregate these experts' inputs or filter them when providing a recommendation to the user, which will later lead to a user action. An ideal platform should maximize the users' social welfare. In an economic setting, however, the different experts may have their own preferences. Needless to say, when commenting on a product or a service, we might not know if the expert prefers the user to buy the product or accept the service, or if the expert prefers otherwise. This is true even when all experts observe exactly the same characteristics of a product or service. Interestingly, while the study of economic platforms is rich <cit.>, there is no rigorous foundational and algorithmic setting for the study of aggregation and filtering of strategic experts' opinions in service of the platform users. In this paper, we initiate such a study, which we believe to be essential. This study can be viewed as complementary to work on platform incentives <cit.>, issues of dishonesty <cit.>, and issues of ranking/filtering <cit.>, by putting these ingredients in a concrete foundational economic setting dealing with recommendations based on inputs from strategic experts. The model we offer extends the classical cheap talk model in two fundamental directions. First, by having several strategic senders (experts) rather than only one; second, by introducing a platform that acts as a mediator in an information design setting. Our work is related to the literature on information design that studies optimal information disclosure policies for informed players. The two leading models of information design are cheap talk <cit.> and Bayesian persuasion <cit.>.
The main distinction between these models is the underlying assumption that, in the Bayesian persuasion model, the sender has commitment power in the way she discloses the information, while in the cheap talk model she does not. Bayesian persuasion models emphasize commitment power, and while it may hold in some real-world situations, it is often considered a strong assumption. In addition, in Bayesian persuasion, the informed agent (the sender) is also the one who designs the information revelation policy. In practice, however, information revelation can be determined by other external or legal constraints. A commerce platform, for example, determines what information about a product is revealed to a potential customer based on information submitted by different suppliers. In our model there is a finite state space, several informed players (senders), an uninformed player (the receiver) that determines the outcome of the game by playing a binary action from the set A := {0,1} (this could represent buying a product or not, passing a law or not, etc.), and a mediator that acts as a communication device between the senders and the receiver (the mediator can be seen as the platform used by all parties). The utility of each player is determined by the state and by the action played by the receiver. The incentives of the senders may not necessarily be aligned (e.g., senders can be a car seller and a technician that tested the car, two independent parties who studied the monetary value of a law, two suppliers of a product, etc.). The state is drawn from a prior distribution that is commonly known among players, but only the senders know its realized value. Thus, the senders' purpose is to reveal information to the receiver in such a way that the receiver plays the action that benefits them the most. Since the senders have no commitment power, we are interested in a mediated cheap talk equilibrium, in which it is never in the interest of the senders to be dishonest, and it is always in the interest of the receiver to play the action suggested by the protocol. The most common notions of equilibrium, such as Nash equilibrium, require that each individual player cannot increase its utility by deviating from the proposed strategy. However, notions of equilibria that are resilient to group deviations are currently gaining traction <cit.>, in particular because of their Web applications. Indeed, on the Internet, it is not only fairly easy to collude, but it is also relatively simple to create proxy pseudo-identities and defect in a coordinated way (this is known as a Sybil attack <cit.>). Nowadays, in Web applications and in distributed systems, resilience against individual deviations is generally considered insufficient for practical purposes. For instance, blockchain protocols are required to tolerate coordinated deviations from up to a fraction of their user base. In this work, we focus on k-resilient equilibria, which are strategy profiles in which no coalition of up to k players can increase their utility by deviating. Our main goal in the paper is to characterize, given the incentives of the senders and the receiver, which maps from states to distributions over actions result from playing k-resilient equilibria. More precisely, each cheap talk protocol σ⃗ induces a map M from states to distributions over actions, where M(ω) is the distribution over actions resulting from playing σ⃗ in state ω.
Our aim is to characterize which of these maps (or outcomes, as we call them) can be implemented by a k-resilient equilibrium, and to efficiently construct a concrete k-resilient equilibrium whenever a given outcome is implementable. We first show that, if there are more than two senders, even if one of them defects and misreports the state, a majority of the senders would still report the truth, and thus the mediator will always be able to compute the correct state. Therefore, if there are at least three senders, outcomes are implementable by a 1-resilient equilibrium (i.e., a Nash equilibrium) if and only if they are incentive-compatible for the receiver. That is, an outcome is implementable by a 1-resilient equilibrium if and only if it improves the utility of the receiver relative to the case where no information is revealed to her. This result implies that the set of implementable distributions is independent of the utilities of the senders and only depends on that of the receiver, and thus that the senders have no bargaining power. It is also easy to check that this result extends to the case of k-resilient equilibria for k < n/2, where n is the number of senders. However, we show that if a majority of the players can collude, the set of implementable outcomes is defined by a system of linear equations that depends on the utilities of both the senders and the receiver. It may seem at first that computing such a characterization may be highly inefficient, since the number of possible coalitions of size at most k ≥ n/2 grows exponentially in the number of players, and each of these possible coalitions imposes a constraint on the outcome. By contrast, our main result shows that, if the number of states is m, then the aforementioned linear system can be written with only m^2 inequality constraints, and all such inequalities can be computed in polynomial time in m and the number of senders n. This means that the receiver-optimal k-resilient equilibrium, or the k-resilient equilibrium that maximizes social welfare, can be computed efficiently. We also provide, given a solution of the system of equations, an efficient way to construct a concrete k-resilient equilibrium that implements the desired outcome. Our results so far assume that all senders have full information about the realized state. However, in some cases it is realistic to assume that senders only have partial information about it and, moreover, that each sender's information might be different. We show in Section <ref> that our techniques generalize to this model as long as the senders' preferences are not influenced by their coalition, a condition that we call k-separability. This means that, assuming k-separability, we provide a characterization of all outcomes that are implementable by a k-resilient equilibrium, and an algorithm that constructs a concrete k-resilient equilibrium implementing a desired (implementable) outcome. Both the characterization and the algorithm are efficient relative to the size of the game's description. §.§ Related Work The literature on information design is too vast to address all the related work. We will therefore mention some key related papers. Krishna and Morgan <cit.> consider a setting similar to that considered by Crawford and Sobel <cit.>, where a real interval represents the set of states and actions. In this setting, the receiver's and the senders' utilities are biased by some factor that affects their incentives and utility.
Similarly to the current paper, where the sender is not unique, Krishna and Morgan consider two informed senders that reveal information sequentially to the receiver. They consider the best receiver equilibrium and show that, when both senders are biased in the same direction, it is never beneficial to consult both of them. By contrast, when senders are biased in opposite directions, it is always beneficial to consult them both. In another work, Salamanca <cit.> characterizes the optimal mediation for the sender in a sender-receiver game. Lipnowski and Ravid <cit.>, and Kamenica and Gentzkow <cit.>, provide a geometric characterization of the best cheap talk equilibrium for the sender under the assumption that the sender's utility is state-independent. The geometric characterization of Lipnowski and Ravid is no longer valid for the case where there are two or more senders. Kamenica and Gentzkow <cit.> consider a setting with two senders in a Bayesian persuasion model. The two senders, as in the standard Bayesian persuasion model (and unlike ours), have commitment power and they compete over information revelation. The authors characterize the equilibrium outcomes in this setting. In many game-theoretical works, mediators are incorporated into strategic settings <cit.>. Kosenko <cit.> also studied the information aggregation problem. However, their model assumed that the mediator had incentives of its own and selected its policy at the same time as the sender. Monderer and Tennenholtz <cit.> studied the use of mediators to enhance the set of situations where coalition deviation can be deterred. They show that using mediators in several classes of settings can produce stable behaviors that are resistant to coalition deviations. In our setting, the existence of a k-resilient equilibrium is straightforward (e.g., playing a constant action). Instead, the strength of our result follows from efficiently characterising the set of all outcomes that are implementable using k-resilient mediated equilibria. § MODEL In an information aggregation game Γ = (S, A, Ω, p, u), there is a finite set of possible states Ω = {ω^1, …, ω^m}, a commonly known distribution p over Ω, a set of possible actions A = {0,1}, a set S = {1,2, …, n} of senders, a receiver r, a mediator d, and a utility function u : (S ∪{r}) ×Ω× A ⟶ℝ such that u(i, ω, a) gives the utility of player i when action a is played at state ω. Each information aggregation game instance is divided into four phases. In the first phase, a state ω is sampled from Ω following distribution p and this state is disclosed to all senders i ∈ S. During the second phase, each sender i sends a message m_i to the mediator. In the third phase (after receiving a message from each sender) the mediator must send a message m_d ∈ A to the receiver, and in the last phase the receiver must play an action a ∈ A and each player i ∈ S ∪{r} receives u(i, ω, a) utility. The behavior of each player i is determined by its strategy σ_i, and the behavior of the mediator is determined by its strategy σ_d. A strategy σ_i for a player i ∈ S can be represented by a (possibly randomized) function m_i: Ω⟶{0,1}^* such that m_i(ω) indicates what message i sends to the mediator given state ω∈Ω. The strategy σ_d of the mediator can be represented by a function m_d: ({0,1}^*)^n ⟶ A that indicates, given the message received from each player, what message it should send to the receiver.
The strategy σ_r of the receiver can be represented by a function a_r: A → A that indicates which action it should play given the message received from the mediator. In summary, a game instance goes as follows: * A state ω is sampled from Ω following distribution p, and ω is disclosed to all senders i ∈ S. * Each sender i ∈ S sends message m_i(ω) to the mediator. * The mediator sends message m_d(m_1, …, m_n) to the receiver. * The receiver plays action a_r(m_d) and each player i ∈ S ∪{r} receives u(i, ω, a_r(m_d)) utility. Note that, in order to simplify the notation, we use a slight abuse of notation since m_i is both the message sent by player i and a function that depends on the state. This is because the message sent by i always depends on the state, even if it is not explicitly written. A similar situation happens with a_r. §.§ Game mechanisms Given a game Γ = (S, A, Ω, p, u), a mechanism M = (m_1, m_2, …, m_n, m_d, a_r) uniquely determines a map o_M^Γ : Ω⟶Δ A (where Δ A is the set of probability distributions over A) that maps each state ω to the distribution of actions obtained by playing Γ when the senders, the mediator and the receiver play the strategies represented by the components of M. We say that M implements o_M^Γ and that o_M^Γ is the outcome of M. A mechanism M is incentive-compatible if it is not in the interest of the receiver or any of the senders to deviate from the proposed mechanism (note that the mediator has no incentives). We also say that M is honest if (a) m_i ≡ Id_Ω, where Id_Ω(ω) = ω for all ω∈Ω, and (b) a_r ≡ Id_A. Moreover, we say that M is truthful if it is both honest and incentive-compatible. Intuitively, a mechanism is truthful if sending the true state to the mediator is a dominant strategy for the senders and playing the action suggested by the mediator is a dominant strategy for the receiver. Consider a game Γ = (S, A, Ω, p, u) where S = {1,2,3}, A = {0,1}, Ω = {ω_1, …, ω_k}, p is the uniform distribution over Ω and u : (S ∪{r}) ×Ω× A ⟶ℝ is an arbitrary utility function. Consider the truthful mechanism in which senders disclose the true state to the mediator, the mediator chooses the state ω∈Ω sent by the majority of the senders and sends to the receiver the action a that maximizes u(r, ω, a), and the receiver plays the action sent by the mediator. It is easy to check that this mechanism is incentive-compatible: no individual sender can influence the outcome by deviating since the mediator chooses the state sent by the majority of the senders. Moreover, by construction, this mechanism gives the receiver the maximum possible utility among all mechanisms. Our first goal is to characterize the set of possible outcomes that can be implemented by truthful mechanisms. Note that, because of Myerson's revelation principle <cit.>, characterizing the set of outcomes implemented by truthful mechanisms is the same as characterizing the set of outcomes implemented by any incentive-compatible mechanisms (not necessarily truthful): Let Γ = (S, A, Ω, p, u) be an information aggregation game. Then, for any incentive-compatible mechanism M for Γ there exists a truthful mechanism M' such that M' implements o_M^Γ. Given M = (m_1, m_2, …, m_n, m_d, a), consider a mechanism M' = (m_1', m_2', …, m_n', m_d', a') such that m_i' ≡ Id_Ω for all i ∈ S, a' ≡ Id_A, and the mediator does the following.
After receiving a message ω_j from each sender j, it computes a(m_d(m_1(ω_1), m_2(ω_2), …, m_n(ω_n))) and sends this action to the receiver (if the message from some player j is inconsistent, the mediator takes ω_j to be an arbitrary element of Ω). By construction, M' is honest and the mediator simulates everything the players would have sent or played with M. It is easy to check that, with M', for any possible deviation for player j ∈ S ∪{r}, there exists a deviation for j in M that produces the same outcome. Thus, if M is incentive-compatible, so is M'. This proposition shows that we can restrict our search to truthful mechanisms. Moreover, the construction used in the proof shows that we can assume without loss of generality that the senders can only send messages in Ω, since sending any other message is equivalent to sending an arbitrary element of Ω. To simplify future constructions, we'll use this assumption from now on. §.§ Resilient equilibria Traditionally, in the game theory and mechanism design literature, the focus has always been on devising strategies or mechanisms such that no individual agent is incentivized to deviate. However, in the context of multi-agent Bayesian persuasion, this approach is not very interesting. The reason is that, if n > 2, the mediator can always compute the true state by taking the one sent by a majority of the senders (as seen in Example <ref>), and thus the mediator can make a suggestion to the receiver as a function of the true state while individual senders cannot influence the outcome by deviating. In fact, given action a ∈ A, let U_a := 𝔼_ω← p[u(r, ω, a)] be the expected utility of the receiver when playing action a regardless of the mediator's suggestion and, given outcome o^Γ, let E_i(o^Γ) := 𝔼_ω← p, a ← o^Γ(ω)[u(i, ω, a)] be the expected utility of player i ∈ S ∪{r} with outcome o^Γ. The following proposition characterizes outcomes implementable by truthful mechanisms. If Γ = (S, A, Ω, p, u) is an information aggregation game with |S| > 2, an outcome o^Γ: Ω⟶Δ A of Γ is implementable by a truthful mechanism if and only if E_r(o^Γ) ≥ U_a for all a ∈ A. Intuitively, Proposition <ref> states that, if there are at least three senders, the only condition for an outcome to be implementable by a truthful mechanism is that the receiver gets a better expected utility than the one it gets with no information. Before proving it, we need the following lemma, which will also be useful for later results. Let Γ = (S, A, Ω, p, u) be an information aggregation game. An honest mechanism M = (Id_Ω, …, Id_Ω, m_d, Id_A) for Γ is incentive-compatible for the receiver if and only if E_r(o_M^Γ) ≥ U_a for all a ∈ A. (⟹) Let M be an honest mechanism for Γ that is incentive-compatible for the receiver. Then, if E_r(o_M^Γ) < U_a for some a ∈ A, the receiver can increase its utility by ignoring the mediator's suggestion and always playing action a. This would contradict the fact that M is incentive-compatible. (⟸) Suppose that E_r(o_M^Γ) ≥ U_a for all a ∈ A. If M is not incentive-compatible, it means that the receiver can strictly increase its payoff either (a) by playing 1 when the mediator sends 0 and/or (b) by playing 0 when the mediator sends 1. Suppose that (a) is true; then the receiver can strictly increase its payoff by playing 1 in all scenarios, which would contradict the fact that its expected payoff with M is greater than or equal to U_1. The argument for (b) is analogous. With this we can prove Proposition <ref>.
The mechanism used in the proof is very similar to the one in Example <ref>. Let M be a truthful mechanism. Then, by Lemma <ref>, o_M^Γ satisfies E_r(o_M^Γ) ≥ U_a for all a ∈ A. Conversely, suppose that an outcome o^Γ satisfies E_r(o^Γ) ≥ U_a for all a ∈ A. Consider a mechanism M = (Id_Ω, …, Id_Ω, m_d, Id_A) such that the mediator takes the state ω sent by the majority of the senders, samples an action from o^Γ(ω), and sends it to the receiver. By construction, M implements o^Γ. Moreover, as in Example <ref>, M is incentive-compatible for the senders since, if n > 2, they cannot influence the outcome by individual deviations. By Lemma <ref>, M is also incentive-compatible for the receiver. Thus, M is a truthful mechanism that implements o^Γ. The construction used in the proof shows how easily we can implement any desired outcome as long as it is better for the receiver than playing a constant action. However, Proposition <ref> is only valid under the assumption that senders cannot collude and deviate in a coordinated way (an assumption that is often unrealistic, as pointed out in the introduction). If we remove this assumption, the next best thing is to devise mechanisms such that all coalitions up to a certain size do not get additional utility by deviating. We focus mainly on the following notions of equilibrium: Let Γ be any type of game for n players with strategy space A = A_1 ×…× A_n and functions u_i: A ⟶ℝ that give the expected utility of player i when players play a given strategy profile. Then, * A strategy profile σ⃗∈ A is a k-resilient Nash equilibrium if, for all coalitions K of up to k players and all strategy profiles τ⃗_K for players in K, u_i(σ⃗) ≥ u_i(σ⃗_-K, τ⃗_K) for some i ∈ K. * A strategy profile σ⃗∈ A is a strong k-resilient Nash equilibrium if, for all coalitions K of up to k players and all strategy profiles τ⃗_K for players in K, u_i(σ⃗) ≥ u_i(σ⃗_-K, τ⃗_K) for all i ∈ K. Intuitively, a strategy profile is k-resilient if no coalition of up to k players can deviate in such a way that all members of the coalition strictly increase their utility, and a strategy profile is strongly k-resilient if no member of any coalition of up to k players can strictly increase its utility by deviating, even at the expense of the utility of other members of the coalition. We can construct analogous definitions in the context of information aggregation: Let Γ = (S, A, Ω, p, u) be an information aggregation game. A mechanism M = (m_1, …, m_n, m_d, a_r) for Γ is k-resilient incentive-compatible (resp., strong k-resilient incentive-compatible) if (a) The receiver cannot increase its utility by deviating from the proposed protocol. (b) Fixing m_d and a_r beforehand, the strategy profile of the senders determined by M is a k-resilient Nash equilibrium (resp., strong k-resilient Nash equilibrium). A mechanism M is k-resilient truthful if it is honest and k-resilient incentive-compatible. Strong k-resilient truthfulness is defined analogously. § MAIN RESULTS For the main results of this paper we need the following notation. Given an outcome o: Ω→Δ A, we define o^*: Ω→ [0,1] as the function that maps each state ω to the probability that o(ω) assigns to action 0. Note that, since |A| = 2, o is uniquely determined by o^*. The following theorem gives a high level characterization of all k-resilient truthful mechanisms (resp., strong k-resilient truthful mechanisms). Let Γ = (S, A, Ω, p, u) be an information aggregation game with Ω = {ω^1, …, ω^m}.
Then, there exists a system E of O(m^2) equations over variables x_1, …, x_m, such that each equation of E is of the form x_i ≤ x_j for some i,j ∈ [m], and such that an outcome o of Γ is implementable by a k-resilient truthful mechanism (resp., strong k-resilient truthful mechanism) if and only if (a) x_1 = o^*(ω^1), …, x_m = o^*(ω^m) is a solution of E. (b) E_r(o) ≥ U_a for all a ∈ A. Moreover, the equations of E can be computed in polynomial time in m and the number of senders n. Note that condition (b) is identical to the one that appears in Lemma <ref>. In fact, condition (b) is the necessary and sufficient condition for a mechanism that implements o to be incentive-compatible for the receiver, and condition (a) is the necessary and sufficient condition for this mechanism to be k-resilient incentive-compatible (resp., strong k-resilient incentive-compatible) for the senders. Theorem <ref> shows that the set of outcomes implementable by k-resilient truthful mechanisms (resp., strong k-resilient truthful mechanisms) is precisely the set of solutions of a system of equations over {o^*(ω^i)}_i ∈ [m]. This means that finding the solution that maximizes any linear function over {o^*(ω^i)}_i ∈ [m] reduces to an instance of linear programming. In particular, the best implementable outcome for the receiver or for each of the senders can be computed efficiently. There exists a polynomial time algorithm that computes the outcome implementable by a k-resilient truthful mechanism (resp., strong k-resilient truthful mechanism) that gives the most utility to the receiver or that gives the most utility to a particular sender. Our last result states that not only can we characterize the outcomes implementable by truthful mechanisms, but we can also efficiently compute a truthful mechanism that implements a particular outcome. Before stating this formally, it is important to note that all truthful mechanisms can be encoded by a single function m_d^* from message profiles m⃗ = (m_1, …, m_n) to [0,1]. Intuitively, the mechanism m_d defined by m_d^* is the one that maps m⃗ to the distribution in which 0 has probability m_d^*(m⃗) and 1 has probability 1 - m_d^*(m⃗). Moreover, note that the description of a k-resilient truthful mechanism for a game with m possible states is exponential in k, since the mechanism must describe what to do if k players misreport their state, which means that the mechanism should be defined over at least m^k inputs. Clearly, no algorithm that is polynomial in n and m can compute this mechanism, just because of the sheer size of the output. However, given a game Γ and an outcome o, it is not necessary to compute the whole description of the resilient truthful mechanism m_d^* for Γ that implements o; we only need to be able to compute m_d^*(m⃗) in polynomial time for each possible message profile m⃗. We state this as follows. There exists an algorithm π that receives as input the description of an information aggregation game Γ = (S, A, Ω, p, u), an outcome o for Γ implementable by a k-resilient mechanism (resp., strong k-resilient mechanism), and a message input m⃗ for the mediator, and π outputs a value q ∈ [0,1] such that the function m_d^* defined by m_d^*(m⃗) := π(Γ, o, m⃗) determines a k-resilient truthful mechanism (resp., strong k-resilient truthful mechanism) for Γ that implements o. Moreover, π runs in polynomial time in |Ω| and |S|. The proofs of Theorems <ref> and <ref> are detailed in Sections <ref> and <ref> respectively.
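To illustrate the corollary, the following is a minimal sketch of the resulting linear program in Python, assuming the system E is given as a list of index pairs (i, j) encoding x_i ≤ x_j; the prior and the receiver utilities below are toy values, not taken from any example in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Receiver-optimal outcome: maximize E_r(o) over x_i = o^*(omega^i) in [0, 1],
# subject to the order constraints of E and E_r(o) >= U_a for a in {0, 1}.
prior = np.array([0.25, 0.25, 0.5])          # p over m = 3 states (toy values)
u_r = np.array([[4.0, 1.0],                   # u(r, omega^i, a) for a in {0, 1}
                [2.0, 3.0],
                [2.0, 2.0]])
E_pairs = [(0, 1)]                            # E contains x_0 <= x_1

m = len(prior)
d = prior * (u_r[:, 0] - u_r[:, 1])           # E_r(o) = d @ x + U_1
U0, U1 = prior @ u_r[:, 0], prior @ u_r[:, 1]

A_ub, b_ub = [], []
for i, j in E_pairs:                          # x_i - x_j <= 0
    row = np.zeros(m); row[i], row[j] = 1.0, -1.0
    A_ub.append(row); b_ub.append(0.0)
A_ub.append(-d); b_ub.append(-(U0 - U1))      # E_r(o) >= U_0
A_ub.append(-d); b_ub.append(0.0)             # E_r(o) >= U_1

res = linprog(c=-d, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0.0, 1.0)] * m)        # maximize d @ x
print("o*(omega^i) =", res.x, " receiver utility =", d @ res.x + U1)
```

The same template maximizes any other linear objective (e.g., a sender's expected utility or social welfare) by swapping the cost vector.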
Intuitively, each coalition imposes a constraint over the space of possible messages that the mediator may receive, implying that the mediator should suggest action 0 more often for some message inputs than for others. These constraints induce a partial order over pure inputs (i.e., messages such that all senders report the same state), which is precisely the order defined by E in Theorem <ref>. It can be shown that, even though there may be exponentially many possible coalitions of size at most k, this partial order can be computed in polynomial time in the number of states and senders. § PROOF OF THEOREM <REF> In this section we prove Theorem <ref>. Note that, because of Lemma <ref>, we only have to show that, given a game Γ = (S, A, Ω, p, u) with |Ω| = m and |S| = n, there exists a system of equations E as in Theorem <ref> such that an outcome o is implementable by an honest mechanism that is k-resilient incentive-compatible (resp., strong k-resilient) for the senders if and only if (o^*(ω^1), …, o^*(ω^m)) is a solution of E. To understand the key idea, let us start with an example in which Ω = {ω^1, ω^2}, S = {1,2,3,4}, senders 1, 2 and 3 prefer action 0 in ω^2, senders 2, 3 and 4 prefer action 1 in ω^1, and in which we are trying to characterize all outcomes that could be implemented by a mechanism that is 2-resilient incentive-compatible for the senders. If all senders are honest, then the mediator could only receive inputs (ω^1,ω^1,ω^1,ω^1) or (ω^2,ω^2,ω^2,ω^2) (where the ith component of an input represents the message sent by sender i). However, since senders could in principle deviate, the mediator could receive, for instance, an input of the form (ω^1,ω^1,ω^2,ω^2). This input could originate in two ways: either the true state is ω^1 and senders 3 and 4 are misreporting the state, or the state is ω^2 and senders 1 and 2 are misreporting. Even though a mechanism is honest, the mediator's message function m_d should still be defined for inputs with different components, and it must actually be defined in such a way that players are not incentivized to misreport. Let m_d^* be the function that maps each message (m_1, m_2, m_3, m_4) to the probability that m_d(m_1, …, m_4) = 0. If the honest mechanism determined by m_d^* is 2-resilient incentive-compatible for the senders, the probability of playing 0 should be no higher with (ω^1,ω^1,ω^2,ω^2) than with (ω^2,ω^2,ω^2,ω^2). Otherwise, in ω^2, senders 1 and 2 could increase their utility by reporting ω^1 instead of ω^2. Thus, m_d^* must satisfy m_d^*(ω^1,ω^1,ω^2,ω^2) ≤ m_d^*(ω^2,ω^2,ω^2,ω^2). Moreover, m_d^*(ω^1,ω^1,ω^2,ω^2) ≥ m_d^*(ω^1,ω^1,ω^1,ω^1), since otherwise, in state ω^1, senders 3 and 4 could increase their utility by reporting ω^2 instead of ω^1. These inequalities together imply that m_d^*(ω^1,ω^1,ω^1,ω^1) ≤ m_d^*(ω^2,ω^2,ω^2,ω^2), and therefore that o^*(ω^1) ≤ o^*(ω^2). In fact, we can show that this is the only requirement for o to be implementable by a mechanism that is 2-resilient incentive-compatible for the senders. Given o such that o^*(ω^1) ≤ o^*(ω^2), consider an honest mechanism determined by m_d^*, in which m_d^*(m_1, m_2, m_3, m_4) is defined as follows: * If at least three players sent the same message ω, then m_d^*(m_1, m_2, m_3, m_4) := o^*(ω). * Otherwise, m_d^*(m_1, m_2, m_3, m_4) := (o^*(ω^1) + o^*(ω^2))/2. We can check that the honest mechanism M determined by m_d^* is indeed 2-resilient incentive-compatible for the senders.
Clearly, no individual sender would ever want to deviate since it cannot influence the outcome by itself (still three messages would disclose the true state). Moreover, no pair of senders can increase their utility by deviating since, in both ω^1 and ω^2, at least one of the senders in the coalition would get the maximum possible utility by disclosing the true state. This shows that, in this example, o^*(ω^1) ≤ o^*(ω^2) is the only necessary and sufficient condition for o to be implementable by a mechanism that is 2-resilient incentive-compatible for the senders. §.§ Theorem <ref>, general case The proof of the general case follows the same lines as the previous example. We show the generalization for the case of k-resilient incentive-compatibility, the proof for strong k-resilience is analogous, with the main differences highlighted in Section <ref>. In the example, note that we could argue that m_d^*(ω^1,ω^1,ω^2,ω^2) should be greater than m_d^*(ω^1, ω^1, …, ω^1) since, otherwise, senders 3 and 4 could increase their utility in state ω^1 by reporting ω^2 instead of ω^1. More generally, suppose that in some state ω there exists a subset C of at most k senders such that all senders in C prefer action 1 to action 0. Then, all k-resilient truthful mechanisms must satisfy that m_d^*(ω, …, ω) ≥ m_d^*(m⃗) for all inputs m⃗ such that m_i = ω for all i ∉C. Following this intuition, we make the following definitions. Let Γ = (S, A, Ω, p, u) be an information aggregation game with Ω = {ω^1, …, ω^m} and |S| = n. We say that a possible input m⃗ = (m_1, …, m_n) for m_d is ω-pure if m_1 = m_2 = … = m_n = ω (i.e., if all m_j are equal to ω). We also say that an input is pure if it is ω-pure for some ω. Additionally, if ω∈Ω, we denote by ω⃗ the ω-pure input (ω, …, ω). Moreover, given two inputs m⃗ = (m_1, …, m_n) and m⃗' = (m'_1, …, m'_n) for m_d, we say that m⃗≺_k m⃗' if the subset C of senders such that their input differs in m⃗ and m⃗' has size at most k, and such that (a) m⃗ is ω-pure for some ω and all senders in C strictly prefer action 1 to action 0 in state ω, or (b) m⃗' is ω-pure for some ω and all senders in C strictly prefer action 0 to action 1 in state ω. By construction we have the following property of ≺_k. A honest mechanism is k-resilient incentive-compatible for the senders if and only if m⃗≺_k m⃗' ⟹ m_d^*(m⃗) ≤ m_d^*(m⃗') for all inputs m⃗ and m⃗'. Note that Lemma <ref> completely characterizes the honest mechanisms that are k-resilient incentive-compatible for the senders. However, this lemma is of little use by itself since mechanisms have an exponential number of possible inputs. Let ≤_k be the partial order between pure states induced by ≺_k. More precisely, we say that two states ω and ω' satisfy ω≤_k ω' if there exists a sequence of inputs m⃗^1, …, m⃗^t such that ω⃗≺_k m⃗^1 ≺_k …≺_k m⃗^t ≺_k ω⃗'. For instance, in the example at the beginning of this section, we would have that ω^1 ≤_2 ω^2 since (ω^1,ω^1,ω^1,ω^1) ≺_2 (ω^1,ω^1,ω^2,ω^2) ≺_2 (ω^2,ω^2,ω^2,ω^2). The following proposition shows that the ≤_k relations completely determine the outcomes implementable by honest mechanisms that are k-resilient incentive-compatible for the senders. Let Γ = (S, A, Ω, p, u) be an information aggregation game. Then, an outcome o of Γ is implementable by an honest mechanism that is k-resilient incentive-compatible for the senders if and only if ω≤_k ω' ⟹ o^*(ω) ≤ o^*(ω') for all ω, ω' ∈Ω. 
The fact that any outcome o implemented by an honest mechanism that is k-resilient incentive-compatible for the senders satisfies ω≤_k ω' ⟹ o^*(ω) ≤ o^*(ω') follows directly from Lemma <ref>. To show the converse, given o satisfying ω≤_k ω' ⟹ o^*(ω) ≤ o^*(ω'), define m_d^* as follows. If m⃗ is ω-pure for some ω, then m_d^*(m⃗) := o^*(ω). Otherwise, let A^k_≺(m⃗) be the set of inputs m⃗' such that m⃗≺_k m⃗' and A^k_≻(m⃗) be the set of inputs m⃗' such that m⃗' ≺_k m⃗. Then, * If A^k_≺(m⃗) = ∅, then m_d^*(m⃗) := 1. * Otherwise, if A^k_≻(m⃗) = ∅, then m_d^*(m⃗) := 0. * Otherwise, m_d^*(m⃗) := (min_m⃗' ∈ A^k_≺(m⃗){m_d^*(m⃗')} + max_m⃗' ∈ A^k_≻(m⃗){m_d^*(m⃗')})/2. Note that m_d^* is well-defined since all elements in A^k_≺(m⃗) and A^k_≻(m⃗) are pure, which means that m_d^*(m⃗') is already defined for all these elements. Moreover, the honest mechanism M determined by m_d^* implements o. It remains to show that M is k-resilient incentive-compatible for the senders. By Lemma <ref> this reduces to showing that m⃗≺_k m⃗' ⟹ m_d^*(m⃗) ≤ m_d^*(m⃗') for all inputs m⃗ and m⃗'. To show this, take a pure input ω⃗ and another input m⃗ such that ω⃗≺_k m⃗. If m⃗ is ω'-pure, then ω⃗≺_k m⃗ implies ω≤_k ω' and thus m_d^*(ω⃗) ≤ m_d^*(ω⃗'). If m⃗ is not pure and A^k_≺(m⃗) = ∅, we have by construction that m_d^*(m⃗) = 1, which is at least m_d^*(ω⃗). Otherwise, for all ω' such that ω⃗' ∈ A^k_≺(m⃗), we have that ω≤_k ω' and thus by assumption that m_d^*(ω⃗) ≤ m_d^*(ω⃗'). Therefore, min_m⃗' ∈ A^k_≺(m⃗){m_d^*(m⃗')}/2≥m_d^*(ω⃗)/2. Moreover, we have that max_m⃗' ∈ A^k_≻(m⃗){m_d^*(m⃗')}/2≥m_d^*(ω⃗)/2 since ω⃗∈ A^k_≻(m⃗). Hence m_d^*(m⃗) ≥ m_d^*(ω⃗), as desired. An analogous argument can be used for the case in which m⃗≺_k ω⃗. It remains to show that the partial order between the states in Ω defined by ≤_k can be computed with a polynomial algorithm. To do this, note that, by definition, any chain ω⃗≺_k m⃗^1 ≺_k …≺_k m⃗^t ≺_k ω⃗' between two pure inputs ω⃗ and ω⃗' must satisfy that either m⃗^1 or m⃗^2 is also pure. This implies the following lemma: Let Γ = (S, A, Ω, p, u) be an information aggregation game with Ω = {ω^1, …, ω^m}. Let E be a system of equations over x_1, …, x_m such that equation x_i ≤ x_j appears in E if and only if ω⃗^i ≺_k ω⃗^j or if there exists an input m⃗ such that ω⃗^i ≺_k m⃗≺_k ω⃗^j. Then, y_1, …, y_m is a solution of E if and only if ω^i ≤_k ω^j ⟹ y_i ≤ y_j for all i, j ∈ [m]. Intuitively, Lemma <ref> says that the inequalities obtained from chains of length 2 or 3 span the partial order over Ω defined by ≤_k, and thus that we can take the system of equations E of Theorem <ref> to be the one in the lemma above. Therefore, given two states ω and ω', it only remains to show that we can check in polynomial time if ω⃗≺_k ω⃗' or if there exists an input m⃗ such that ω⃗≺_k m⃗≺_k ω⃗'. Checking if ω⃗≺_k ω⃗' is equivalent to checking if k = n and either all senders strictly prefer 1 in ω or all senders strictly prefer 0 in ω'. Finding an input m⃗ such that ω⃗≺_k m⃗≺_k ω⃗' reduces to finding an input m⃗ such that (a) the set C_ω of senders whose message in m⃗ is not ω has size at most k, and all senders in C_ω strictly prefer 1 to 0 in ω; (b) the set C_ω' of senders whose message in m⃗ is not ω' has size at most k, and all of them strictly prefer 0 to 1 in ω'. The high-level idea of the algorithm is that, if m⃗ satisfies the above properties, all senders i that prefer 0 to 1 in ω must satisfy that m_i = ω (otherwise, property (a) breaks), and all senders i that prefer 1 to 0 in ω' must satisfy that m_i = ω' (otherwise, property (b) breaks).
If there is a sender i that prefers 0 to 1 in ω and 1 to 0 in ω' then such an input m⃗ does not exist, and if there is a sender i that strictly prefers 1 to 0 in ω and 0 to 1 in ω', then m_i has no constraints. The only remaining restriction is that there can only be at most k values different than ω and at most k values different than ω' (note that this implies that if 2k < n such an input does not exist). The algorithm goes as follows: * Split the set of senders into four subsets X_0,1^0,1, X_0,1^1,0, X_1, 0^0,1, X_1, 0^1,0, in which X_i,j^i', j' is the set of senders that prefer i to j in ω (resp., strictly prefer if i = 1) and prefer i' to j' in ω' (resp., strictly prefer if i' = 0). * If X_0,1^1,0≠∅ or 2k < n, there is no solution. * If |X_0,1^0,1| > k or |X_1,0^1,0| > k, there is no solution. * Otherwise, set m_i = ω for all i ∈ X_0,1^0,1, m_i = ω' for all i ∈ X_1,0^1,0. Then, set k - |X_0,1^0,1| of the messages from X_1,0^0,1 to ω and the rest to ω'. Return m⃗. Proof of Correctness: Because of the previous discussion, if X_0,1^1,0≠∅ or 2k < n, there is no solution. If |X_0,1^0,1| ≥ k then, any input m⃗ that satisfies ω⃗≺_k m⃗≺_k ω⃗' would require to have at least |X_0,1^0,1| components equal to ω, which would break property (b). An analogous argument can be used when |X_1,0^1,0| > k. If none of these conditions hold, then we set all messages from X_0,1^0,1 to ω, all messages from X_1,0^1,0 to ω', and we split the messages sent by senders in X_1,0^0,1 between ω and ω' in such a way that no value appears more than k times. The resulting input satisfies properties (a) and (b). §.§ Theorem <ref>, strong k-resilience The proof of Theorem <ref> for strong k-resilience is analogous to the one of k-resilience in the previous section. The main difference is the definition of ≺_k. In this case we say that two inputs m⃗ and m⃗' satisfy m⃗≺_k^s m⃗' if and only if the subset C of senders such that their input differs in m⃗ and m⃗' has size at most k, and such that (a) m⃗ is ω-pure for some ω and at least one sender in C strictly prefers action 1 to action 0 in state ω, or (b) m⃗' is ω-pure for some ω and at least one sender in C strictly prefers action 0 to action 1 in state ω. We have that ω⃗≺_k^s ω⃗' if and only if k = n and at least one sender in ω prefers action 1 to action 0, or at least one sender in ω' prefers action 0 to action 1. Given ω and ω', finding if there exists m⃗ such that ω⃗≺_k^s m⃗≺_k^s ω⃗' can be reduced to finding if there exists a partition of the set of senders S into two sets S_ω and S_ω' such that |S_ω| ≤ k and |S_ω'| ≤ k, and such that at least one sender of S_ω prefers action 0 to 1 in ω' and at least one sender of S_ω' prefers 1 to 0 in ω. This can easily be done in polynomial time. For future reference, we define ≤_k^s in the same way as ≤_k except that we use ≺_k^s instead of ≺_k. § PROOF OF THEOREM <REF> Most of the tools used to prove Theorem <ref> have already appeared in the proof of Theorem <ref>. We prove the theorem for k-resilience, the case of strong k-resilience is analogous. Given a game Γ and an outcome o for Γ, we set m_d^*(ω⃗) := o^*(ω) for each ω∈Ω. For every other input m⃗, we define m_d^*(m⃗) in the same way as in the proof of Proposition <ref>. As shown in the proof of Theorem <ref>, checking if m⃗≺_k m⃗' can be performed in polynomial time. Thus, m_d^*(m⃗) can also be computed in polynomial time. 
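For concreteness, the per-input evaluation of m_d^* described above can be sketched as follows (our own illustration; prefers0 and prefers1 are assumed lookup tables stating whether a sender strictly prefers action 0, respectively action 1, in a given state, and all names are illustrative rather than taken from an existing implementation).

```python
def prec_k(m, m_prime, k, prefers0, prefers1):
    """m ≺_k m': the set C of senders whose reports differ has size at most k and
    (a) m is w-pure and every sender in C strictly prefers action 1 in w, or
    (b) m' is w-pure and every sender in C strictly prefers action 0 in w."""
    C = [i for i, (a, b) in enumerate(zip(m, m_prime)) if a != b]
    if len(C) > k:
        return False
    if len(set(m)) == 1 and all(prefers1[i][m[0]] for i in C):
        return True
    if len(set(m_prime)) == 1 and all(prefers0[i][m_prime[0]] for i in C):
        return True
    return False

def evaluate_m_d_star(m, k, states, o_star, prefers0, prefers1):
    """Evaluate m_d^*(m) for a single message profile, following the construction
    in the proof of the proposition above."""
    m = tuple(m)
    if len(set(m)) == 1:                        # pure input: return the outcome
        return o_star[m[0]]
    pure = {w: (w,) * len(m) for w in states}
    # a non-pure input can only be ≺_k-related to pure inputs
    above = [o_star[w] for w in states if prec_k(m, pure[w], k, prefers0, prefers1)]
    below = [o_star[w] for w in states if prec_k(pure[w], m, k, prefers0, prefers1)]
    if not above:
        return 1.0
    if not below:
        return 0.0
    return (min(above) + max(below)) / 2
```

Since only the m pure profiles ever have to be compared against the given input, each call is polynomial in |Ω| and |S|, which is exactly the point of the theorem.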
§ EXTENDED MODEL AND GENERALIZATION OF MAIN RESULTS An extended information aggregation game is defined in the same way as a standard information aggregation game (see Section <ref>) except that each sender starts the game with a private signal x_i (as opposed to all senders starting the game with the same input ω), and the utility function u takes as input the signals from each sender instead of just ω. More precisely, in an extended information aggregation game Γ = (S, A, X, p, u) there is a set of senders S = {1,2,3,…, n}, a receiver r, a mediator d, a set of actions A, a set X = X_1 × X_2 ×…× X_n of signals, a probability distribution p over X, and a utility function u : (S ∪{r}) × X × A ⟶ℝ. Each game instance proceeds exactly the same way as in a standard information aggregation game except that, in phase 1, a signal profile (x_1, …, x_n) ∈ X is sampled following distribution p, and each signal x_i is disclosed only to sender i. In this context, an outcome o for Γ is just a function from signal profiles x⃗∈ X to distributions over A, and mechanisms for Γ are determined by functions m_d^* from X to [0,1]. Our aim is to generalize the results from Section <ref> to the extended model. However, the main problem is that, for a fixed signal profile, the preferences of the agents may depend on their coalition. For instance, consider a game Γ for five players with uniformly distributed binary signals and binary actions such that the utility of each sender is 1 if the action that the receiver plays is equal to the majority of the signals, and their utility is 0 otherwise. Suppose that senders have signals (0,0,0,1,1). It is easy to check that if players 1, 2 and 3 collude, player 1 would prefer action 0 to action 1. However, if players 1, 4 and 5 collude, player 1 would prefer action 1 since in this case it is more likely that the majority of the signals are 1. We can avoid the issue above by assuming that the game is k-separable, which is that, for all signal profiles x⃗ and all senders i, there exists an action a such that the preference of sender i inside any coalition K of size at most k is a. Intuitively, an extended information aggregation game is k-separable if the preferences of the senders do not depend on the coalition they are in. With this, we can provide algorithms for the characterization and implementation of k-resilient truthful implementable outcomes that are efficient relative to the size of the description of the game Γ. Let Γ = (S, A, X, p, u) be a k-separable extended information aggregation game such that the support of signal profiles in distribution p is {(x⃗)_1, …, (x⃗)_m}. Then, there exists a system E of O(m^2) equations over variables x_1, …, x_m, such that each equation of E is of the form x_i ≤ x_j for some i,j ∈ [m], and such that an outcome o of Γ is implementable by a k-resilient truthful mechanism (resp., strong k-resilient truthful mechanism) if and only if (a) x_1 = o^*((x⃗)_1), …, x_m = o^*((x⃗)_m) is a solution of E. (b) E_r(o) ≥ U_a for all a ∈ A. Moreover, the equations of E can be computed in polynomial time over m and the number of senders n. Note that Theorem <ref> states that E can be computed in polynomial time over the size of the support of signal profiles as opposed to |X|, which may be way larger. There is also a generalization of Theorem <ref> in the extended model. 
There exists an algorithm π that receives as input the description of a k-separable extended information aggregation game Γ = (S, A, X, p, u), an outcome o for Γ implementable by a k-resilient mechanism (resp., strong k-resilient mechanism), and a message input m⃗ for the mediator, and π outputs a value q ∈ [0,1] such that the function m_d^* defined by m_d^*(m⃗) := π(Γ, o, m⃗) determines a k-resilient truthful mechanism (resp., strong k-resilient truthful mechanism) for Γ that implements o. Moreover, π runs in polynomial time over the size m of the support of signal profiles and |S|. The proofs of Theorems <ref> and <ref> are analogous to the ones of Theorems <ref> and <ref> with the following difference. Given two inputs m⃗ and m⃗', we say that m⃗≺_k m⃗' if the subset C of senders such that their input differs in m⃗ and m⃗' has size at most k, and such that (a) m⃗ is in the support of p and all senders in C strictly prefer action 1 to action 0 given signal profile m⃗, or (b) m⃗' is in the support of p and all senders in C strictly prefer action 0 to action 1 given signal profile m⃗'. Intuitively, we replace the notion of pure input by the condition that the input is in the support of p. Note that the assumption of k-separability is crucial for this definition, since otherwise the preferences of the players may not be uniquely determined by the signal profile. With this definition, we can construct analogous statements for Lemmas <ref>, <ref> and Proposition <ref>, and proceed exactly as in the proofs of Theorems <ref> and <ref>. § CONCLUSION We provided an efficient characterization of all outcomes implementable by k-resilient and strong k-resilient truthful mechanisms in information aggregation games. We also gave an efficient construction of the k-resilient or strong k-resilient mechanism that implements a given implementable outcome. These techniques generalize to the extended model where senders may receive different signals, as long as the senders' preferences are not influenced by their coalition (k-separability). It is still an open problem whether the techniques used in this paper generalize to other notions of coalition resilience, for instance, the notion in which the sum of the utilities of the members of a coalition cannot increase when defecting, or whether we can get efficient algorithms in the extended model without the k-separability assumption. It is also an open problem whether we can get similar results in partially synchronous or asynchronous systems in which the messages of the senders are delayed arbitrarily.
http://arxiv.org/abs/2307.07647v1
20230714223620
Physics Informed Neural Networks with strong and weak residuals for advection-dominated diffusion problems
[ "Maciej Sikora", "Patryk Krukowski", "Anna Paszynska", "Maciej Paszynski" ]
math.NA
[ "math.NA", "cs.NA" ]
^(1)Institute of Computer Science, AGH University of Science and Technology, Kraków, Poland e-mail: [email protected] ^(2) Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, Kraków, Poland e-mail: [email protected] This paper deals with the following important research questions. Is it possible to solve challenging advection-dominated diffusion problems in one and two dimensions using Physics Informed Neural Networks (PINN) and Variational Physics Informed Neural Networks (VPINN)? How does it compare to the higher-order and continuity Finite Element Method (FEM)? How to define the loss functions for PINN and VPINN so they converge to the correct solutions? How to select points or test functions for training of PINN and VPINN? We focus on the one-dimensional advection-dominated diffusion problem and the two-dimensional Eriksson-Johnson model problem. We show that the standard Galerkin method for FEM cannot solve this problem. We discuss the stabilization of the advection-dominated diffusion problem with the Petrov-Galerkin (PG) formulation and present the FEM solution obtained with the PG method. We employ PINN and VPINN methods, defining several strong and weak loss functions. We compare the training and solutions of PINN and VPINN methods with higher-order FEM methods. advection-dominated diffusion Petrov-Galerkin formulation Physics Informed Neural Networks Variational Physics Informed Neural Networks Eriksson-Johnson problem § INTRODUCTION The classical way of solving PDEs numerically is based on the Finite Element Method (FEM). In FEM, we approximate the solution of the PDE by using a linear combination of the prescribed basis functions. The coefficients of the basis functions are obtained by solving a system of linear equations. The most accurate version of the FEM employs higher-order and continuity B-spline basis functions <cit.>. The neural networks are the universal approximators <cit.>. They can successfully replace or support the FEM computations. Recently, there has been a growing interest in the design and training of neural networks for solving PDEs <cit.>. The most popular method for training the DNN solutions of PDEs is Physics Informed Neural Networks (PINN) <cit.>. Since its introduction in 2019, there has been exponential growth in the number of papers and citations related to them (Web of Science search for "Physics Informed Neural Network"). It forms an attractive alternative for solving PDEs in comparison with traditional solvers such as the Finite Element Method (FEM) or Isogeometric Analysis (IGA). Physics Informed Neural Network proposed in 2019 by George Karniadakis revolutionized the way in which neural networks find solutions to partial differential equations <cit.> In this method, the neural network is treated as a function approximating the solution of the given partial differential equation u (x) = PINN (x). The residuum of the partial differential equation and the boundary-initial conditions are assumed as the loss function. The learning process consists in sampling the loss function at different points by calculating the residuum of the PDE and the boundary conditions. Karniadakis has also proposed Variational Physics Informed Neural Networks VPINN <cit.>. VPINNs use the loss function with a weak (variational) formulation. 
In VPINN, we approximate the solution with a DNN (as in the PINN), but during the training process, instead of probing the loss function at points, we employ the test functions from a variational formulation to average the loss function (to average the PDE over a given domain). In this sense, VPINN can be understood as PINN with a loss function evaluated at the quadrature points, with the distributions provided by the test functions. Karniadakis also showed that VPINNs could be extended to hp-VPINNs (hp-Variational Physics Informed Neural Networks) <cit.>, where by means of hp-adaptation (h-adaptation is breaking elements, and p-adaptation is raising the degrees of base polynomials) it is possible to solve problems with singularities. The incorporation of the domain decomposition methods into VPINNs is also included in the RAR-PINN method <cit.>. In conclusion, a family of PINN solvers based on neural networks has been developed during the last few years, ranging from PINNs, VPINNs, and hp-VPINNs to RAR-PINNs. In this paper, we investigate the application of PINN VPINN family methods to solve the advection-dominated diffusion, a challenging computational problem. It has several important applications, from pollution simulations <cit.> to flow and transport modeling <cit.>. This problem is usually solved numerically using the finite element method. However, for small values of  ε / β, which means that the advection is much larger than diffusion, it is a very difficult problem to solve. The traditional finite element method does not work. They encounter numerical instabilities resulting in unexpected oscillations and giving unphysical solutions. There are several stabilization methods, from residual minimization <cit.>, Streamline Upwind Petrov-Galerkin (SUPG) <cit.>, and Discontinuous-Galerkin <cit.> and Discontinuous Petrov-Galerkin methods <cit.>. In this paper, we check if neural network-based methods can solve these challenging problems in a competitive way. The new scientific contributions of our paper are the following. We show how to define the loss functions for PINN and VPINN so it can successfully solve the Eriksson-Johnson problem. We investigate strong and weak residuals as the candidates for the loss function. We compare the PINN and VPINN methods with standard finite element method solvers. The structure of the paper is the following. We start in Section 2 with an introduction of the finite element method for advection-dominated diffusion, and we explain why it does not work. Section 3 is devoted to the one-dimensional model problem solved with PINN and VPINN, and Section 4 is to the two-dimensional model Eriksson-Johnson problem. We conclude the paper in Section 5. § GALERKIN METHOD FOR ONE-DIMENSIONAL ADVECTION-DOMINATED DIFFUSION MODEL PROBLEM We focus on the following model formulation of the advection-dominated diffusion problem in one dimension. Find u∈ C^2(0,1): -ϵd^2u(x)/dx^2_diffusion=ϵ+1du(x)/dx_advection"wind"=1 = 0, x ∈ (0,1), -ϵdu/dx(0)+u(0)=1.0, u(1)=0 The weak form is obtained by "averaging" (computing integrals) with distributions provided by test functions: ∫_0^1 -ϵd^2u(x)/dx^2v(x)dx+∫_0^11du(x)/dxv(x)dx = 0,∀v∈ V ∫_0^1ϵdu(x)/dxdv(x)/dxdx+∫_0^11du(x)/dxv(x)dx + u(0)v(0)= v(0),∀v∈ V §.§ Galerkin formulation The traditional finite element method is based on the Galerkin formulation, where we seek the solution as a linear combination of basis functions. In the Galerkin method, we employ the same value for trial and for testing. 
In our case, we can have 21 quadratic B-splines with C^0 separators defined by the knot vector [0 0 0 0.1 0.1 0.2 0.2 0.3 0.3 0.4 0.4 0.5 0.5 0.6 0.6 0.7 0.7 0.8 0.8 0.9 0.9 1 1 1] (see Figure <ref>). Find u_h ∈ U_h ⊂ U=V, u_h= ∑_i=1,...,21 u_i B_i(x): ∫_0^1ϵdu_h(x)/dxdv_h(x)/dxdx+∫_0^11du_h(x)/dxv_h(x)dx + u_h(0)v_h(0)= v_h(0), ∀v_h∈ V_h=U_h ⊂ U=V where v_h are the same 21 basis functions For small values of diffusion, e.g., ϵ=0.001, this problem has the following solution, presented in brown color in Figure <ref>. As we can see from this Figure, the finite element method has problems in approximating the correct solution for the advection-dominated diffusion problem. §.§ Petrov-Galerkin formulation In the Petrov-Galerkin formulation, we employ different trial and test spaces. Find u_h ∈ U_h ⊂ U, u_h= ∑_i=1,...,11 u_i B_i(x): ∫_0^1ϵdu_h(x)/dxdv_h(x)/dxdx+∫_0^11du_h(x)/dxv_h(x)dx + u_h(0)v_h(0)= v_h(0),∀v_h∈ V_h ⊂ V where v_h are carefully selected 11 elements of V_h For example, we seek a solution as a linear combination of linear B-splines, defined with knot vector [0 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.0], and we test with quadratic B-splines with C^0 separators defined by the knot vector [0 0 0 0.1 0.1 0.2 0.2 0.3 0.3 0.4 0.4 0.5 0.5 0.6 0.6 0.7 0.7 0.8 0.8 0.9 0.9 1 1 1] (see Figure <ref>). §.§ Evidence of failure of Galerkin method and superiority of Petrov-Galerkin method with optimal test (equivalent to residual minimization method) for advection-dominated diffusion problem Figures <ref>,<ref>, and <ref> present the comparison of the solutions for ϵ=0.001 obtained with the Galerkin method using 10, 20, and 30 elements and quadratic B-splines with C^0 separators (equivalent to quadratic Lagrange basis) for trial and test, as well as Petrov-Galerkin method using linear B-splines for trial and quadratic B-splines with C^0 separators for the test. These Figures show that the Galerkin method has strong oscillations, while the Petrov-Galerkin method stabilizes the problem (but still delivers some oscillations). Why does the Galerkin method not work, and why does the Petrov-Galerkin method provide better (but not ideal) solutions? We must recall the Céa Lemma, proposed in 1964 in the Ph.D. thesis of Jean Céa ”Approximation variationnelle des problèmes aux limites". The Céa lemma says that || u - u_h || ≤M/α dist{U_h,u} where M stands for the continuity constant b(u,v)≤ M uv, and α stands for the coercivity constant αu^2 ≤ b(u,v). What does it mean? Ideally, the approximation of a solution should be as good as the distance of the space where it lives (trial space) to the exact solution. But due to the Céa Lemma, this is true if and only if M/α=1. We have αu^2 ≤ b(u,u) ≤u^2 which implies M/α≥ 1. The reason why M/α>1 is explained by the discrete Babuška-Brezzi inf-sup condition <cit.>. The coercivity constant α is estimated for the abstract infinite dimensional space V with the supremum over the infinite-dimensional test space. The problem is that in numerical experiments, we consider finite-dimensional spaces u_h∈ U_h, v_h ∈ V_h, and we compute the supremum over the finite-dimensional test space, and this supremum is smaller than infinite-dimensional one inf_u ∈ U sup_v ∈ Vb(u,v)/vu = α > α_h = inf_u_h ∈ U_h sup_v_h ∈ V_hb(u_h,v_h)/v_hu_h M/α_h > M/α≥ 1 To improve the solution, we need to test with larger test space V_h, so it realizes supremum for α. This is the idea of the Petrov-Galerkin formulation. 
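To make the discussion concrete, the following sketch (our own illustration, not the solver used for the figures) assembles the Galerkin system for the one-dimensional model problem with piecewise-linear elements, whereas the experiments reported here use quadratic B-splines with C^0 separators. It uses the weak form derived above, with the Robin condition at x=0 built into the bilinear form and u(1)=0 imposed strongly; for ϵ=0.001 the computed nodal values oscillate near the outflow boundary layer, which is exactly the instability discussed in this section.

```python
# Minimal 1D Galerkin FEM sketch (piecewise-linear elements; only an illustration
# of the instability, not a reproduction of the reported experiments).
import numpy as np

def galerkin_advection_diffusion(eps=1e-3, n_elems=30):
    n = n_elems + 1                      # number of nodes
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    A = np.zeros((n, n))
    b = np.zeros(n)
    # element matrices for  eps*u'v' + u'v  with linear hat functions
    K_diff = eps / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    K_adv = 0.5 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
    for e in range(n_elems):
        idx = [e, e + 1]
        A[np.ix_(idx, idx)] += K_diff + K_adv
    # weak Robin condition at x=0:  +u(0)v(0) on the left, +v(0) on the right
    A[0, 0] += 1.0
    b[0] += 1.0
    # strong Dirichlet condition u(1) = 0
    A[-1, :] = 0.0
    A[-1, -1] = 1.0
    b[-1] = 0.0
    return x, np.linalg.solve(A, b)

x, u = galerkin_advection_diffusion()
print(u[-6:])   # for eps=1e-3 the values oscillate around the layer at x = 1
```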
Even with the Petrov-Galerkin formulation, however, the test space is still finite-dimensional, so the results may still not be ideal. § NEURAL NETWORK APPROACH FOR ONE-DIMENSIONAL ADVECTION-DOMINATED DIFFUSION MODEL PROBLEM §.§ Physics Informed Neural Networks for advection-dominated diffusion We now introduce the PINN formulation of the one-dimensional model advection-dominated diffusion problem. We recall the strong form of the PDE: Find u∈ C^2(0,1): -ϵd^2u(x)/dx^2 + du(x)/dx = 0, x ∈ (0,1), -ϵdu/dx(0)+u(0)=1.0, u(1)=0, where the first term is the diffusion term (with coefficient ϵ) and the second term is the advection ("wind") term (with coefficient 1). The neural network represents the solution u(x)=NN(x)=A_n σ(A_n-1σ(...σ(A_1x+B_1)...)+B_n-1)+B_n, where σ is the activation function, e.g., the sigmoid σ(x)=1/(1+exp(-x)). We define the following loss functions related to the point-wise residual of the PDE and to the boundary conditions: LOSS_PDE(x) = (-ϵd^2NN(x)/dx^2+dNN(x)/dx)^2, LOSS_BC0 = ( -ϵdNN(0)/dx+NN(0)-1.0)^2, LOSS_BC1 = (NN(1))^2, LOSS(x) = LOSS_PDE(x)+LOSS_BC0+LOSS_BC1. The argument to the loss function LOSS is the point x selected during the training process. §.§ Variational Physics Informed Neural Networks for advection-dominated diffusion problem We also introduce the VPINN formulation of the one-dimensional model advection-dominated diffusion problem. We recall the weak form of the PDE: Find u∈ H^1(0,1): ∫_0^1 ϵdu(x)/dxdv(x)/dxdx+∫_0^1du(x)/dxv(x)dx +u(0)v(0) =v(0)∀v∈ V As before, the neural network represents the solution u(x)=NN(x)=A_n σ(A_n-1σ(...σ(A_1x+B_1)...)+B_n-1)+B_n. For the VPINN method, we can define the following two alternative loss functions. The first one is related to the strong residual of the PDE, multiplied by the test functions, without integration by parts, and with the boundary conditions: b_strong(v) = ∫(-ϵd^2NN(x)/dx^2v(x)+dNN(x)/dxv(x) )dx; l_strong(v)=0, LOSS_strong(v) = (b_strong(v)-l_strong(v))^2, LOSS_BC0 = (-ϵdNN(0)/dx+NN(0)-1.0)^2, LOSS_BC1 = (NN(1))^2, LOSS_1(v) = LOSS_strong(v)+LOSS_BC0+LOSS_BC1. The second one uses the weak residual of the PDE, after integration by parts, together with the boundary conditions: b_weak(v) = ∫(ϵdNN(x)/dxdv(x)/dx+dNN(x)/dxv(x) )dx+NN(0)v(0), l_weak(v)=v(0), LOSS_weak(v) = (b_weak(v)-l_weak(v))^2, LOSS_BC0 = (-ϵdNN(0)/dx+NN(0)-1.0)^2, LOSS_BC1 = (NN(1))^2, LOSS_2(v) = LOSS_weak(v)+LOSS_BC0+LOSS_BC1. The argument to the loss functions LOSS_strong and LOSS_weak is now the test function v. In this paper, we select cubic B-splines as the test functions during the training process. §.§ Numerical results In this section, we summarize our numerical experiments performed for the one-dimensional advection-dominated diffusion problem with the PINN and VPINN methods. §.§.§ PINN and VPINN on uniform mesh The first numerical experiment concerns the PINN method with points x selected for the training with a uniform distribution, and the VPINN method with cubic B-splines spanning uniformly distributed intervals over these points. We employ 4 layers with 20 neurons per layer, the hyperbolic tangent as the activation function, and the learning rate η = 0.00125. We use the Adam optimizer <cit.>. The summary of the solution plots is presented in Figure <ref>. The convergence of the method is summarized in Figure <ref>.
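Before turning to the figure-by-figure discussion, we give a minimal PyTorch sketch of the PINN collocation losses defined above (our own illustration; the network depth and width, the number of collocation points, and the epoch count are placeholders loosely based on the reported setup, not the exact configuration).

```python
# Minimal PyTorch sketch of the 1D PINN losses (illustrative configuration).
import torch

eps = 0.001
net = torch.nn.Sequential(
    torch.nn.Linear(1, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)

def derivatives(x):
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    return u, du, d2u

def loss(x_interior):
    u, du, d2u = derivatives(x_interior)
    loss_pde = ((-eps * d2u + du) ** 2).mean()          # residual of -eps*u'' + u' = 0
    u0, du0, _ = derivatives(torch.zeros(1, 1))
    loss_bc0 = (-eps * du0 + u0 - 1.0).pow(2).mean()    # -eps*u'(0) + u(0) = 1
    u1, _, _ = derivatives(torch.ones(1, 1))
    loss_bc1 = u1.pow(2).mean()                         # u(1) = 0
    return loss_pde + loss_bc0 + loss_bc1

opt = torch.optim.Adam(net.parameters(), lr=1.25e-3)
x_train = torch.rand(100, 1)                            # collocation points in (0, 1)
for epoch in range(150_000):                            # placeholder epoch budget
    opt.zero_grad()
    l = loss(x_train)
    l.backward()
    opt.step()
```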
There are the following plots on these Figures: * basic loss corresponding to PINN method * strong loss corresponding to VPINN method using strong formulation multiplied by the test functions * weak loss corresponding to VPINN method using weak formulation multiplied by the test functions and integrated by parts * strong and weak loss corresponding to VPINN method using both strong and weak formulations The rows correspond to different values of ε=0.1,0.01,0.001, and columns correspond to a different number of points (or intervals), X=100,1000. We can read from these Figures that if PINN and VPINN were using a uniform distribution of points, then it is possible to solve the problem for ϵ=0.01 using 100 points. We can conclude that for 100 points, the VPINN method with strong form loss function and the PINN method provide correct solutions for ε=0.01. There are 150,000 epochs, and the total training time is 1709 seconds. It is also not possible to solve the problem on uniform mesh for ϵ=0.001. §.§.§ PINN and VPINN on adaptive mesh The second numerical experiment concerns the PINN method with points x selected for the training based on the adapted mesh and with cubic B-splines selected for the VPINN span on the adapted mesh. The mesh has been defined as x_0=0, x_1=0.5, x_i=x_i-1+x_i-1+x_i-2/2, up to the point where 1-x_i<ϵ and then we put equally distributed remainning points between 1-ϵ and 1. The summary of the solution plots is presented in Figure <ref>. The convergence of the method is summarized in Figure <ref>. There are the following plots on these Figures: * basic loss corresponding to PINN method * strong loss corresponding to VPINN method using strong formulation multiplied by the test functions * weak loss corresponding to VPINN method using weak formulation multiplied by the test functions and integrated by parts * strong and weak loss corresponding to VPINN method using both strong and weak formulations The rows correspond to different values of ε=0.1,0.01,0.001, and columns correspond to different numbers of points (or intervales), X=100,1000. We can read from these Figures that if using adapted mesh, it is possible to solve the problems with arbitrary ϵ=0.1,0.01 and 0.001 using 100 or more points. We can conclude that for 100 adaptively distributed points, all PINN and VPINN methods provide the correct solution for ε=0.001. There are 40,000 epochs, and the average training time is 514 seconds. § GALERKIN METHOD FOR TWO-DIMENSIONAL ADVECTION-DOMINATED DIFFUSION MODEL PROBLEM This section compares the application of Petrov-Galerkin and neural-network-based methods for the solution of two-dimensional advection-dominated diffusion problems. We focus our attention on the Eriksson-Johnson model problem introduced in <cit.>. §.§ Eriksson-Johnson model problem Given Ω=(0,1)^2, β=(β_x,β_y)=(1,0), we seek the solution of the advection-diffusion problem β_x∂ u/∂ x+β_y∂ u/∂ x-ϵ(∂^2 u/∂ x^2+ ∂^2 u/∂ y^2)=0 with Dirichlet boundary conditions u(x,y)=0 for x∈(0,1),y∈{0,1} u(x.y)=g(x,y)=sin(Π y) for x=0 The problem is driven by the inflow Dirichlet boundary condition. It develops a boundary layer of width ϵ at the outflow x = 1. §.§ Eriksson-Johnson problem weak form for residual minimization method Following <cit.>, we develop the weak formulation for the Eriksson-Johnson problem, with weak enforcement of the Dirichlet boundary condition. 
b(u,v) = β_x(∂ u/∂ x,v)_Ω+β_y(∂ u/∂ y,v)_Ω+ϵ( ∂ u/∂ x, ∂ v/∂ x)_Ω +ϵ( ∂ u /∂ y, ∂ v/∂ y)_Ω -(ϵ∂ u/∂ xn_x,v)_Γ -(ϵ∂ u/∂ yn_y,v)_Γ -(u,ϵ∇ v · n)_Γ +(u,β· n v)_Γ-∑_K(u,3p^2 ϵ/h_K v)_Γ|_K where n=(n_x,n_y) is the versor normal to Γ, and h_K is element diameter l(v)=-(g,ϵ∇ v · n)_Γ +(g,β· n v)_Γ-(g,3p^2 ϵ/h v)_Γ where the gray and red represents the penalty terms, while the brown terms result from the integration by parts. In our Erikkson-Johnson problem, we seek a solution in space U = V = H^1(Ω). The inner product in V is defined as (u,v)_V=(u,v)_L_2+(∂ u/∂ x,∂ v/∂ x)_L_2 +(∂ u/∂ y,∂ v/∂ y)_L_2 Keeping in mind our definitions of the bilinear form (<ref>), right-hand-side (<ref>), and the inner product (<ref>), we solve the Eriksson-Johnson problem using the residual minimization method: Find (r_m,u_h)_V_m× U_h such as (r_m,v_m)_V_m - ( ∂ u_h/∂ x, v_m) - ϵ(∂ u_h/∂ x, ∂ v_m/∂ x +∂ u_h/∂ y, ∂ v_m/∂ y) - -(ϵ∂ u_h/∂ xn_x,v_m)_Γ -(ϵ∂ u_h/∂ yn_y,v_m)_Γ - -(u_h,ϵ∇ v_m · n)_Γ +(u_h,β· n v_m)_Γ-∑_K(u_h,3p^2 ϵ/h_K v_m)_Γ|_K = -(g,ϵ∇ v_m · n)_Γ +(g,β· n v_m)_Γ-(g,3p^2 ϵ/h v_m)_Γ ∀ v_m ∈ V_h ( ∂ w_h/∂ x, r_m) + ϵ( ∂ w_h/∂ x, ∂ r_m/∂ x +∂ w_h/∂ y, ∂ r_m/∂ y) -(ϵ∂ w_h/∂ xn_x,r_m)_Γ -(ϵ∂ w_h/∂ yn_y,r_m)_Γ - -(w_h,ϵ∇ r_m · n)_Γ +(w_h,β· n r_m)_Γ-∑_K(w_h,3p^2 ϵ/h_K r_m)_Γ|_K = 0 ∀ w_h ∈ U_h where (r_m,v_m)_V_m=(r_m,v_m)+(∂ r_m/∂ x,∂ v_m/∂ x)+(∂ r_m/∂ y,∂ v_m/∂ y) is the H^1 norm induced inner product. §.§ Eriksson-Johnson problem weak form for Streamline Upwind Petrov-Galerkin method The alternative stabilization technique is the SUPG method <cit.>. In this method, we modify the weak form in the following way b(u_h,v_h) + ∑_K (R(u_h),τβ·∇ v_h)_K=l(v_h) +∑_K (f,τβ·∇ v_h)_K ∀ v∈ V where R(u_h)=β·∇ u_h +ϵΔ u_h, and τ^-1= β·(1/h^x_K,1/h^y_K) + 3p^2ϵ1/h^x_K^2+h^y_K^2, where ϵ stands for the diffusion term, and β = (β_x,β_y) for the convection term, and h^x_K and h^y_K are horizontal and vertical dimensions of an element K. Thus, we have b_SUPG(u_h,v_h)=l_SUPG(v_h) ∀ v_h∈ V_h b_SUPG(u_h,v_h)= (∂ u_h/∂ x,v)+ϵ( ∂ u_h/∂ x, ∂ v_h/∂ x) +ϵ( ∂ u_h /∂ y, ∂ v_h/∂ y) -(ϵ∂ u_h/∂ xn_x,v_h)_Γ -(ϵ∂ u_h/∂ yn_y,v_h)_Γ -(u_h,ϵ∇ v_h · n)_Γ -(u_h,β· n v_h)_Γ-(u_h,3p^2 ϵ/h v_h)_Γ +(∂ u_h/∂ x+ϵΔ u_h, (1/h_x + 3ϵp^2/h^x_K^2+h^y_K^2)^-1∂ v_h/∂ x) l_SUPG(v_h) =(f,v_h)-(g,ϵ∇ v_h · n)_Γ +(g,β· n v_h)_Γ^--(g,3p^2 ϵ/h_K v_h)_Γ+(f,(1/h_x + 3ϵp^2/h^x_K^2+h^y_K^2)^-1∂ v_h/∂ x). §.§ Eriksson-Johnson problem on adapted mesh To solve the Eriksson-Johnson problem with the finite element method, we need to apply a special stabilization method. We select two methods, the Streamline Upwind Petrov-Galerkin method (SUPG) <cit.> and the residual minimization method <cit.>. We select ϵ=10^-3. Both methods require adapted mesh. The sequence of solutions from the SUPG method on the adapted mesh is presented in Figure <ref>. They approximate the solution quite well from the very beginning, but they provide a smooth solution (they do not approximate the stiff gradient well). The sequence of solutions obtained from the residual minimization method is presented in Figure <ref>. First, they deliver some oscillations, and the SUPG solution is much better on the coarse mesh, but once the mesh recovers the boundary layer, the resulting solution from the Petrov-Galerkin method delivers a good approximation. § NEURAL NETWORK APPROACH FOR TWO-DIMENSIONAL ADVECTION-DOMINATED DIFFUSION MODEL PROBLEM §.§ Physics Informed Neural Networks for Eriksson-Johnson problem We now introduce the PINN formulation of the Eriksson-Johnson problem. 
Starting from the strong form: Given Ω=(0,1)^2, β=(1,0)^T, we seek Ω∋ (x,y) → u(x,y) such that ∂ u/∂ x-ϵ(∂^2 u/∂ x^2+ ∂^2 u/∂ y^2)=0 with Dirichlet boundary conditions u=0 for x∈(0,1),y∈{0,1} and u=sin(Π y) for x=0, we introduce the neural network as the solution of the PDE u(x,y)=NN(x,y)=A_n σ(A_n-1σ(...σ(A_1[ x; y ] +B_1)...)+B_n-1)+B_n We define the following loss functions for the strong form of the PDE, as well as for the Dirichlet boundary conditions: LOSS_PDE(x,y) = (∂NN(x,y)/∂ x-ϵ∂^2NN(x,y)/∂ x^2-ϵ∂^2NN(x,y)/∂ y^2)^2, LOSS_BC0y(0,y) = ( NN(0,y)-sin(Π y))^2, LOSS_BC1y(1,y) = ( NN(1,y))^2, LOSS_BCx0(x,0) = ( NN(x,0))^2, LOSS_BCx1(x,1) = (NN(x,1))^2, LOSS(x,y) = LOSS_PDE(x,y)+LOSS_BC0y(0,y)+LOSS_BC1y(1,y)+ +LOSS_BCx0(x,0)+LOSS_BCx1(x,1) The argument for the loss function is the points (x,y)∈ (0,1)^2, selected during the training process. §.§ Variational Physics Informed Neural Networks for Eriksson-Johnson problem We start from the weak formulation: Find u∈ H^1([0,1]^2): b(u,v)=l(v) b(u,v)=∫_[0,1]^2ϵ∂u/∂ x∂v/∂ xdxdy+ ∫_[0,1]^2ϵ∂u/∂ y∂v/∂ ydxdy+ ∫_[0,1]^2∂u/∂ x vdxdy l(v)=0 For test functions we take cubic B-splines such as B_ij;3(x,y)=B^x_i,3(x)B^y_j,3(y), i=1,...,N_x, j=1,...,N_y on ∂Ω. Now, the neural network represents the PDE solution u(x,y)=NN(x,y)=A^n σ(A^n-1σ(...σ(A^1[ x; y ]+b^1)...+b^n-1)+b^n where A^k_ij are the trainable neural network weights from layer k, and b^k_i are the trainable coefficients of neural network biases. We define the first loss function by averaging the strong form with test functions LOSS_strong= (∫_[0,1]^2( ∂NN(x,y)/∂ x-ϵ(∂^2 NN(x,y)/∂ x^2+ ∂^2 NN(x,y)/∂ y^2)) B^xB^y dxdy )^2, where the test functions B^xB^y are cubic B-splines. We also define the second loss based on the weak formulation: LOSS_weak(γ) = ( b(NN(x,y),γ· B^xB^y)-l(NN(x,y), B^xB^y))^2. We also introduce the loss functions for the Dirichlet boundary conditions: LOSS_BC(x,y)=( NN(0,y)-sin(Π y))^2+ ( NN(1,y))^2+ ( NN(x,0))^2+(NN(x,1))^2. We use the Adam optimization algorithm <cit.>. §.§ Numerical results §.§.§ PINN and VPINN on uniformly distributed points The first numerical experiment concerns the PINN method with points (x,y) selected for the training as a uniform distribution, as well as the VPINN method with cubic B-splines employed as test functions span over the uniformly distributed points. We employ 4 layers with 20 neurons per layer, hyperbolic tangent as the activation function, and the learning rate is defined as η = 0.00125. We use Adam optimizer <cit.>. We executed the experiments for different values of ε=0.1,0.01,0.001 with different numbers of points (or intervales), X=50× 50,80× 80. PINN and VPINN with different loss functions do not provide correct numerical results when using uniform distribution of points or test functions. §.§.§ PINN and VPINN on adapted mesh The second numerical experiment concerns the PINN method with points x selected for the training based on the adapted mesh and with cubic B-splines selected for the VPINN span on the adapted mesh. The mesh has been defined as x_0=0, x_1=0.5, x_i=x_i-1+x_i-1+x_i-2/2, up to the point where 1-x_i<ϵ and then we put equally distributed remainning points between 1-ϵ and 1. We employ 4 layers with 20 neurons per layer, hyperbolic tangent as the activation function, and the learning rate is defined as η = 0.00125. The PINN results are summarized in Figure <ref>. The rows correspond to different values of ε=0.1,0.01,0.001 and columns correspond to different number of points (or intervales), X=100,1000. 
We can read from these Figures that using adapted mesh, it is possible to solve the Eriksson-Johnson problem using the PINN method with arbitrary ϵ=0.1,0.01, and 0.001 using 50 or more points. We can conclude that for 50 adaptively distributed points, the PINN method provides a good solution for ε=0.01. There are 40,000 epochs, and the total training time is 1019 seconds. We can also conclude that for 50 adaptively distributed points, the PINN method provides a good solution for ε=0.001. There are 40,000 epochs, and the total training time is 1009 seconds. The VPINN results with strong loss are summarized in Figure <ref>. The VPINN results with weak loss are summarized in Figure <ref>. The VPINN results with weak and strong loss functions together are summarized in Figure <ref>. The rows correspond to different values of ε=0.1,0.01,0.001, and columns correspond to a different number of points (or intervales), X=100,1000. We can read from these Figures that if using adapted mesh, it is possible to solve the Eriksson-Johnson problem using the VPINN method with arbitrary loss functions and arbitrary ϵ=0.1,0.01, and 0.001 using 50 or more points. Tables <ref> and <ref> illustrate the numerical accuracy with respect to the exact solution u_exact(x, y) = (e^(r_1 (x-1)) - e^(r_2 (x-1)))/ (e^(-r_1) - e^(-r_2)) sin(Π y), r_1 = (1 + √((1 + 4ϵ^2Π^2)))/ (2ϵ), r_2 = (1 - √((1 + 4ϵ^2Π^2)))/ (2ϵ). § CONCLUSIONS The neural network can be successfully applied for the solution of challenging advection-dominated diffusion problems. We focused on Physics Informed Neural Networks (PINN) using the strong residual as the loss function, as well as on the Variational Physics Informed Neural Networks (VPINN) using strong and weak residuals (multiplied by test functions and possibly integrated by parts). The training algorithm selects points to evaluate the residual loss for PINN during the training. It also selects test functions to span over the selected mesh. The points and test functions are selected to evaluate the residuals for VPINN during the training process. When using a uniform distribution of 100 points or a uniform mesh of 100 intervals to span the test functions, it is only possible to train VPINN with strong residual to find the solution of the model one-dimensional problem with ϵ=0.01. Using 1000 uniform points, or 1000 elements mesh, does not allow finding the correct solution for ϵ=0.001, even using 150,000 epochs. For the two-dimensional Eriksson-Johnson problem, the uniform distribution of points does not work at all. On the other hand, when using adapted mesh, both PINN and VPINN provide the correct solution for the one-dimensional model problem and two-dimensional Eriksson-Johnson problem. Thie higher-order finite element methods behave similarly. The adaptive mesh is a must to solve the two-dimensional Eriksson-Johnson problem using stabilized weak formulations. From the practical point of view, finite element method solvers require the development of stabilized formulations, like Streamline Upwind Petrov-Galerkin, or residual minimization methods, together with the weak imposition of the boundary conditions <cit.>. The (Variational) Physics Informed Neural Networks employ strong or weak residual and strong enforcement of the boundary conditions. They work fine with the diffusion coefficient ϵ=0.001. The Adam optimization algorithm finds a good-quality solution after 40,000 epochs. The training time takes around 1000-2000 seconds on a modern laptop. 
For smaller values of diffusion, e.g., ϵ≤ 0.0001, better loss functions related to the Riesz representative of the residual may be required, which will be a subject of our future work. Artificial intelligence methods based on PINN/VPINN technology, are attractive alternatives for the solution of challenging engineering problems. One disadvantage of the method is the need for pre-existing knowledge about the location of the boundary layer to construct an efficient adaptive mesh. Our future work will include the development of automatic adaptive algorithms for training PINN/VPINN methods. § ACKNOWLEDGMENTS Research project partly supported by the program "Excellence Initiative – research university" for the AGH University of Science and Technology. 99 kingma2017adamKingma, D. & Ba, J. Adam: A Method for Stochastic Optimization. (2017) LOS2021200Łoś, M., Muga, I., Muñoz-Matute, J. & Paszyński, M. Isogeometric residual minimization (iGRM) for non-stationary Stokes and Navier–Stokes problems. Computers And Mathematics With Applications. 95 pp. 200-214 (2021), https://www.sciencedirect.com/science/article/pii/S0898122120304417, Recent Advances in Least-Squares and Discontinuous Petrov–Galerkin Finite Element Methods PollutionPodsiadło, K., Serra, A., Paszyńska, A., Montenegro, R., Henriksen, I., Paszyński, M. & Pingali, K. Parallel graph-grammar-based algorithm for the longest-edge refinement of triangular meshes and the pollution simulations in Lesser Poland area. Engineering With Computers ., 3857-3880 (2021) BBDemkowicz, L. Babuška == Brezzi ??. ICES-REPORT. (2006) RMChan, J. & Evans, J. A Minimal-Residual Finite Element Method for the Convection–Diffusion Equations. ICES-REPORT. (2013) a3855ab1-24a7-34e2-9b85-f7a1af6ed2ffEriksson, K. & Johnson, C. Adaptive Finite Element Methods for Parabolic Problems I: A Linear Model Problem. SIAM Journal On Numerical Analysis. 28, 43-77 (1991), http://www.jstor.org/stable/2157933 CALO2021113214Calo, V., Łoś, M., Deng, Q., Muga, I. & Paszyński, M. Isogeometric Residual Minimization Method (iGRM) with direction splitting preconditioner for stationary advection-dominated diffusion problems. Computer Methods In Applied Mechanics And Engineering. 373 pp. 113214 (2021), https://www.sciencedirect.com/science/article/pii/S0045782520303996 HUGHES198797Hughes, T., Franca, L. & Mallet, M. A new finite element formulation for computational fluid dynamics: VI. Convergence analysis of the generalized SUPG formulation for linear time-dependent multidimensional advective-diffusive systems. Computer Methods In Applied Mechanics And Engineering. 63, 97-112 (1987), https://www.sciencedirect.com/science/article/pii/0045782587901253 ErnDi Pietro, D. & Ern, A. Mathematical Aspects of Discontinuous Galerkin Methods. (Springer, Paris,2011) https://doi.org/10.1002/num.20640Demkowicz, L. & Gopalakrishnan, J. A class of discontinuous Petrov–Galerkin methods. II. Optimal test functions. Numerical Methods For Partial Differential Equations. 27, 70-105 (2011), https://onlinelibrary.wiley.com/doi/abs/10.1002/num.20640 IsogeometricAnalysisProposalHughes, T., Cottrell, J. & Bazilevs, Y. Isogeometric analysis: CAD, finite elements, NURBS, exact geometry and mesh refinement. Computer Methods In Applied Mechanics And Engineering. (2005) UniversalApproximatorsHornik, K., Stinchcombe, M. & White, H. Multilayer feedforward networks are universal approximators. Neural Networks. (1989) NODEChen, R., Rubanova, Y., Bettencourt, J. & Duvenaud, D. Neural Ordinary Differential Equations. 
Advances In Neural Information Processing Systems 31. (2018) IGAOvwCompImplNguyen, V., Anitescu, C., Bordas, S. & Rabczuk, T. Isogeometric analysis: An overview and computer implementation aspects. Mathematics And Computers In Simulation. 117 pp. 89 - 116 (2015), http://www.sciencedirect.com/science/article/pii/S0378475415001214 IGATsplinesBazilevs, Y., Calo, V., Cottrell, J., Evans, J., Hughes, T., Lipton, S., Scott, M. & Sederberg, T. Isogeometric analysis using T-splines. Computer Methods In Applied Mechanics And Engineering. 199, 229 - 263 (2010) DiffEqDNNMichoski, C., Milosavljević, M., Oliver, T. & Hatch, D. Solving differential equations using deep neural networks. Neurocomputing. 399 pp. 193-212 (2020), https://www.sciencedirect.com/science/article/pii/S0925231220301909 IzogeometrycznaMESPaszyński, M. Classical and isogeometric finite element method. (AGH University of Science,2020), https://epodreczniki.open.agh.edu.pl/handbook/1088/module/1173/reader BREVIS2021186Brevis, I., Muga, I. & Van der Zee, K. A machine-learning minimal-residual (ML-MRes) framework for goal-oriented finite element discretizations. Computers & Mathematics With Applications. 95 pp. 186-199 (2021), https://www.sciencedirect.com/science/article/pii/S0898122120303199, Recent Advances in Least-Squares and Discontinuous Petrov–Galerkin Finite Element Methods HPPaszyński, M., Grzeszczuk, R., Pardo, D. & Demkowicz, L. Deep Learning Driven Self-adaptive Hp Finite Element Method. Lecture Notes In Computer Science., 114-121 (2021) c1Raissi, M., Perdikaris, P. & G.E., K. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal Of Computational Physics., 686-707 (2019) c3Kharazmi, E., Zhang, Z. & Karniadakis, G. Variational Physics-Informed Neural Networks For Solving Partial Differential Equations. Arxiv., 1-24 (2019) c4Kharazmi, E., Zhang, Z. & Karniadakis, G. hp-VPINNs: Variational physics-informed neural networks with domain decomposition. Computer Methods In Applied Mechanics And Engineering., 113547 (2021) c5Shin, Y., Zhang, Z. & Karniadakis, G. Error estimates of residual minimization using neural networks for linear PDEs. Arxiv., 1-22 (2020) c7Qin, S., Li, M. & Xu, S. RAR-PINN algorithm for the data-driven vector-soliton solutions and parameter discovery of coupled nonlinear equations. Arxiv., 1-22 (2022)
http://arxiv.org/abs/2307.07246v2
20230714093822
Knowledge Boosting: Rethinking Medical Contrastive Vision-Language Pre-Training
[ "Xiaofei Chen", "Yuting He", "Cheng Xue", "Rongjun Ge", "Shuo Li", "Guanyu Yang" ]
cs.CV
[ "cs.CV", "cs.LG" ]
KoBo: Knowledge-Boosting Contrastive Vision-Language Pre-training Chen et al. Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education [email protected] Nanjing University of Aeronautics and Astronautics Dept. of Biomedical Engineering, Case Western Reserve University, OH, USA Joint International Research Laboratory of Medical Information Processing, Southeast University, Nanjing 210096, China Centre de Recherche en Information Biomédicale Sino-Français (CRIBs) Knowledge Boosting: Rethinking Medical Contrastive Vision-Language Pre-Training Xiaofei Chen1, Yuting He1, Cheng Xue1, Rongjun Ge2, Shuo Li3 Guanyu Yang1,4,5() August 12, 2023 ===================================================================================== The foundation models based on pre-training technology have significantly advanced artificial intelligence from theoretical to practical applications. These models have facilitated the feasibility of computer-aided diagnosis for widespread use. Medical contrastive vision-language pre-training, which does not require human annotations, is an effective approach for guiding representation learning using description information in diagnostic reports. However, the effectiveness of pre-training is limited by the large-scale semantic overlap and shifting problems in medical field. To address these issues, we propose the Knowledge-Boosting Contrastive Vision-Language Pre-training framework (KoBo), which integrates clinical knowledge into the learning of vision-language semantic consistency. The framework uses an unbiased, open-set sample-wise knowledge representation to measure negative sample noise and supplement the correspondence between vision-language mutual information and clinical knowledge. Extensive experiments validate the effect of our framework on eight tasks including classification, segmentation, retrieval, and semantic relatedness, achieving comparable or better performance with the zero-shot or few-shot settings. Our code is open on <https://github.com/ChenXiaoFei-CS/KoBo>. § INTRODUCTION Foundation models have become a significant milestone in artificial intelligence, from theoretical research to practical applications <cit.>, like world-impacting large language model ChatGPT <cit.> and art-history-defining large generative model DALL-E <cit.>. In medical image analysis, foundation models are showing promising future, and pre-training technologies <cit.>, as the cornerstone of foundation models, facilitated feasibility of computer-aided diagnosis for widespread use. Medical contrastive vision-language pre-training <cit.> has shown great superiority in medical image analysis, because it utilizes easy-accessible expert interpretation from reports to precisely guide the understanding of image semantics. Therefore, contrastive vision-language pre-training will break through the bottleneck of time-consuming and expensive expert annotation <cit.> and difficulty in learning fine-grained clinical features with pure-image self-supervised methods <cit.>. It will improve data efficiency, and achieve comparable or better performance when transferred with the zero-shot or few-shot setting, demonstrating the potential of promoting the ecology of medical artificial intelligence. However, semantic overlap and semantic shifting are two significant challenges in medical vision-language contrastive learning (Fig.<ref>). 
(a) Semantic Overlap Problem: There is overlapping semantics between negative samples which should be semantic-distinct, e.g. two medical images sharing the same disease are contrasted which brings noise <cit.>. Once directly learning, cross-modal representations of the same disease are falsely pulled apart, making the model unable to capture the disease-corresponding image feature. (b) Semantic Shifting Problem: Radiologists have writing preferences, e.g. biased for their own familiar concepts and observation view towards similar visual features, and inclined for negation expression towards opposite visual features. Distinct concepts describing the same image are morphologically dissimilar for text encoder, while the negation expression of concepts is morphologically similar <cit.>. Once lack of concept correlation and negation identification, representations with similar semantics are falsely pushed apart and those with opposite semantics are falsely pushed together, interfering with the learning of significant representation<cit.>. Rethinking the existing methods and challenges of medical contrastive vision-language pre-training <cit.>, the lack of clinical knowledge constraints in dual-free-encoding contrastive learning structure is the key problem. Existing methods utilize sample-wise differences to learn mutual information between modalities, improving the representation quality based on the correspondence of learned mutual information and clinical knowledge. However, semantic overlap reduces the learning efficiency of mutual information with the noisy difference, and the mentioned correspondence is vulnerable to semantic shifting. Therefore, if we are able to embed an unbiased, comprehensive representation as knowledge boosting, it will reduce the negative noise and supplement the lacking correspondence. It motivates us to measure the noise with similarities between knowledge representation, and fuse the correspondence between knowledge and modality. In this paper, we propose a novel knowledge-boosting medical contrastive vision-language pre-training framework (KoBo). Our contributions are as followed. 1) Our KoBo pre-trains a powerful image encoder including visual information corresponding with the disease described in texts, where knowledge is embedded in our paradigm (Fig.<ref>) to boost the learning of vision-language consistency. 2) We propose Knowledge Semantic Enhancement (KSE) module to reduce the negative sample noise with the similarity between open-set sample-wise knowledge embeddings. 3) We propose Knowledge Semantic Guidance (KSG) module to adjust the semantic shifting during pre-training, fusing the modality feature with unbiased knowledge embeddings for supplementing the correspondence between modality mutual information and clinical knowledge. § METHODOLOGY Our Knowledge-Boosting Contrastive Vision-Language Pre-training framework (Fig.<ref>) boosts vision-language learning with additional clinical knowledge. It contains two modules: KSE for reducing the negative effect of semantic overlap, and KSG for adjusting semantic shifting, aimed at learning effective representation by maximizing semantic consistency between paired image and text features. §.§ Framework Formulation In the framework, a powerful image encoder Enc^I and text encoder Enc^T is pre-trained, alongside a graph encoder Enc^G. 
Given a pair of medical image and diagnostic report {I_i, T_i^Report}, I_i∈ℝ^H × W × C, a sentence T_i^Sent is randomly selected from T_i^Report as a caption comprised of several tokens {w_1, w_2,...,w_N_L}. Enc^I outputs global feature z_i^I,G and local feature z_i^I,L for N_I sub-regions, which is from the intermediate feature map. T_i^Sent is fed into Enc^T, obtaining global sentence feature z_i^T,G, and local token feature z_i^T,L. Distinct projectors are applied to map features into embeddings with lower semantic dim D_S, finally getting global and local image embeddings v_i∈ℝ^D_S, R_i={r_i1,r_i2,...,r_iN_I}∈ℝ^N_I× D_S, and text embedding t_i∈ℝ^D_S, L_i={l_i1,l_i2,...,l_iN_L}∈ℝ^N_L× D_S. Besides using reports and images as the input for our pre-training network, we also input an external knowledge graph to the whole framework for improving the correspondence of modality features and clinical knowledge. The knowledge refers to relations between clinical pathology concepts in the radiology domain in the format of triplet 𝒢={(c_h_k,r_k,c_t_k)}^N_G_k=1, such as UMLS <cit.>. Domain knowledge embedding for each concept E={e_s}^N_E_s=1∈ℝ^N_E× D_S is the output of Enc^G(𝒢). §.§ Knowledge Semantic Enhancement To relieve the semantic overlap problem, where negative sample noise harms the effective learning of vision-language mutual information, we propose a semantic enhancement module to identify the noise using sample-wise similarities. The similarity is estimated upon sample knowledge k_i, calculated from domain knowledge embedding E and concept set from texts with negation marker. Getting Sample knowledge: Firstly, we acquire a concept set that contains pathology concepts extracted from texts with Negbio 𝒩(·) <cit.>. The image-view concept set which involves the overall observation is from the whole report, while the text-view set only covers the chosen sentence. Secondly, the image and text sample knowledge, as an auxiliary semantic estimation, is selected from domain knowledge embedding E according to the corresponding concept set from the report and sentence respectively, if not considering the negation problem. Furthermore, considering the challenge that negation expression of concepts commonly exists in radiology reports, which has opposite semantics with similar morphology for text encoder (converging shifting), we randomly generate a No Finding embedding 𝒩ℱ and a variant of domain knowledge embedding E = {e_1, e_2, ..., e_N_E} of the same size as E with Xavier distribution. Upon the negation mark of concept, sample knowledge embedding k_i={k_i,s}^N_ES_s=1 is denoted below: k_i,s = e_i,s c_i,s∈𝒩(T_i), P(c_i,s) ≠ Neg ϵ·𝒩ℱ +(1-ϵ) e_i,s c_i,s∈𝒩(T_i), P(c_i,s) = Neg where P is the negation mark of concepts, and e_i,s, e_i,s is the corresponding position of c_i,s in E and E. ϵ tunes the variance of negative sample knowledge. k^Image_i,s and k^Text_i,s are k_i from the image-view and text-view concept set. Estimation of Similarities: The semantic similarity is calculated upon sample knowledge. For each image-text pair, a max-match strategy is adopted to match each two sample knowledge embedding with the most similar one for calculating cosine similarities. Sample-wise similarities are aggregated with averages. λ_ij^IT = 1/N_ES'∑^N_ES'_s=1max^N_ES_s'=1(k_i,s^Image)^Tk_j,s'^Text, λ_ij^TI = 1/N_ES∑^N_ES_s=1max^N_ES'_s'=1(k_i,s^Text)^Tk_j,s'^Image where N_ES is the number of concepts in T_i^Sent, while N_ES' is that in T_i^Report. 
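As a concrete, simplified sketch of the similarity computation above (not our exact implementation), the following function assumes padded knowledge tensors with boolean validity masks and returns the two sample-wise similarity matrices: each concept on one side is matched to its most similar counterpart on the other side, and the matched similarities are averaged.

import torch
import torch.nn.functional as F

def _max_match(sim, row_mask, col_mask):
    """sim: (B, B, S_row, S_col); masks are True where a concept slot is real."""
    neg_inf = torch.finfo(sim.dtype).min
    sim = sim.masked_fill(~col_mask[None, :, None, :], neg_inf)  # hide padding of sample j
    best = sim.max(dim=-1).values                                # best match per row concept
    best = best.masked_fill(~row_mask[:, None, :], 0.0)          # ignore padding of sample i
    return best.sum(-1) / row_mask.sum(-1, keepdim=True).clamp(min=1)

def knowledge_similarity(k_image, k_text, mask_image, mask_text):
    """
    k_image: (B, S, D) image-view sample knowledge (concepts of the whole report)
    k_text:  (B, T, D) text-view sample knowledge (concepts of the chosen sentence)
    Returns the (B, B) similarity matrices in both directions.
    """
    k_image = F.normalize(k_image, dim=-1)
    k_text = F.normalize(k_text, dim=-1)
    sim_it = torch.einsum("isd,jtd->ijst", k_image, k_text)  # image concepts of i vs text concepts of j
    sim_ti = torch.einsum("itd,jsd->ijts", k_text, k_image)  # text concepts of i vs image concepts of j
    lam_it = _max_match(sim_it, mask_image, mask_text)
    lam_ti = _max_match(sim_ti, mask_text, mask_image)
    return lam_it, lam_ti

if __name__ == "__main__":
    B, S, T, D = 4, 6, 3, 256
    lam_it, lam_ti = knowledge_similarity(
        torch.randn(B, S, D), torch.randn(B, T, D),
        torch.ones(B, S, dtype=torch.bool), torch.ones(B, T, dtype=torch.bool))
    print(lam_it.shape, lam_ti.shape)  # (4, 4) (4, 4)

These matrices are the λ^IT and λ^TI used in the loss below, with the diagonal entries later fixed to zero so that positive pairs keep their full weight.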
Knowledge Semantic Enhancement Loss: We utilize the sample-wise semantic similarity to estimate negative sample noise, placed in the sample weight of the contrastive loss <cit.>, where paired cross-modal embedding are pushed together and unpaired ones are pulled apart. The importance of estimated noisy negative samples is relatively smaller for a subtle pulling between cross-modal embeddings. The semantic enhancement loss is below: ℒ_SE = -1/N∑_i=1^N(logexp(v_i^Tt_i/τ_G)/∑_j=1^N (1-λ^IT_ij) exp(v_i^Tt_j/τ_G) + logexp(t_i^Tv_i/τ_G)/∑_j=1^N (1-λ^TI_ij) exp(t_i^Tv_j/τ_G)) where τ_G is the global temperature, and λ^IT, λ^TI is the sample similarity measurement. specifically, λ_i,i is fixed to zero to persist the positive sample weight. §.§ Knowledge Semantic Guidance In this section, we propose a semantic guidance module to solve the semantic shifting problem. Utilizing sample knowledge from Section <ref> which contains concept correlation and negation information, the adverse effects of both disperse and converging shifting are alleviated by fusing domain-sample knowledge with global-local modality embeddings. We design four contrast schemes: knowledge anchor guidance for adjusting disperse shifting, semantic knowledge refinement for filtering converging shifting, vision semantic response for consolidating knowledge fusion, and semantic bridge guidance for narrowing the modality gap. Knowledge Anchor Guidance: Disperse shifting will be adjusted if there are unbiased anchors in semantic space as priors to attract modality embeddings towards clinical semantics, and domain knowledge embedding does a good job. We define knowledge fused embeddings H_i^IK = ATTN(v_i, E, E) and H_i^TK = ATTN(t_i, E, E), and ATTN(Q,K,V) means the attention function <cit.>: ℒ_KAG = -1/N∑_i=1^N(log exp(H_i^IK· H_i^TK/τ_G)/∑_j=1^N exp(H_i^IK· H_j^TK/τ_G) + log exp(H_i^TK· H_i^IK/τ_G)/∑_j=1^N exp(H_i^TK· H_j^IK/τ_G)) where image-weighted and text-weighted knowledge is globally contrasted. Semantic Knowledge Refinement: Wrong-converging pairs have distinct intrinsic responses on sample knowledge from image and text. Hence, we propose to utilize sample knowledge to refine these falsely gathered dissimilar pairs. We define H_ij^SI = ATTN(k^Text_i, R_j, R_j) and H_ij^ST = ATTN(k^Text_i, L_j, L_j): ℒ_SKR = -1/N∑^N_i=1logexp(1/N_ES·τ_L∑^N_ES_k=1H_iik^SI· H_iik^ST)/∑^N_j=1exp(1/N_ES·τ_L∑^N_ES_k=1H_ijk^SI· H_ijk^ST) where local semantic-weighted image and text embeddings are contrasted. Vision Semantic Response: Instead of matching single token with image sub-regions in <cit.>, we propose to match the concept with sub-regions. As the concept is a more complete and atomic semantic unit, local response upon concept will better guide the representation learning with a fine-grained semantic match through an in-sample contrast. We define H_i^IS = ATTN(R_i, k^Text_i, k^Text_i), and the fusion of knowledge will be consolidated as below: ℒ_VSR = -1/N· N_I∑^N_i=1∑^N_I_k=1logexp(H_ik^IS· r_ik/τ_L)/∑^N_I_k'=1exp(H_ik^IS· r_ik'/τ_L) where there is an in-sample local contrast between H_i^IS and vision features. Semantic Bridge Guidance: We propose to narrow disperse shifting enlarged by the modality gap between vision and language. 
Specifically, the gap is bridged by the fusion of domain knowledge, which is more compatible with text: ℒ_SBG = -1/N∑^N_i=1(logexp(H_i^IK· t_i/τ_G)/∑^N_j=1exp(H_i^IK· t_j/τ_G)+logexp(t_i· H_i^IK/τ_G)/∑^N_j=1exp(t_i· H_j^IK/τ_G)) where the image-weighted domain knowledge is contrasted with text features between samples. Finally, ℒ_SG is aggregated from these four parts as below: ℒ_SG = λ_1ℒ_KAG + λ_2ℒ_SKR + λ_3ℒ_VSR + λ_4ℒ_SBG § EXPERIMENT Experiment Protocol: Pre-training is performed on MIMIC-CXR <cit.> following the pre-processing of <cit.>. The impression section of the reports and the frontal view of the images are selected to generate 203k image-report pairs. Five downstream datasets (CheXpert <cit.>, Covidx <cit.>, MIMIC-CXR, UMNSRS <cit.>, and SIIM <cit.>) are used for eight tasks. Semantic relatedness verifies the text understanding of radiology concepts, where a text embedding with certain prompts predicts the relatedness. A new semantic relatedness benchmark is generated from MIMIC-CXR, adding extra negation discrimination. CheXpert5X200 <cit.> (multi-classification) is derived from CheXpert, and CheXpert-labeller <cit.> generates retrieval labels in MIMIC-CXR. More details are in the appendix. For implementation, ResNet50 <cit.> and ViT <cit.> are the image encoders, and BioClinicalBERT <cit.> is the text encoder. CompGCN with LTE <cit.> is our graph encoder, and the domain knowledge contains 10,244 UMLS concepts that occur in MIMIC-CXR. Negbio <cit.> combined with the UMLS disambiguation tool <cit.> serves as 𝒩(·). Embeddings are projected to a dimension of 256. Pre-training uses a batch size of 100 and at most 50 epochs, implemented in PyTorch on two RTX3090 GPUs. The Adam optimizer with a learning rate of 5e-5 and a ReduceLR scheduler are used. τ_G is 0.07 and τ_L is 0.1. The λ weights in the KSG loss are all 0.25, while ϵ in the KSE loss is 0.1. Comparison Study: Table <ref> verifies our strong representation ability: we reach the state of the art in classification, segmentation, and semantic relatedness compared with existing vision-language pre-training methods, while our method is also in the top two for retrieval. In zero-shot classification, our KoBo outperforms MGCA and ConVIRT by 0.94% and 3.38% respectively, exceeding most methods even in their training setting. On CheXpert5X200, our framework is second only to MedCLIP, which shows superior performance on this dataset. In the three few-shot tasks, our KoBo has a clear lead. Ablation Study: As demonstrated in Fig.<ref>, we perform a module ablation and a data-amount ablation. (a) For the module ablation, both modules bring benefits in representation learning and are individually effective. When the KSG module is removed, our KoBo still extracts effective features related to pneumonia, with a subtle decrease of 0.51%. When KSE is removed, there is a reduction of 1.25% in accuracy. (b) For the data-amount ablation, KoBo shows better data robustness, with only a subtle decrease when the training data are reduced to 1%. KoBo also has superior transfer ability, achieving a clearly better AUC with 1% of the data than ImageNet pre-training achieves with all training data. Qualitative Analysis: In Fig.<ref>, our KoBo has learned fine-grained and effective image features with the fusion of knowledge modeling. The deepest region in the first image gathers on the top left side, showing an obvious expansion of the right lung. There is consistency between the expert annotation and our output logit.
The precise localization of the atelectasis region in the CAM of the second image and the clustering trend account for the improvement in zero-shot classification. § CONCLUSION In this paper, we propose the Knowledge-Boosting Contrastive Vision-Language Pre-training framework (KoBo). Sample and domain knowledge are used to differentiate noisy negative samples and to supplement the correspondence between modality and clinical knowledge. Our experiments on eight tasks verify the effectiveness of our framework. We hope that our work will encourage more research on knowledge-granularity alignment in medical vision-language learning. §.§.§ Acknowledgements This work was supported in part by the National Natural Science Foundation under grants (62171125, 61828101), the CAAI-Huawei MindSpore Open Fund, CANN (Compute Architecture for Neural Networks), the Ascend AI Processor, and the Big Data Computing Center of Southeast University.
http://arxiv.org/abs/2307.05034v2
20230711061807
Synthetic Dataset for Evaluating Complex Compositional Knowledge for Natural Language Inference
[ "Sushma Anand Akoju", "Robert Vacareanu", "Haris Riaz", "Eduardo Blanco", "Mihai Surdeanu" ]
cs.CL
[ "cs.CL" ]
Number Systems for Deep Neural Network Architectures: A Survey [ August 12, 2023 ============================================================== We introduce a synthetic dataset called Sentences Involving Complex Compositional Knowledge (SICCK) and a novel analysis that investigates the performance of Natural Language Inference (NLI) models to understand compositionality in logic. We produce 1,304 sentence pairs by modifying 15 examples from the SICK dataset <cit.>. To this end, we modify the original texts using a set of phrases – modifiers that correspond to universal quantifiers, existential quantifiers, negation, and other concept modifiers in Natural Logic (NL) <cit.>. We use these phrases to modify the subject, verb, and object parts of the premise and hypothesis. Lastly, we annotate these modified texts with the corresponding entailment labels following NL rules. We conduct a preliminary verification of how well the change in the structural and semantic composition is captured by neural NLI models, in both zero-shot and fine-tuned scenarios. We found that the performance of NLI models under the zero-shot setting is poor, especially for modified sentences with negation and existential quantifiers. After fine-tuning this dataset, we observe that models continue to perform poorly over negation, existential and universal modifiers. § INTRODUCTION Natural language inference (NLI) has made tremendous progress in recent years, both in terms of datasets, e.g., SNLI <cit.>, MultiNLI <cit.>, Adversarial NLI <cit.>, NLI_XY <cit.>, MonaLog <cit.>, and methods <cit.>. However, many of these directions lack explainability, a critical drawback that limits their applicability to critical domains such as medical, legal, or financial. In contrast, Natural Logic (NL) <cit.> provides the necessary explainability through explicit compositionality that is driven by several relations that serve as building blocks (Forward Entailment (FE), Reverse Entailment (RE), Negation, Cover, Alternation, Equivalence, and Independence) as well as rules to combine them, which model changes in monotonicity. In this work, we analyze how well transformer networks trained for NLI understand the atomic reasoning blocks defined in NL, and how well they can compose them to detect changes in monotonicity <cit.>. To this end, we create a dataset containing 1304 sentences by modifying 15 premise/hypothesis pairs from the SICK dataset <cit.>. The dataset is generated by modifying the premise and hypothesis sentences selected, as follows: * We append a series of modifiers to subject/verb/objects in the hypothesis/premise pairs. These modifiers include universal quantifiers (e.g., every, always), existential quantifiers (e.g., some, at least), negation, and adverbs/adjectives (e.g., happy, sad). Table <ref> lists the complete set of modifiers used. * We store the adjusted entailment label for each modifier pair to understand the shift in meaning from word-level changes within sentential contexts. More formally, we used the seven entailment relations as defined in <cit.>. These labels were generated manually for each example by following monotonicity calculus and natural logic. For example, consider the premise: an old man is sitting in a field and the hypothesis: a man is sitting in a field, with the original SICK label: Forward Entailment. 
After adding the universal quantifier every to the aforementioned SICK example, the modified premise: an old man is sitting in a field and the original hypothesis: every man is sitting in a field are annotated with the adjusted label: Reverse Entailment. Using this dataset, we analyzed the capacity of three different NLI methods to correctly capture the change in entailment given the modified texts. In particular, the contributions of this work are as follows: * We propose a mechanism to generate synthetic data for NLI that enforces compositionality in reasoning. Following this mechanism, we produce 1,304 examples from 15 SICK <cit.> premise, hypothesis sentence pairs by modifying the sentences for subject, verb, and object respectively with a series of modifiers. The resulting dataset is freely available at <https://github.com/clulab/releases/tree/sushma/acl2023-nlrse-sicck>. * We define specific annotation guidelines based on monotonicity calculus and natural logic <cit.> for annotating the modified premise and hypothesis sentences in the dataset above. The resulting labels are included in the dataset. * We conducted an analysis to understand how well these structural and compositional changes are captured by neural NLI models, in both zero-shot and fine-tuned scenarios. Our analysis indicates that NLI models perform poorly over negation and several types of quantifiers. Fine-tuned NLI models do not show significant improvement in learning about compositional changes when compared to their zero-shot equivalent models over our dataset. This suggests that compositionality in reasoning remains a challenge for neural models of language. § RELATED WORK Natural Logic (NL) is a formal reasoning approach that makes use of syntactic structure and semantic properties of lexical items to understand compositionally <cit.>. Logical reasoning is a known challenge for neural NLI models <cit.>. In particular, NLI models struggle to understand quantifiers, which is highlighted by the fact that these models do not generalize well over quantifier-driven inference tasks <cit.>. The monotonicity calculus over quantifiers with token-level polarity has been explored using the CCG parser over the SICK dataset to generate a synthetic dataset that considers compositional data augmentation <cit.> and monotonicity calculus <cit.>. Other recent research focused on language structures to highlight the importance of compositionality, i.e., the premise and hypothesis differ only in the order of the words, or the presence of antonyms, synonyms, or negation <cit.>. Having such data augmentation can help move closer to the compositional encoding of the language <cit.>. Our work extends this direction: our dataset captures both phrasal changes (e.g., synonyms, hypernyms), which we inherit from the SICK dataset <cit.>, as well as multiple types of modifiers that are critical for NLI such as universal, existential, negation, and adjectives/adverbs. The FraCas test suite <cit.> contains 346 examples that explore aspects of natural logic applied to NLI <cit.>. The HELP dataset <cit.> modifies phrases in premise/hypothesis sentences based on monotonicity reasoning from combinatorial categorical grammar <cit.> and semantic tagging <cit.>. As mentioned above, our work is complementary to such datasets, as we cover other types of text modifications. The MED dataset <cit.> is another manually-labeled dataset where hypotheses were also modified by the human labelers given the monotonicity information for the premises. 
Similarly, we manually labeled NLI information, but our work focuses mainly on compositional information in a sentential context. Enhancing the dataset with data augmentation is another recent method to test the generalizability of NLI models <cit.>. Lexical entailment acquired from the distributional behavior of word pairs <cit.> led to the subsequent work of <cit.>, who produced a 3-way classification task for NLI dataset that serves as a benchmark for evaluating natural language understanding. Using Natural Logic as a means to learn and reason about the semantic and lexical relations is a common method used to improve the reasoning capabilities of the NLI models <cit.>. The NLI_XY dataset <cit.> conducts structural investigation over the transformer-based NLI models. In particular, the authors investigate how monotonicity (upwards or downwards) changes when the premises and hypotheses are modified through the insertion of hypernym/hyponym phrases. This work is complementary to ours: while they focus on monotonicity in lexicalization (e.g., changing from a hypernym to a hyponym), we focus on changes in monotonicity due to explicit modifiers applied on top of such lexical modifications. The MonaLog system <cit.> introduces a simple yet explainable NLI method that relies on a simplified Natural Logic implementation. The proposed method operates by implementing monotonicity calculus over CCG syntactic trees using “a small inventory of monotonicity facts about quantifiers, lexical items and token-level polarity.” Despite its simplicity, the authors report excellent performance on the SICK dataset. More closely related to our work, they use MonaLog to generate additional training data for NLI from the generated proofs. § DATASET We introduce a synthetic dataset to facilitate the analysis of compositionality in logic. The dataset contains 1,304 sentences that were created by modifying 15 examples from the SICK dataset <cit.> with a variety of modifiers. To this end, we used a set of phrases that correspond to universal quantifiers, existential quantifiers, negation, and other concept modifiers in Natural Logic (NL) <cit.>. These modifiers were applied to syntactic constructs in both premise and hypothesis and the entailment labels are adjusted, as detailed below. §.§ Overview At a high level, our dataset creation followed the following steps: * We start with 15 seed pairs of premise and hypothesis sentences from SICK. Table <ref> shows the seed sentence pairs. * We syntactically analyze these sentences to understand their subject-verb-object (SVO) structures. Each of the SVO elements is then modified using a subset of the applicable modifiers listed in Table <ref>. This process is detailed in Section <ref>. * Lastly, we re-annotate the entailment labels for the modified sentences, using the seven entailment relations defined in <cit.>: Forward Entailment (FE), Reverse Entailment (RE), Negation (Neg) (or Contradiction), Alternation, Cover, Independence (Neutral) and Equivalence (Equiv). This step is detailed in Section <ref>. The labels are described in Table <ref>. §.§ Sentence Modification Strategy For each premise and hypothesis sentence pair, we modified individual subject, verb, and object phrases with the following approach: * To modify subjects, we used the Berkeley Neural Parser to extract the left-most noun phrases (NPs). We then append the applicable modifiers from Table <ref>. In particular, we used universal quantifiers, existential quantifiers, negations, and adjectives. 
* To modify verbs, we used the Berkeley Neural parser to extract the rightmost verb phrases (VPs) from the parse tree and appended the applicable modifiers. Verbs were modified using universal quantifiers (always, never), negations (not, never), and adverbs (abnormally,elegantly). * To detect objects, we used the syntactic dependency parser of <cit.> to identify noun phrases attached to the main verb. Similarly to the subject modifications, these objects were modified using universal quantifiers, existential quantifiers, negations, and adjectives. After modifying each of the premises and hypotheses sentences, we generate multiple new data points as follows: f(P_i,H_i, m, SVO) = P_i^',H_i^' where m ∈ M: all modifiers; SVO: subject/verb/object phrases for either one of the parts of the sentence; and P_i, H_i are premise and hypothesis from sentence pairs S_i ∈ S where S is the set of 15 examples from SICK. Lastly, f is the function that modifies a given premise and hypothesis that follows one of the modification strategies described above. We generate the following pairs of combinations of the premise, and hypothesis sentences: (P_i^', H_i), (P_i,H_i^'), (P_i^',H_i^'). We repeat this process to modify each of the relevant sentence phrases, as well as a couple of combinations: subject, verb, object, subject + object, and verb + object. §.§ Entailment Annotation Strategy To annotate our dataset,[The annotation guidelines we followed are detailed on this website <https://github.com/clulab/releases/tree/sushma/acl2023-nlrse-sicck/annotations-guidelines/NLI_annotation_task_guidelines.pdf>] we created a set of annotation guidelines that follow Natural Logic and monotonicity calculus <cit.>. In general, to produce entailment relations we used a set theoretic approach to understand how the set of concepts that holds true in the premise overlaps with the set described in the hypothesis. To implement this set theoretic approach consistently, we defined the quantitative interpretation for several more ambiguous modifiers such as all but one, all, not every as follows: * For the modifier all, we consider the size of the set of elements X to be greater than 0: |X| > 0. For example, in the case of the phrase all children, we consider the size of the set of children to be greater than 0. * For the all but one modifier, we consider the size of all as N and the size of all but one to be N-1. Note that the size of all but one could thus theoretically be 0, when N = 1. * For not every we consider the size of the corresponding set X to be 0 or larger: |X| ≥ 0 where X is any set defined over the sentence. not every man would make X as a set of all men but there exists zero or one or more men that would not be included in this set. * When we cannot determine the size of the intersection of the two sets of premise and hypothesis, we resolved the annotation to be a Neutral label among all 7 entailment relations. * When comparing quantifiers between modified premise, and hypothesis sentence pairs, we denote the sizes of sets mathematically for P ∪ H, P ∩ H, and the Universal set. For example, consider the premise: every turtle is following the fish and the hypothesis: every fish is following the turtle. The set over the premise is P: ∀ X ∈all turtles following one fish, and the set over hypothesis is H: ∀ X ∈all fishes following the turtle. Thus, P ∩ H = ϕ. In this case, the label is Negation (see Table 3). Table <ref> includes more examples with the corresponding entailment labels. 
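To make the pair-generation step described above concrete, the following simplified sketch assumes the subject, verb, and object spans have already been extracted (the actual pipeline uses the Berkeley Neural Parser and a dependency parser for this) and produces the three combinations (P_i^', H_i), (P_i, H_i^'), (P_i^', H_i^') for one modifier; the span-replacement rules are deliberately naive and only illustrative.

# Simplified sketch of generating SICCK-style modified sentence pairs.
# Subject/verb/object spans are given explicitly here for clarity.

MODIFIERS = {
    "subject": ["every", "some", "not every", "no"],   # quantifiers / negation
    "verb": ["always", "never", "not"],                # universal / negation
    "object": ["every", "some", "happy", "sad"],       # quantifiers / adjectives
}

def modify(sentence, span, modifier, part):
    """Insert a modifier before the given span (articles are replaced naively)."""
    words = span.split()
    if part in ("subject", "object"):
        # e.g. "a man" -> "every man": drop a leading article before quantifying
        head = " ".join(words[1:]) if words[0].lower() in {"a", "an", "the"} else span
        new_span = f"{modifier} {head}"
    else:  # verb phrase: "is sitting" -> "is never sitting" / "is not sitting"
        new_span = " ".join([words[0], modifier] + words[1:])
    return sentence.replace(span, new_span, 1)

def generate_pairs(premise, hypothesis, spans_p, spans_h, part, modifier):
    """Return the three combinations (P', H), (P, H'), (P', H')."""
    p_mod = modify(premise, spans_p[part], modifier, part)
    h_mod = modify(hypothesis, spans_h[part], modifier, part)
    return [(p_mod, hypothesis), (premise, h_mod), (p_mod, h_mod)]

if __name__ == "__main__":
    premise = "an old man is sitting in a field"
    hypothesis = "a man is sitting in a field"
    spans_p = {"subject": "an old man", "verb": "is sitting", "object": "a field"}
    spans_h = {"subject": "a man", "verb": "is sitting", "object": "a field"}
    for pair in generate_pairs(premise, hypothesis, spans_p, spans_h, "subject", "every"):
        print(pair)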
A total of 1,304 modified premise and hypothesis sentence pairs along with original sentence pairs were included in the final SICCK dataset. The data was annotated by 5 annotators which were distributed between two sub-groups of annotators, based on the complexity of the labels. In the first two rounds of annotations, we re-grouped to develop concrete guidelines for annotations, without defining too strict rules by leaving room for more natural “if-this-then-that” deductions. There were disagreements between annotations which were resolved by verifying the sizes of sets mathematically over X ∪ Y, X ∩ Y to follow the entailment relations defined as in <cit.>. While in the initial round the inter-annotator agreement was low (k < 0.4), the annotations were revised until each group of annotators converged. Tables <ref>, <ref>, and <ref> provide summary statistics about the SICCK dataset. § EVALUATION We conducted an evaluation of how NLI methods capture the explicit compositionality in our dataset using two configurations: a zero-shot setting, in which we used NLI systems trained externally, and a fine-tuned setting, in which the same models were fine-tuned using our dataset. §.§ Zero-shot Analysis of NLI Models For this analysis, we evaluate three pretrained neural entailment models on our dataset. However, all these systems emit just the three “traditional” entailment labels (Forward Entailment, Contradiction, and Neutral) whereas our dataset contains the seven labels from NL. To align these label spaces, we performed the following transformations: * In case a system produces a Neutral label, we run the prediction in the opposite direction, i.e., from hypothesis to premise. If the top label in the reversed direction is Forward Entailment (FE), we label the pair as Reverse Entailment. Otherwise, we keep the Neutral label. This heuristic allows these systems to produce four labels instead of three. * We convert our seven labels to four labels through the following heuristics: (a) Equivalence was removed since we had only one sentence pair labeled as Equivalence in our dataset; (b) Alternation is merged with Negation; (c) Cover and Independence become Neutral; and (d) the 7 examples that were annotated as Cover|FE were removed. We conducted zero shot evaluation using three NLI models: the cross-encoder model of <cit.> (nli-deberta-v3-base in our tables), the adversarial NLI model of <cit.> (ynie/roberta-large-…), and ELMo-based Decomposable Attention model <cit.> (pair-classification-…). We draw the following observations from this experiment: * Table <ref> indicates that the ELMO-based NLI model performs considerably worse than the other two transformer-based models. This is a testament to how far our field has progressed in just a few years. However, no model approaches 70 F1 points, which indicates that none of these models truly understand the task well. * The NLI models do better over adjectives and adverbs, but they struggle to understand statements modified with universal and existential quantifiers, and negation. Tables <ref>–<ref> indicates that the transformer-based NLI models perform at over 70 F1 points on adjectives/adverbs, at over 65 F1 for universal quantifiers, at approximately 60 F1 for existential quantifiers, and at only 30–35 F1 for negation. This is a surprising finding considering how much attention negation has received in the NLP literature <cit.> <cit.> <cit.>. 
* Lastly, Tables <ref>–<ref> indicate that NLI models process objects best, followed by subjects, and, lastly, verbs. This is not surprising considering the increased semantic ambiguity of verbs. §.§ Analysis of Fine-tuned NLI models To understand if NLI methods are capable of learning this compositional information, we fine-tuned the two NLI models that performed better over the SICCK dataset. To maximize the data available, we implemented a 5-fold cross-validation evaluation over the entire SICCK dataset and experimented with multiple hyperparameters. In particular, we used 4 or 8 epochs, and batch sizes of 8, 16, or 32 data points. The results of these experiments are summarized in Table <ref>. We draw the following observation from this experiment: * The difference in F1 scores between the fine-tuned systems and the corresponding zero-shot setting ranges from -0.19 to 0.2. This indicates that these systems do not acquire substantial new knowledge despite the fact that they've been exposed to approximately 1,300 sentences with compositional information. This suggests that understanding compositionality is harder than expected. * Similar to the zero-shot setting, NLI models did better over adjectives, and adverbs and relatively better over existential quantifiers in comparison to that of the negation and universal quantifiers. We also observed that models seem to be confused when the annotated label was Neutral but the modifier types were negations. * NLI models perform somewhat better over subject and object-modified examples than on examples with modified verbs. This indicates that the semantic ambiguity of verbs is likely to impact NLI models. § ERROR ANALYSIS We analyze the incorrect predictions of the NLI models over SICCK dataset in this section. We observed that NLI models performed better over adjectives and adverbs, and relatively well over universal quantifiers in comparison to sentences modified with negation and existential quantifiers under both fine-tuned as well as zero-shot settings. We also observed that models seem to be confused when the adjusted label was Neutral and the modifier types were negations. §.§ Neutral Labels with Negation Modifiers Negation understanding in natural language has been a challenging problem <cit.>. <cit.> discussed that Negation is underrepresented in natural language corpora. Further, <cit.> show that even though the transformers were fine-tuned with modified premises with negation (i.e., verb modifiers with negation), the transformers struggle with inference over negated sentence pairs. In our SICCK dataset, there are 167 examples with negation modifiers. Table <ref> shows some statistics relevant to this. Of these 167 examples with Negation modifiers, there are 118 Neutral examples. We observed that nli-deberta-v3-base model incorrectly predicted ground truth for approximately 70% of these examples and while the other NLI model (ynie/roberta-large-snli-mnli-fever-anli-R1-R2-R3-nli) incorrectly predicted 23% of the examples. For all the incorrectly predicted labels for negation-modified examples with Neutral labels, the models seemed to be confused for various compositional cases, i.e. subject or verb or object-modified examples almost equivalently. Modifiers such as no, not every, not with Neutral and Contradiction labels seem to contribute to the confusion. SICCK examples also include the format of alternating modifiers between premises, hypothesis, or both i.e. 
(P_i^', H_i), (P_i,H_i^'), (P_i^',H_i^') (Section <ref>), which further seems to confuse the NLI models. This is surprising since we have 593 Neutral examples in our SICCK dataset, albeit with fewer negation examples. Since the dataset is small and has a limited number of examples with negation modifiers, the evaluation analysis is less generalizable. As emphasized by the analyses of <cit.> and <cit.>, detecting negation in natural language continues to be an unresolved problem. §.§ Verb-modified Examples For verb modifiers, we selected abnormally, elegantly, always, never. Our SICCK dataset has a total of 220 verb-modified examples, of which 89 have universal modifiers, 90 adverbs/adjectives, and 41 negation. Among the 31 verb-modified examples with negation modifiers and the Neutral label, the NLI models incorrectly alternate between Contradiction and FE for 99% of the examples. Of the 49 examples with universal modifiers over verbs and Neutral labels, approximately 69.4% were incorrectly predicted. This further emphasizes that negation (especially when occurring in Neutral examples) remains a challenge. § CONCLUSION This paper introduced a new synthetic dataset that facilitates analyses of how NLI models capture compositionality. The dataset contains 1,304 sentence pairs that were created by modifying 15 examples from the SICK dataset <cit.> with a variety of modifiers that correspond to universal quantifiers, existential quantifiers, negation, and other concept modifiers in Natural Logic (NL) <cit.>. We used these phrases to modify the subject, verb, and object parts of the premise and hypothesis. Lastly, we annotated these modified texts with the corresponding entailment labels following NL rules. We conducted a preliminary analysis of how well the change in structural and semantic composition is captured and detected by neural NLI models, in both zero-shot and fine-tuned scenarios. We found that the performance of NLI models is poor in both settings, especially for modified sentences with negation and existential quantifiers, and when verbs are modified. § LIMITATIONS While this work explores the impact of typical compositional modifiers on entailment relations, we did not consider other fine-grained information that further captures upward or downward monotonicity from the monotonicity calculus of the premise/hypothesis sentence pairs. Further, the dataset that we generated is relatively small, at approximately 1,300 sentence pairs. We also did not evaluate the dataset over T5, BART, GPT-x, and other state-of-the-art LLMs, which may provide more insights. We also did not conduct any evaluation of explanations or interpretations of the evaluated NLI models, which could be future work. Lastly, we did not include a comparison with existing datasets that were created specifically for negation modifiers and universal and existential quantifiers. We see all these issues as exciting avenues for future work. § APPENDIX We provide the modifier-based evaluation results for the zero-shot setting and for the fine-tuned NLI models.
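The sketch below illustrates the zero-shot protocol from Section <ref>: a pre-trained NLI cross-encoder scores each (premise, hypothesis) pair, and a Neutral prediction triggers re-scoring in the reverse direction to detect Reverse Entailment. The checkpoint name corresponds to the nli-deberta-v3-base cross-encoder used in our tables; label names are read from the model configuration, and the surrounding pre- and post-processing is a simplified assumption rather than our evaluation code.

# Sketch of the zero-shot evaluation with the reverse-direction heuristic.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "cross-encoder/nli-deberta-v3-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).eval()

def top_label(premise, hypothesis):
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits[0]
    return model.config.id2label[int(logits.argmax())].lower()

def predict_4way(premise, hypothesis):
    """Map the 3-way NLI output to the four labels used in our evaluation."""
    label = top_label(premise, hypothesis)
    if label == "entailment":
        return "Forward Entailment"
    if label == "contradiction":
        return "Negation"
    # Neutral: re-run in the reverse direction; entailment there means Reverse Entailment.
    if top_label(hypothesis, premise) == "entailment":
        return "Reverse Entailment"
    return "Neutral"

if __name__ == "__main__":
    p = "an old man is sitting in a field"
    h = "every man is sitting in a field"
    print(predict_4way(p, h))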
http://arxiv.org/abs/2307.04079v1
20230709011140
Projective Rectangles
[ "Rigoberto Florez", "Thomas Zaslavsky" ]
math.CO
[ "math.CO", "Primary 51E26, Secondary 05B15, 05B35, 05C22, 51A30, 51E20" ]
myheadings Flórez and ZaslavskyProjective Rectangles empty Dept. of Mathematical Sciences, The Citadel, Charleston, South Carolina 29409 [email protected] Dept. of Mathematical Sciences, Binghamton University, Binghamton, New York 13902-6000 [email protected] A projective rectangle is like a projective plane that has different lengths in two directions. We develop the basic theory of projective rectangles including incidence properties, projective subplanes, configuration counts, a partial Desargues's theorem, a construction from projective planes, and alternative formulations. In sequels we study harmonic conjugation and the graphs of lines and subplanes. [2010]Primary 51E26; Secondary 05B15, 05B35, 05C22, 51A30, 51E20 Projective Rectangles Thomas Zaslavsky August 12, 2023 ===================== empty § INTRODUCTION A projective rectangle is like a projective plane, but narrower than it is tall. More precisely, it is like the set of points on a certain kind of family of lines in a projective plane, with their induced lines. Very precisely, it is an axiomatic incidence structure based on adapting axioms of projective geometry. Projective rectangles are found in all known harmonic matroids, such as full algebraic matroids. Harmonic matroids are matroids within which there is harmonic conjugation <cit.>; their definition was inspired by Lindström's article <cit.> about abstract harmonic conjugation. Harmonic conjugation applied to complete lift matroids of group expansions <cit.> of a triangle (for instance, L_2^k, Example <ref>) led us to structures that looked like vertical strips in projective planes—whence the name “projective rectangle” and the impulse to find a general theory of this idea in terms of incidence geometry. Projective rectangles themselves are almost examples of harmonic matroids, seemingly falling short only in special lines, as we prove in the sequel <cit.>. An indication of what we accomplish in this article: First, the axioms (Section <ref>) and basic consequences for incidence geometry (Section <ref>) and counting (Section <ref>). Especially, we see that a projective rectangle, if it is not a projective plane, contains a multitude of maximal projective planes; we call them its “planes”. Section <ref> develops partial Desarguesian properties of projective rectangles, which satisfy limited versions of the two halves of Desargues's Theorem. In Section <ref> we show that the construction based on a subplane and a special point, alluded to above, actually works to produce projective rectangles in planes that are Pappian, i.e., coordinatized by a field; we do not know how far that subplane construction generalizes. The following section treats the narrowest projective rectangles, which are the simplest and best understood. Next are two sections that give alternative viewpoints: in Section <ref> we see that a projective rectangle is essentially a Paschian transversal design and thus is equivalent to a special kind of orthogonal array, and in Section <ref> we take the approach of projective duality by interchanging points and lines, which may suggest new properties but which we have not studied deeply. We have only an elementary understanding of projective rectangles in general, as is shown by the list of significant open problems in Section <ref>. In sequels we treat adjacency graphs and harmonic conjugation. One concerns the graphs of adjacency of lines and of planes <cit.>. 
Notably, in projective rectangles that are not projective planes the graph of planes, where adjacency means having an ordinary line in common, has striking internal structure that presents a tantalizing vision of higher dimensionality. The other sequel <cit.> explores abstract harmonic conjugation as a theme linking harmonic matroids and projective rectangles. In one direction, a projective rectangle is almost a harmonic matroid. In the other direction, a harmonic matroid contains a projective rectangle if it contains a matroid of a finite-field expansion of a triangle, in particular if it contains a Reid cycle matroid. Our personal interest is mainly in finite systems, but many results apply to infinite projective rectangles. For instance, Section <ref> encompasses infinite systems, while Section <ref> requires finiteness. Our viewpoint is influenced by matroid theory but is largely that of incidence geometry; matroid theory is not needed to read this paper. We wish to acknowledge the inspiration of the elegant and deep short papers <cit.> of Bernt Lindström. Lindström's ideas, as further developed by the first author in his doctoral dissertation and <cit.>, led to this study of projective rectangles. § PROJECTIVE RECTANGLES An incidence structure is a triple (,ℒ,ℐ) of sets with ℐ⊆×ℒ. The elements of are points, the elements of ℒ are lines. A point p and a line l are incident if (p,l) ∈ℐ. A set P of points is said to be collinear if all points in P are in the same line. We say that two distinct lines intersect in a point if they are incident with the same point. A projective rectangle is an incidence structure (,ℒ,ℐ) that satisfies the following axioms: * Every two distinct points are incident with exactly one line. * There exist four points with no three of them collinear. * Every line is incident with at least three distinct points. * There is a special point D. A line incident with D is called special. A line that is not incident with D is called ordinary, and a point that is not D is called ordinary. * Each special line intersects every other line in exactly one point. * If two ordinary lines l_1 and l_2 intersect in a point, then every two lines that intersect both l_1 and l_2 in four distinct points, intersect in a point. A complete quadrilateral is an incidence structure that consists of four lines, no three concurrent, and their six points of intersection. A nearly complete quadrilateral is like a complete quadrilateral but with only five of the intersection points; the sixth intersection point may or may not exist. Axiom (A<ref>) states that almost every nearly complete quadrilateral in a projective rectangle is complete. This is a partial Pasch axiom (e.g., see <cit.>), not the full Pasch axiom because it has an exception when either of the first two lines is special; then the remaining two lines may or may not be concurrent. This exception is what admits projective rectangles that are not projective planes. Section <ref> has more discussion of the significance of Axiom (A<ref>). Notation: We write pq for the unique line that contains two points p and q. After we establish the existence of projective planes in , we use the notation abc… to mean the unique line (if abc… are collinear) or plane (if they are coplanar but not collinear) that contains the points abc…. The projective planes are some familiar examples of projective rectangles. A projective plane is called a trivial projective rectangle. 
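Since every projective plane is a trivial projective rectangle once an arbitrary point is taken as D, a small computational check may help the reader fix the axioms. The script below checks, for the seven-point plane with one standard labelling of its lines, that every two points lie on exactly one line, that there are four points with no three collinear, that every line has at least three points, and that every line through D meets every other line in exactly one point; it is purely illustrative and plays no role in what follows.

from itertools import combinations

POINTS = set(range(1, 8))
LINES = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

# Every two distinct points lie on exactly one line.
assert all(sum(p in l and q in l for l in LINES) == 1 for p, q in combinations(POINTS, 2))

# Every line has at least three points.
assert all(len(l) >= 3 for l in LINES)

# There are four points with no three of them collinear.
def collinear(triple):
    return any(set(triple) <= l for l in LINES)
assert any(all(not collinear(t) for t in combinations(quad, 3))
           for quad in combinations(POINTS, 4))

# Take D = 1; every line through D meets every other line in exactly one point.
D = 1
special = [l for l in LINES if D in l]
assert all(len(s & l) == 1 for s in special for l in LINES if l != s)

# Any two lines of a projective plane meet, so the partial Pasch axiom holds as well.
print("The seven-point plane is a (trivial) projective rectangle for any choice of D.")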
In particular the Fano plane F_7 is the smallest projective rectangle (see Theorem <ref> Part (<ref>)). The non-Fano configuration is not a projective rectangle; it fails Axiom (A<ref>). The matroid L_2^k is another example of a projective rectangle (see Figure <ref>). It has m=3 special lines. Let A:= { a_g | g ∈_2^k }∪{D }, B:= { b_g | g ∈_2^k }∪{D } and C:= { c_g | g ∈_2^k }∪{D }, where we think of _2^k as a multiplicative group, writing gh for the group operation. Let L_2^k be the simple matroid of rank 3 defined on the ground set E:= A∪ B∪ C by its rank-2 flats. The non-trivial rank-2 flats are A, B, C, which are the special lines, and the sets {a_g, b_g h, c_h } with g and h in _2^k, which are the ordinary lines. We note that L_2^k is the complete lift matroid of the group expansion of a triangle, i.e., L_0(_2^k) in the language of <cit.>. We say more about projective rectangles with m=3 and matroids similar to L_2^k in Section <ref>. § PROPERTIES OF PROJECTIVE RECTANGLES In this section we study essential properties of projective rectangles. We begin with basic facts; then we prove that the projective rectangle contains projective planes and we conclude with a section of counting formulas for later use. §.§ Fundamental properties If a projective rectangle with exactly m special lines has one of them with n points, then we say that the order of is (m,n). We do not assume m or n is finite unless we so state. In Theorem <ref> we prove m≤ n; we also prove that every special line has the same number of points, that every ordinary line has the same number of points, and many other elementary facts about points and lines. (If we define ν := n-1 and μ := m-1, then when the projective rectangle is a projective plane, ν=μ= the order of the plane as customarily defined; that is, one less than the number of points in a line.) The following result states basic properties of a projective rectangle. If is a projective rectangle of order (m,n), then the following hold in : * The point set of ∖ D is partitioned by all special lines deleting D. * There are at least three special lines and four ordinary lines. Moreover, there are at least seven points. * If l is a line and p is a point not in l, then the number of distinct lines incident with p intersecting l equals the number of points on l. * Through each ordinary point there passes exactly one special line. * All ordinary lines have the same number of points. The number of points in an ordinary line is equal to the number of special lines, that is, m. * All special lines have the same number of points, i.e., n points, and the same number of ordinary points, i.e., n-1. * There are exactly m(n-1) ordinary points. * The number of lines incident with an ordinary point is equal to the number of points in a special line, that is, n. The number of ordinary lines that contain each ordinary point is n-1. * The number of points in a special line is at least the number of points in an ordinary line; that is, n ≥ m. * There are exactly (n-1)^2 ordinary lines. * For a given point p in an ordinary line l, there are n-2 ordinary lines intersecting l at p. Proof of Part (<ref>). By Axiom (A<ref>), every point p ∈∖ D belongs to the unique special line pD. Proof of Part (<ref>). From Axiom (A<ref>) we know that in there are four points, no three of them collinear. If one is D, each other one with D generates a special line, all of which are distinct by noncollinearity. 
If none of them is D, the points generate six distinct lines, of which at most two can contain D because no three of the four points are collinear. Thus, the four remaining lines are ordinary lines. Since in one of the ordinary lines there are at least three points, these points form with D three special lines. We have proved that in there are at least three special lines and three ordinary lines. By Axiom (A<ref>), each special line contains at least two ordinary points, so there are at least seven points. Now consider two special lines s, s' and two ordinary points p_1,p_1' on s and p_1',p_2' on s'. The lines p_ip'_j are four distinct ordinary lines. We prove Part (<ref>). From Part (<ref>) we can deduce that in there are a non-incident ordinary point and ordinary line, also that there are a non-incident ordinary point and special line. Let q ∈ l and p∉ l. From (A<ref>) there is exactly one line incident with p that intersects l at q, and all such lines are distinct. We prove Parts (<ref>) and (<ref>). Given an arbitrary ordinary line l, we know by (A<ref>) that each point in l together with D determines a unique special line. Every special line is generated in this way, by (A<ref>). Thus, there is a bijection between the special lines and the points in l. This implies the number of points in any ordinary line equals the number of special lines. We prove Parts (<ref>) and (<ref>). We suppose that l_1 and l_2 are special lines in with n_1 and n_2 points, respectively. Let p be a point non-incident with either of those lines. Part (<ref>) implies that there are n_1 distinct lines intersecting l_1 that are incident with p. Those n_1 lines also intersect l_2. Indeed, one of those lines is special and the remaining (n_1-1) lines intersects l_2 because they are ordinary. Therefore, n_1 ≤ n_2. Similarly, n_2 ≤ n_1. This proves that all special lines have the same number of points. Deducting 1 for the special point D gives the number of ordinary points on a special line. Proof of Part (<ref>). The number of special lines is m, Part (<ref>) says the number of ordinary points in each special line equals n-1 and Part (<ref>) says the special lines partition the ordinary points. Proof of Part (<ref>). We suppose that p is an ordinary point with exactly k incident lines. Let l be a special line with n points and p∉l. From Part (<ref>) we know that there are exactly n distinct lines intersecting l that are incident with p. This implies that k≥ n. We want to prove that k = n. Suppose by contradiction that there is another line l_1 incident with p and not intersecting l. It is clear the l_1 must be an ordinary line. That is a contradiction, because an ordinary always intersect special lines. By Part (<ref>) every special line has n-1 ordinary points, and by definition there are m special lines. Proof of Part (<ref>). Let p be a point in an ordinary line. Two ordinary points in two special lines give rise to a unique ordinary line. Since every special line has n points and one of them is D, it is easy to see that the two special lines give rise to (n-1)^2 ordinary lines. Those are all the ordinary lines that intersect the two special lines. Since every ordinary line intersects every special line, we conclude that there are no more ordinary lines in . Proof of Part (<ref>). Since p is a point in an ordinary line l, from Part (<ref>) there are n lines incident with p. Only one of those n lines is special; the other n-1 are not. This implies that there are n-2 ordinary lines intersecting l at p. 
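To make these counts concrete, the short script below builds the incidence structure L_2^k introduced above for k = 2, so that the order is (m, n) = (3, 5), and verifies several parts of the theorem; it is only an illustrative sanity check and plays no role in the proofs.

from itertools import product, combinations

k = 2
G = list(product([0, 1], repeat=k))                      # the group Z_2^k, written additively
mul = lambda g, h: tuple((a + b) % 2 for a, b in zip(g, h))

D = "D"
A = [("a", g) for g in G]
B = [("b", g) for g in G]
C = [("c", g) for g in G]
points = [D] + A + B + C

special = [frozenset(A + [D]), frozenset(B + [D]), frozenset(C + [D])]
ordinary = [frozenset({("a", g), ("b", mul(g, h)), ("c", h)}) for g in G for h in G]
lines = special + ordinary

m, n = len(special), len(special[0])                     # (3, 5)

# Every two distinct points lie on exactly one line.
assert all(sum(p in l and q in l for l in lines) == 1
           for p, q in combinations(points, 2))

# Ordinary lines have m points; special lines have n points.
assert all(len(l) == m for l in ordinary) and all(len(l) == n for l in special)

# There are m(n-1) ordinary points and (n-1)^2 ordinary lines.
assert len(points) - 1 == m * (n - 1)
assert len(ordinary) == (n - 1) ** 2

# Every ordinary point lies on n lines, n-1 of them ordinary.
for p in points:
    if p == D:
        continue
    through = [l for l in lines if p in l]
    assert len(through) == n and sum(l in ordinary for l in through) == n - 1

print(f"L_2^{k}: order (m, n) = ({m}, {n}); all checks passed.")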
§.§ Projective subplanes We show that a projective rectangle is a combination of projective planes, in the strong sense that every two intersecting ordinary lines are lines of a substructure that is a projective plane. Before our results, though, we have to clarify the notion of substructure of an incidence structure (,,). An incidence substructure of (,,) is an incidence structure (',',') in which ' ⊆, ' ⊆, and ' = |'×', i.e., the incidence relation is the same as in the superstructure but restricted to the elements of the substructure. In particular, if (',',') is a projective plane, we call it a subplane of (,,). In a projective rectangle a subplane may contain an ordinary line and all its points; we call that kind full. A full subplane necessarily has order m-1. A subplane need not be full; it also need not be a maximal subplane, for instance if it is a proper subplane of a full subplane. In fact, that is the only way a subplane can fail to be maximal, as we will see in Theorem <ref>. The special point D is very special, as are the special lines. In a projective rectangle , the special point D is a point of every full subplane. Also, for every special line s and every full subplane π, s∩π is a line of π. A full subplane π contains at least two lines, l and l', which intersect at a point p ∈π, and at least one is ordinary, say l. If l' is ordinary, then every special line s intersects both l and l' at different points, unless s is the special line s_p on p. These two points of s determine a line of π, which is the intersection of s with π. Thus, for every special line except possibly s_p, s ∩π is a line of π. If l' is special, or rather if l'=s'∩π for some special line s', then there is at least one point p' on l' that is neither p nor D. Let q be a point in l ∖ p; then π has a line m determined by p' and q, which is ordinary since it contains not only p ∈ s_p but also q ∉ s_p. Then we can replace l' by m and have the case of two ordinary lines, so we may as well assume l' is ordinary. Let s_1 and s_2 be two special lines that are not s_p. Their intersection is in π, but their intersection is D. Therefore, D ∈π. Let p_1 be the intersection of l with s_1 and let p_2 be the intersection of l' with s_2. Since p_1 ∉ l' and p_2 ∉ l, the line m of π determined by p_1 and p_2 does not contain p. Since the points p_1,p_2 are not D and are not in the same special line, m is ordinary, hence it is contained in π. Therefore, m intersects s_p in a point p_12, which cannot be p, so p and p_12 determine a line of π, which must be s_p∩π. That is, s_p∩π is a line of π. Now we present the fundamental result about subplanes. Let be a projective rectangle. If two ordinary lines in intersect in a point, then both lines are lines of a unique full projective plane in . First we state the construction that gives the projective plane. Let l_0 and l_1 be ordinary lines in with exactly one point q in common. (See Figure <ref>.) Let a_0s= l_0∩ s and a_1s= l_1∩ s, where s ranges over the set of special lines in , and pick three special lines to be called x, y, and z such that q ∈ x. Thus, q=a_0x=a_1x. (We know there are three special lines by Theorem <ref> Part (<ref>).) Let b_1s= n_1∩ s, where n_1 is the ordinary line that passes through a_0y and a_1z. Suppose that s and t denote two special lines. We denote by l_st the ordinary line passing through a_0s and a_1t with s,t x and we denote by n_st the ordinary line passing through a_0s and b_1t with s,t y. 
Let L={l_st: s,t ∈, s,t x and s t } and N={n_st: s,t ∈, s,t y and s t }. Note that n_1 = l_yz∈ L and l_1 = n_xz∈ N. We set :=(_,ℒ_,ℐ_), where ℐ_ is the incidence relation defined in and [ _ := (⋃_l∈ N l) ∪ (⋃_l∈ L l) ∪ l_0 ∪{ D },; _1 := { s∩_ : s ∈},; _2 := L ∪ N ∪{ l_0},; _ := _1 ∪_2. ] We begin with the incidence structure given by Construction <ref>. With the notation there, we prove that is a projective plane. First of all, we note that one of the defining properties of a projective plane, that there are four points in _ with no three of them collinear, is satisfied by a_0y, a_1z, q, and D. We next prove that given two lines in , they intersect. Suppose that the two given lines are in L (they are ordinary). If they intersect in a point in l_0 or in a point in l_1, there is nothing to prove. Suppose that neither of those two cases holds. So, they are two ordinary lines that intersect l_0 and l_1 in four different points. Therefore, by Axiom (A<ref>) the two given lines intersect. By a similar argument we conclude that if the two given lines are in N, then they intersect. It is clear that any two lines in _1 intersect in D and that a line in _2 intersects every line in _1. Suppose the two given lines are λ and η with λ∈ L and η∈ N. If a_0y∈λ and q∈η, then λ and η intersect both l_0 and n_1 in four distinct points. Since l_0 and n_1 intersect in a_0y, by (A<ref>) we conclude that λ and η intersect. Now suppose that a_0y∉λ. Since λ intersects both l_0 and l_1 in distinct points, and n_1 intersects l_0 and l_1 in distinct points, by (A<ref>) we know that λ intersects n_1. Then λ intersects l_0 and n_1 in distinct points (because n_1 intersects l_0 at a_0y∉λ). The fact that λ and η both intersect l_0 and n_1 in distinct points, with (A<ref>), implies that λ and η intersect in a point. Supposing q∉η, the proof is similar. Since λ meets l_0 at a_0y∉ l_1, and q = l_0 ∩ l_1 ∉η, each of λ and η intersects l_0 and l_1 in distinct points; thus, λ and η intersect in a point. This completes the proof that any two lines in _ intersect. We now prove that given two points p_0, p_1 ∈_, they are in a line in . (If they are in one line, they cannot be in two, because the lines of are ordinary lines or restrictions of special lines of , and every line in is determined by two of its points.) This proof requires cases depending on the locations of the two points. The proofs (if not trivial) depend on repeated application of Axiom (A<ref>). For economy of notation we employ a shorthand: p_34 = A6(l_1,l_2;l_3,l_4| p_12;p_13,p_23,p_14,p_24) means that each pair {l_i,l_j} intersects at p_ij for ij = 12, 13, 14, 23, 24. Axiom (A<ref>) then implies that l_3 and l_4 intersect at a point p_34, provided that l_1 and l_2 are ordinary. In this proof all four lines are always ordinary. Case 1. If both points are in a special line s, the line in is s∩_∈_1. This includes the case in which one of those points is D. Henceforth we assume the points are not in the same special line. Case 2. If both points are in l_0 or l_1, there is nothing to prove. Case 3. Suppose both points are not in x ∪ l_0 ∪ l_1. Then p_0 is in a line l_st = a_0sa_1t for some two special lines s and t, not equal, and p_1 is in a line l_uv = a_0ua_1v for some two special lines u and v, not equal (but s,t may not be distinct from u,v). Form the point p_3 = A6(l_0,l_1;l_st,l_uv| q; q_0u,a_1v,a_0s,a_1t), then the point p_4 = A6(l_st,l_uv;l_1,| p_3;a_1t,a_0s,p_1,p_0), and finally the point p_5 = A6(l_st,l_uv;l_0,| p_3;a_0u,a_1v,p_1,p_0). 
Now p_3 and p_4 are the intersections of l_0 and l_1, respectively, with . Since p_3 ≠ p_4, is a line generated by a point on l_0 ∖ q and a point on l_1 ∖ q (as p_0, p_1 ≠ q). Since that line is not a special line, it is in L. Therefore, p_0 and p_1 are collinear. Case 4. In this case p_0 ∈ l_0 but p_1 ∉ x ∪ l_0 ∪ l_1. We choose names so p_0 = a_0s and p_1 ∈ l_uv as in Case 3. Choose a_1t∈ l_0 ∖ (∪{q}) and form p_2 = A6(l_0,l_1;l_uv,l_st| q;a_0u,a_1v,a_0s, a_1b); then let p_3 = A6(l_uv,l_st;,l_1 | p_2;a_1t,a_1v,p_0,p_1). Now p_3 is the intersection of with l_1, which implies that is generated by p_0 ∈ l_0 ∖ q and p_3 ∈ l_1 ∖ q. Since is not special, it is a line in L. Case 5. In this most complicated case we assume p_0 ∈ x ∖ q and p_1 ∉ x ∪ l_0 ∪ l_1. As in the preceding cases we take p_1 ∈ l_uv. Step 1: Choose p_2 = A6(n_1,l_0;n_st,l_1 | a_0s;b_1t,a_0s,a_0u,a_1v). Step 2: p_3 = A6(l_0,l_1;n_st,l_uv| q;a_0s,p_2,a_0u,a_1v). Step 3: p_4 = A6(n_st,l_uv;l_1,| p_3;p_2,a_1v,p_0,p_1). Step 4: p_5 = A6(n_st,l_uv;l_0,| p_3;a_0s,a_0u,p_0,p_1). The result is that is generated by p_5 ∈ l_0 ∖ q and p_4 ∈ l_1 ∖ q so it is in L. Case 6. Here we assume p_0 ∈ x ∖ q and p_1 ∈ l_1 ∖ q. In this case we take p_0 ∈ n_su. We first find p_2 = A6(l_1,n_1;n_su,l_1 | a_0s;a_0s,b_1u,q,a_1z). Then we find p_3 = A6(n_su,l_1;,n_1 | p_2;p_0,p_1,b_1u,a_1z) and last p_4 = A6(l_1,n_1;l_0,| a_1t;q,a_0s,p_1,p_3). Then is generated by p_4 ∈ l_0 ∖ q and p_1 ∈ l_1 ∖ q, therefore it is in L. Case 7. Now p_0=q and p_1 ∉ x ∪ l_0 ∪ l_1. As usual we take p_1 ∈ l_uv. The first step is to define p_2 = A6(l_0,l_1;l_uv,n_1 | q;a_0u,a_1v,a_0s,a_1t), and then p_3 = A6(l_1,l_uv;,n_1 | a_0u;p_0,p_1,a_1t,p_2). Since p_3 lies on n_1 it is a point b_1w for a special line w ≠ x. Thus, is generated by p_0 = q = a_0x and p_3 = b_1w; this line is n_xw so it is in N. Case 8. The last case is where p_0=q and p_1 ∈ l_1. Both are in the line l_1. In all cases there is a line in _ that contains both p_0 and p_1, so they are collinear in . We have proved collinearity of all pairs of points in _, so is indeed a projective planes. An interpretation of Theorem <ref> is the following corollary. Given three noncollinear ordinary points in a projective rectangle , there is a unique full projective plane in that contains all three points. Given an ordinary line l and an ordinary point p not in l, there is a unique full projective plane in that contains both. For the first part, let the three points be p,q,r. No special line contains all three, so there is one, say p, that is not in a special line through the others. The lines pq and pr are ordinary lines, they are distinct by noncollinearity of the three points, and they intersect, so by Theorem <ref> there is a unique full projective plane that contains them and the three points. The second part follows by taking q,r ∈ l. In a projective rectangle, every maximal subplane is full. The line set of an incidence subplane π contains two ordinary lines l_1,l_2 and its point set contains their intersection point. It follows from Theorem <ref> that π is a subplane of the full subplane determined by l_1 and l_2. Thus, maximality and fullness are equivalent for projective subplanes of a projective rectangle. From now on, when we refer to a plane in a projective rectangle, we mean a full projective subplane. Also, when we say several lines are coplanar, we mean there is a plane π such that each of the lines that is ordinary is a line of π and for each line s that is special, s ∩π is a line of π. 
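Continuing the small computational example, one can watch the preceding theorem at work in L_2^2: closing two intersecting ordinary lines under ordinary lines and then adjoining D produces a seven-point full subplane, that is, a projective plane of order m-1 = 2 (a Fano plane). The script below is self-contained and, again, purely illustrative.

from itertools import product, combinations

G = list(product([0, 1], repeat=2))
mul = lambda g, h: tuple((a + b) % 2 for a, b in zip(g, h))
D = "D"
ordinary = [frozenset({("a", g), ("b", mul(g, h)), ("c", h)}) for g in G for h in G]
special = [frozenset([("a", g) for g in G] + [D]),
           frozenset([("b", g) for g in G] + [D]),
           frozenset([("c", g) for g in G] + [D])]

def line_through(p, q):
    return next(l for l in ordinary + special if p in l and q in l)

# Two ordinary lines meeting at the point ("a", (0, 0)).
l0 = frozenset({("a", (0, 0)), ("b", (0, 0)), ("c", (0, 0))})
l1 = frozenset({("a", (0, 0)), ("b", (0, 1)), ("c", (0, 1))})

# Close under ordinary lines, then adjoin D; special lines meet the plane
# only in a restriction, so they must not be added wholesale.
plane = set(l0 | l1)
changed = True
while changed:
    changed = False
    for p, q in combinations(sorted(plane), 2):
        l = line_through(p, q)
        if l in ordinary and not l <= plane:
            plane |= l
            changed = True
plane |= {D}

plane_lines = {frozenset(l & plane) for l in ordinary + special if len(l & plane) >= 3}
assert len(plane) == 7 and len(plane_lines) == 7
assert all(len(l) == 3 for l in plane_lines)
assert all(sum(p in l and q in l for l in plane_lines) == 1
           for p, q in combinations(plane, 2))
print(f"Generated subplane: {len(plane)} points and {len(plane_lines)} lines, i.e. a Fano plane.")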
We can now characterize a nontrivial projective rectangle as a projective rectangle that contains more than one maximal projective subplane. Such projective rectangles have properties not common to all projective planes; e.g., they satisfy the dual half of Desargues's Theorem (see Theorem <ref>) and they are harmonic matroids (see <cit.>). Let be a projective rectangle. Every ordinary line in is a line of a plane in . If is nontrivial, then every ordinary line l is a line of at least three planes that contain l. Let l be an ordinary line in . From Theorem <ref> Part (<ref>) we know that there is another ordinary line l' that intersects l at exactly one point. This and Theorem <ref> imply that l is in a plane π. If is nontrivial, there is a point q not in π. Let p_1,p_2 ∈π be points in l that are not in the special line that contains q. Then the plane p_1p_2q that contains both ordinary lines p_1q and p_2q, which exists and is unique by Theorem <ref>, is a plane containing l that is different from π. To find a third plane, let p_1 ∈π_1 and p_2 ∈π_2 be ordinary points not in l. There is an ordinary line p_1p_2 that must contain a third point p_3 since m≥3 by Theorem <ref>. By Corollary <ref> there is a unique plane π_3 that contains l and p_3. If s is a special line in the projective rectangle and π is a plane in , then s ∩π is a line of π. Let p_1 and p_2 be points in distinct special lines that are not s. Then by Axiom (A<ref>) there is an ordinary line l that contains both p_1 and p_2, and by Corollary <ref> there is a plane π that contains l. In π there is another line l' that intersects l at p_1; then q=l∩ s and q'=l' ∩ s are two points in s ∩π, which determine a line in π that is contained in the unique line s of that contains q and q'. Thus, s ∩π is a line of π. Now we prove a generalization of Theorem <ref> to all lines, although we lose uniqueness of the containing plane. Let be a projective rectangle. If two lines l_1 and l_2 intersect in a point p, then they are coplanar. Suppose l_1 is a special line. There are points p_1 in l_1 ∖ l_2 ∖ D and p_2 in l_2 ∖ l_1. By Axiom (A<ref>) there is an ordinary line l_3 determined by p_1 and p_2. If l_2 is ordinary, by Theorem <ref> there is a unique plane π that contains l_2 and l_3. By Proposition <ref> the restriction of l_1 to π is a line of π, so l_1 and l_2 are coplanar. If l_2 is special, then l_3 is ordinary. By Proposition <ref> there is a plane π that contains l_3, and by Proposition <ref> both l_1∩π and l_2∩π are lines of π. Thus, l_1 and l_2 are coplanar. Next is an intersection property of lines that has a consequence for the matroid structure of a projective rectangle. Suppose three lines in a projective rectangle intersect pairwise in three different points. Then they are a coplanar triple. Equivalently, if three lines intersect pairwise (i.e., are pairwise coplanar) but are not a coplanar triple, then they all intersect in the same point. Suppose two ordinary lines l_1, l_2 intersect in a point p and lie in a common plane π, and suppose a third line l_3, possibly special, intersects l_1 and l_2 in points different from p. Choosing any points q_1 ∈ l_1 ∖ p and q_2 ∈ l_2 ∖ p determines a line of π through q_1 and q_2. By Construction <ref> and Theorem <ref>, this line is either an ordinary line of or the restriction to π of a special line of . In particular, this applies to l_3, hence l_1, l_2 and l_3 are a coplanar triple of lines of . 
In case l_1 is ordinary while l_2 and l_3 are special, by Corollary <ref> l_1 and l_2 are coplanar in a plane π and by Proposition <ref> l_3∩π is a line of π, so the three lines are coplanar. The second statement, which is the contrapositive of the first (and see Corollary <ref>), is a useful restatement. If a finite projective rectangle has order (n,n), then it is a projective plane. Because n=m, the projective plane of Corollary <ref> is the whole projective rectangle. This proposition does not apply to the infinite case; see Example <ref>. §.§ No Vamos configuration The Vamos matroid is the matroid of eight points in Figure <ref>. It is one of the smallest matroids that cannot be represented in a projective geometry; for that reason it is one of the fundamental matroid examples. However, we shall not think of it as a matroid but as an incidence structure with eight points as well as lines and planes. The lines are the solid lines in Figure <ref> and the planes are the ones composed of pairs of lines as described in the caption. (As a matroid a projective rectangle has rank 3 while the Vamos matroid has rank 4, and therefore it is trivial that it cannot be a submatroid of a projective rectangle. That is why it is important to think of the Vamos incidence structure instead of the Vamos matroid, even though they look the same in a diagram.) The Vamos incidence structure is not a substructure of any projective rectangle. Suppose a configuration of this kind exists in a projective rectangle. By Proposition <ref> the lines l_1,l_2,l_3 are concurrent in a point and the lines l_2,l_3,l_4 are also concurrent in a point. Clearly, these points are one point, so l_1 and l_3 contain a common point and hence are coplanar, contrary to the structure of the Vamos matroid. That proves the corollary. § FINITE PROJECTIVE RECTANGLES In finite projective rectangles there are many possibilities for counting elements and configurations. They are the topic of this section. §.§ Counts We extend the counts of points, lines, etc. in Section <ref> to planes and various kinds of incidence. Let be a projective rectangle of order (m,n). * The number of ordinary lines that are concurrent with each ordinary line is m(n-2). * There are m(m-1) ordinary points and (m-1)^2 ordinary lines in each plane. * The number of pairs (p,l) that consist of an ordinary point p and an ordinary line l that contains p is m(n-1)^2. * The number of planes that contain each ordinary line is (n-2)/(m-2). * The number of pairs (l,π) such that l is an ordinary line and π is a plane that contains l is (n-1)^2 (n-2)/(m-2). * The number of planes in is (n-1)^2(n-2)/[(m-1)^2(m-2)]. * For a fixed ordinary point p, the number of triples (p,l,π) such that l is an ordinary line incident with p and π is a plane that contains l is (n-1)(n-2)/(m-2). * The number of triples (p,l,π) such that p is an ordinary point, l is an ordinary line, and π is a plane that contains l is m(n-1)^2(n-2)/(m-2). * The number of pairs (p,π) such that p is an ordinary point and π is a plane that is incident with p is m(n-1)^2(n-2)/[(m-1)(m-2)]. * The number of planes that are incident with each ordinary point is (n-1)(n-2)/[(m-1)(m-2)]. Proof of (<ref>). Let l be an ordinary line. From Part (<ref>) there are m points on l. From Theorem <ref> Part (<ref>) we know there are n-2 ordinary lines that intersect l at each point. All those lines are distinct. Proof of (<ref>). This follows from the fact that the plane is projective of order m-1. We exclude the one special point D and the m special lines in the plane.
Proof of (<ref>). Each of the (n-1)^2 ordinary lines (Theorem <ref> Part (<ref>)) contains m ordinary points (Part (<ref>)). Proof of (<ref>). Let l be an ordinary line. From Part (<ref>) there are m(n-2) ordinary lines l' that intersect l at exactly one point. Theorem <ref> guarantees the existence of a unique plane π that contains both l and l'. By Part (<ref>) the number of ordinary lines in π that intersect l is (m-1)^2-1 = m(m-2). Thus, the number of planes on l is the quotient, m(n-2)/m(m-2)=(n-2)/(m-2). Proof of (<ref>). The number of ordinary lines should be multiplied by the number of planes on each line. Proof of (<ref>). The number of incident line-plane pairs should be divided by the number of ordinary lines in a plane. Proof of (<ref>). The number of incident line-plane pairs should be multiplied by the number of points in an ordinary line. Proof of (<ref>). The number of triples in Part (<ref>) should be multiplied by the number of ordinary points from Part (<ref>). Proof of (<ref>). The number of triples in Part (<ref>) should be divided by the number of ordinary lines in π that contain p, which is m-1. Proof of (<ref>). Either divide the number of triples in Part (<ref>) by m-1, the number of ordinary lines on p in π, or divide the number in Part (<ref>) by m(n-1), the whole number of ordinary lines on p. Two lines are skew if they have no point in common. A skew class of lines is a maximal set of lines in which every pair is skew. If a line has no skew mate, it is a skew class of one. A line may belong to more than one skew class. Two lines that are skew to the same line may intersect. If is a finite projective rectangle of order (m,n), then the following hold in : * Given an ordinary point p and given any ordinary line l that does not contain p, there are exactly n-m ordinary lines containing p that are skew to l. * If l is an ordinary line, then there are (n-2)(n-m) lines that are skew to l. * If l_1 is skew to l, there are m(n-m) lines skew to l that are concurrent with l_1. Proof of Part (<ref>). From Theorem <ref> Part (<ref>) we know that there are exactly n lines passing through p (including a special line). From Theorem <ref> Part (<ref>) we also know that there are exactly m lines passing through p that intersect l (including a special line). Therefore, there are exactly (n-1)-(m-1) ordinary lines passing through p and skew to l. Part (<ref>) follows by subtracting from the number of ordinary lines, (n-1)^2 (Theorem <ref> Part (<ref>)), the number that are concurrent with l, which is m(n-2) (Theorem <ref> Part (<ref>)), and the number that are l, which is 1. Part (<ref>) follows from Part (<ref>). Suppose that is a nontrivial projective rectangle of order (m,n). Let l be an ordinary line. There is a skew class of lines containing l that has at least m lines in it. I.e., there are m-1 ordinary lines skew to l and skew to one another. Let M = ⌈ (n-m)/(m-1) ⌉ - m, the largest integer such that (n-1)/(m-1)>m+M. Then there is a skew class containing l that has at least m+M lines in it. I.e., there are m+M-1 ordinary lines skew to l and skew to one another. Let l be an ordinary line and let l_1 ≠ l be an ordinary line passing through q∈ l. Let p ≠ q be a second point in l. By Theorem <ref> Part (<ref>), since n>m there is an ordinary line l_2 passing through p skew to l_1. Let a_i and b_i' be the points in l_1 and l_2 for i=1,2, …, m, labeled so that the line a_ib_i' is special.
Lines a_ib_i and a_jb_j for i,j∈{1,2, …, m} with i ≠ j, b_i ≠ b_j, b_i ≠ b_i', and b_j ≠ b_j' are ordinary and are skew to each other, because if they intersect, then by Axiom (A<ref>), l_1 intersects l_2, which is a contradiction. Note that it is easy to choose all b_i ≠ b_i' since m>1. Also, we can suppose that l is the line a_1b_1. Now we suppose that (n-1)/(m-1)-m>0 and M is the largest integer such that (n-1)/(m-1)>m+M. (Thus, n>m+M.) Let s be a special line with points s_1, s_2, …, s_m, …, s_n-1,D. Suppose that s∩ a_ib_i=s_i for i=1, …, m. We prove by induction that there are lines h_1, h_2, …, h_M, skew to one another and to all lines of the form a_ib_i. Assume we have k lines h_1, h_2, …, h_k that are skew to one another and to all lines of the form a_ib_i for some k∈{0,1, …, M-1}, where s_m+t∈ h_t for t=1, 2, …, k. First note that neither h_t nor a_ib_i contains the point s_m+k+1 and that (m-1)(m+k) is the number of points in (⋃_t=1^k h_t∪⋃_i=1^m a_ib_i)∖ s. Thus, the maximum number of ordinary lines passing through s_m+k+1 that intersect a line of the form a_ib_i or one of the lines h_1, …, h_k is (m-1)(m+k). Since s_m+k+1 is an ordinary point, by Theorem <ref> Part (<ref>) we know there are n-1 ordinary lines passing through this point. Since (n-1)>(m-1)(m+k) there must be at least one ordinary line h_k+1 passing through s_m+k+1 that is skew to all lines of the form a_ib_i and to the lines h_1, …, h_k. This proves the induction, completing the proof. In the notation of Theorem <ref>, M = (τ-1)m - 2τ. This is negative or zero if τ = 1, if τ=2 and m≤4, or if τ=3 and m=3, and positive otherwise, so in the “otherwise” case the second bound on the maximum size of the skew class is the better one. §.§ Constraints on the parameters We have found some integers in Theorem <ref>, namely, ρ=(n-2)/(m-2), (n-1)/(m-1) · (n-2)/(m-2), and (n-1)^2/(m-1)^2 · (n-2)/(m-2). These integral fractions imply relationships between m and n. Theorem <ref> is a constraint on n, given a value of m. By Section <ref> m-1 must be the order of a projective plane; that is the only constraint we know on m. Let p,p' be two ordinary points in a special line s. Let s' be any other special line. The planes π that contain both p and p' partition s'∖ D into sets π∩(s'∖ D) of size m-1, and each such set is in a unique plane that contains p and p', so there are (n-1)/(m-1) such planes. For an ordinary point q∈ s' let π(q) denote the plane that contains p,p',q. This plane is unique, by Theorem <ref>, because it is determined by the intersecting ordinary lines pq and p'q. Choose another ordinary point q' ∈ s' ∖π(q) and suppose π(q) and π(q') contain a common point r. Then both planes contain the intersecting ordinary lines pr and p'r, so they must be the same plane. It follows that the distinct planes π(q) for q ∈ s' ∖ D partition the points of s' ∖ D. The intersection π(q) ∩ s' is a line of π(q) that contains D, so the number of ordinary points in it is m-1. The number of sets into which s' ∖ D is partitioned is therefore equal to (n-1)/(m-1), and this is the number of planes that contain both p and p'. For a projective rectangle of order (m,n), there is an integer τ≥ 0 such that n = m + τ (m-1)(m-2). If is nontrivial, then τ≥ 1. We simplify the notation by writing ν=n-1 and μ=m-1. Integrality of (n-2)/(m-2) implies that there is an integer ρ≥ 1 such that ν = 1 + ρ(μ-1). Proposition <ref> implies that ν = σμ for some positive integer σ. Therefore, ν = ρ(μ-1)+1 = σμ. It follows that (ρ-σ)μ = ρ-1, so ρ-1 is a multiple of μ, say ρ = τμ+1 where τ≥0.
Then substituting for ρ gives (τμ+1-σ)μ = τμ, and upon division by μ we find that σ = τ(μ-1) + 1. This implies ν = τμ(μ-1) + μ, so n-m = ν-μ = τμ(μ-1). We infer the expressions (n-2)/(m-2) = τ(m-1)+1, (n-1)/(m-1) = τ(m-2)+1, (n-1)/(m-1) · (n-2)/(m-2) = [τ(m-2)+1] [τ(m-1)+1], (n-1)^2/(m-1)^2 · (n-2)/(m-2) = [τ(m-2)+1]^2 [τ(m-1)+1]. If the projective rectangle is nontrivial, n ≥ (m-1)^2 + 1 and ρ≥ m. If the projective rectangle has m=3, then n= 3 + 2τ, where τ≥0. The value τ=0 gives the Fano plane and τ=1 gives n=5 as with the L_2^2 projective rectangle of Example <ref>. However, not all those values of τ admit a projective rectangle with m=3; there are examples only for n = 2^k+1, that is, for τ = 2^{k-1}-1 (see Section <ref>). Our numerical constraints need strengthening. § AXIAL AND CENTRAL DESARGUES'S THEOREMS Consider two triangles in a projective rectangle, A = a_1a_2a_3 and B = b_1b_2b_3. (A triangle consists of three points, not all collinear, and the three lines joining the points in pairs.) There are three lines l_i = a_ib_i; if they concur in a point p we say the triangles are centrally perspective from center p. If each of the three pairs of lines a_ia_j and b_ib_j meets in a point p_ij and the points p_12, p_13, p_23 are collinear in a line l, we say A and B are axially perspective from axis l. The Central Desargues's Theorem says that, if two triangles are centrally perspective, then they are axially perspective. The converse is the Axial Desargues's Theorem. The two together are generally known as Desargues's Theorem. In a projective plane the points p_ij always exist. However, neither half of Desargues's Theorem is valid in every projective plane; in fact the validity of Desargues's Theorem is equivalent to the existence of plane coordinates in a division ring. Thus, for any plane, knowing whether Desargues's Theorem holds true is a fundamental question. Every projective plane is a projective rectangle, so we cannot say that Desargues's Theorem holds true in every projective rectangle; but eliminating projective planes from consideration changes the situation. We first establish that each triangle in the axial configuration is necessarily coplanar. If A= a_1a_2a_3 is a triangle and l is a line that intersects the three lines a_ia_j in three points p_ij, then all six points and the four lines are contained in a unique plane. There are four lines in the configuration of six points: l and the lines l_ij = a_ia_j. At most two can be special, so two are ordinary, say l' and l''. Any two of the four lines intersect, so l' and l'' intersect; this implies they are in a unique plane π (by Theorem <ref>). The other two lines of the four are each determined by one point in l and one in l', so each is a line of π, or if special the intersection with π is a line of π. Let be a nontrivial projective rectangle. Every plane in satisfies the Axial Desargues's Theorem when the axis is an ordinary line. We begin by assuming triangles A and B are in planes π_A and π_B, respectively, and are axially perspective from an ordinary line l with intersection points p_ij, as in Figure <ref>. The two planes may be the same or different; if they are different, l is their intersection. We may assume a_i ≠ b_i for i=1,2,3 because otherwise the conclusion is trivial. If a_1b_1, a_2b_2, a_3b_3 are not all coplanar, they are coplanar in pairs, since a_i,b_i,a_j,b_j ∈ p_ija_ia_j. Hence, by Proposition <ref> there is a point q at which all three lines are concurrent; therefore, q is a center of perspectivity for A and B.
Thus, we assume henceforth that a_1b_1, a_2b_2, a_3b_3 are all in one plane, so that π_A = π_B. There is another plane π_ on l because is nontrivial and l is ordinary (by Corollary <ref>), and in this plane we can find a triangle = _1_2_3 that is axially perspective from l with the same intersection points p_ij = l ∩_i_j. The lines b_i_i and b_j_j are coplanar in a plane p_ijb_i_j = b_i_ib_j_j. Therefore, they intersect in a point s_ij. The pairwise coplanar lines b_1_1, b_2_2, and b_3_3 are not all coplanar because _1_2_3 = π_∌b_1,b_2,b_3. By Proposition <ref>, those three lines have a common point s = s_12 = s_13 = s_23. See Figure <ref>. Similarly, there is a point r = a_1_1∩a_2_2∩a_3_3. We prove that r ≠ s and r,s ∉π_A. If r=s, then a_i_i = ra_i_i = r_i and b_i_i = sb_i_i = r_i, so ra_i_i and rb_i_i are the same line; that is, a_i,b_i,_i are collinear; but this is impossible. Similarly, a_i,b_i,_i are collinear, which is impossible, if r or s ∈π_A. Each plane a_ib_i_i contains r and s so the lines a_ib_i and rs are coplanar. We know that r,s ∉a_ib_i⊂π_A. Hence, we have three triples a_ib_i, a_jb_j, rs of lines that are coplanar in pairs but not all coplanar. By Proposition <ref> there is a point q_ij at which each triple is concurrent. Then taking i=1 and j=2,3, we have q_12 = rs∩a_1b_1 = q_13, so q_12=q_13 is a point on all three lines a_1b_1, a_2b_2, a_3b_3 and a center of perspectivity for A and B. That completes the proof. The case in which A and B are not coplanar is reminiscent of the higher-dimensional Desargues's Theorem for projective geometries. That suggests a central Desargues's Theorem for noncoplanar triangles. Let be a nontrivial projective rectangle. Then satisfies the Central Desargues's Theorem for triangles that are not coplanar. We begin by assuming triangles A and B are in two different planes, π_A and π_B respectively, and are centrally perspective from a point p. We show that we may assume a_i ≠ b_i for i=1,2,3. Since the triangles are not coplanar, they cannot be equal; in particular, say, a_3 ≠ b_3. The conclusion is trivial if a_1=b_1 and a_2=b_2; the axis is then a_1a_2=b_1b_2. Suppose henceforth that a_2 ≠ b_2 and a_3 ≠ b_3. Assume first that a_1 ≠ b_1. Let l_i := a_ib_i (which exists and contains p by central perspectivity), p_ij := l_i ∩ l_j (which exists because a_i,b_i,a_j,b_j,p are coplanar and any distinct three of them, excluding D if one of them is not ordinary, determine the plane), and λ_ij := p_ikp_jk where {i,j,k} = {1,2,3}. The lines λ_ij exist if a_1 ≠ b_1 because if p_ij=p_ik (i,j,k all different), then this point is the intersection of a_ia_j and a_ia_k but that intersection is a_i, and it is also the intersection of b_ib_j and b_ib_k but that intersection is b_i, from which it follows that a_i=b_i, contrary to our assumption. Now we observe that all points p_ij∈π_A ∩π_B, so all lines λ_i ⊆π_A ∩π_B. But as we assumed π_A ≠π_B, their intersection cannot consist of more than one line. It follows that λ_12 = λ_13 = λ_23 and this is the required axis of perspectivity. If a_1=b_1, in the previous discussion the line l_1 degenerates to a point and the rest of the proof is similar but simpler, with a_1p_23 as the axis of perspectivity. We note that any of the lines in the proof might be special, but because we only argue within planes, the proof is not affected. Theorem <ref> reinforces our belief that a nontrivial projective rectangle should be regarded as, in a strange way, nonplanar. Unfortunately, we were not able to make this intuition precise. 
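For readers who want a concrete picture of the two perspectivity notions used in this section, here is a purely classical sanity check of ours (it is not a substitute for the proofs above, which concern projective rectangles rather than field planes): in the Pappian plane PG(2, F_p) both halves of Desargues's Theorem hold, and the central-to-axial direction can be verified numerically with homogeneous coordinates.

import random

# Classical sanity check: in PG(2, F_p), centrally perspective triangles are
# axially perspective.  Points and lines are homogeneous triples; the cross
# product gives the line through two points and the meet of two lines.
p = 101
def cross(u, v):
    return tuple((u[(i + 1) % 3] * v[(i + 2) % 3]
                  - u[(i + 2) % 3] * v[(i + 1) % 3]) % p for i in range(3))

random.seed(1)
for _ in range(100):
    O = (1, random.randrange(p), random.randrange(p))          # the center
    A = [(1, random.randrange(p), random.randrange(p)) for _ in range(3)]
    B = []
    for a in A:                                                 # b_i on line O a_i
        t = random.randrange(2, p)
        B.append(tuple((O[i] + t * a[i]) % p for i in range(3)))
    # intersection points p_ij = (a_i a_j) meet (b_i b_j)
    P = [cross(cross(A[i], A[j]), cross(B[i], B[j]))
         for i, j in ((0, 1), (0, 2), (1, 2))]
    axis = cross(P[0], P[1])
    # the third intersection point lies on the line through the other two
    assert sum(axis[i] * P[2][i] for i in range(3)) % p == 0
print("central perspectivity implies axial perspectivity in PG(2, F_101)")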
§ THE SUBPLANE CONSTRUCTION OF PROJECTIVE RECTANGLES Given a projective plane π and a subplane π', we wish to get a projective rectangle by taking a point D, all the lines joining it to points of π', all the points on those lines, and all the restrictions to our point set of the lines in π that are generated by our points (i.e., contain at least two of our points). D must be taken in the subplane. Suppose D is not in π'. Take a point P ∈π' and the line PD. This is supposed to be a special line so it must be a line of any plane in the projective rectangle; the proof is that every line of a projective rectangle, thus every line of π', intersects every special line (Axiom (A<ref>)), so L ∩π' cannot be one point. Therefore L ∩π' must be a line of π'. Now consider a second point P' ∈π' ∖ L. Then L and L'=P'D are both extensions of lines of π' so they intersect in π', but they intersect in D; this means D ∈π'. We could simplify the construction: Take a subplane π' and one line l in it, and any point D in π' ∖ l. For the projective rectangle, take all lines that join D to l and for ' take all points of π on those lines. This gives precisely the subplane construction, because already it gives all the points of π' and then only the points generated from D and π' in that construction. A plane is Pappian if it is coordinatized by a (commutative) field. The subplane construction in a Pappian projective plane produces a projective rectangle. Let our point set be ' and the incidence structure induced on it by π be '. There are two kinds of line in ': a long line is a line of π and a short line l is the restriction to ' of a line L of π that is not contained in ', so if l is any short line, L denotes its extension into π. If ' turns out to be a projective rectangle, the long lines will be the special lines of ' and the short lines will be the ordinary lines. Axiom (A<ref>): By definition, since we took every line generated by two points of '. Axiom (A<ref>): Four such points exist in the subplane π'. Axiom (A<ref>): By definition. Axiom (A<ref>): Every point of ' is in a long line, every short line of ' is a restriction of a line L of π, and any two lines of π intersect in a point P. Thus, for each short line l of ', its extension L intersects each long line s in a point which, by definition of ', is in the long line s. Axiom (A<ref>): Follows from (A<ref>) because there are at least 3 special lines. Axiom (A<ref>): Let the other two lines be l_1' and l_2'. If either of them is long, the conclusion follows from Axiom (A<ref>). Therefore, assume l_1' and l_2' are short lines. If two or more of them are in π', then all four are and the property follows from that of a projective plane. This leaves two cases: One of the lines is in π', or none is. We give an analytic proof, using coordinates, when π=π() for a field , so we can take π' to be a subplane generated by a subfield '. We write P := l_1 ∩ l_2, Q_ij := l_i ∩ l_j', R := L_1' ∩ L_2'. We need to prove that R ∈'. We give an analytic proof. Write I_m for the point on the ideal line L_∞ that is on all lines of slope m. We choose D to be the point I_∞ on all vertical lines of π; thus, the point set of our supposed projective rectangle is ' = {[z:x:y] : z=0, or z=1 and x ∈'}. We consider two cases, depending on whether or not one of the short lines is within π'=π('). Case 1. One of the short lines is in π', say l_1 ⊆π'. 
Since we can assign noncollinear coordinates arbitrarily to any three noncollinear points in π', we may choose the coordinate system so that l_1 has the equation y=0, P = (0,0), l_2 has the equation y=m_2x, and l_2' has the equation y = b_2' (where b_2' ∉' since l_2' ⊈π'). Then Q_12 = I_0. The equation of l_1' has the form y = m_1'x+b_1'. Note that m_2, m_1' ∉' since l_2, l_1' are not in π'. From this information we can find the coordinates of the other intersection points. They are Q_11 = (-b_1'/m_1', 0 ), Q_21 = (b_1'/m_2-m_1', y_21), Q_22 = (b_2'/m_2, b_2'), R = (b_2'-b_1'/m_1' , b_2'). Because Q_11, Q_21, Q_22∈', their x-coordinates are in '. None equals 0. Therefore, m_1'/b_1', m_2-m_1'/b_1', b_2'/m_2∈', so also m_2/b_1'∈'. The x-coordinate of R is b_2'-b_1'/m_1' = b_2'/m_1' - b_1'/m_1' = b_2'/m_2m_2/b_1'b_1'/m_1' - b_1'/m_1'∈', proving that R ∈'. Case 2. None of the four short lines is in π'. We choose coordinates so that P ∈ L_∞; that is, P = I_m for some m ∈, so l_1 has equation y=mx+b_1 and l_2 has equation y=mx+b_2 with b_1,b_2 ∈ and b_1 ≠ b_2. The other lines l_j' have equations y = m_j'x+b_j', where m_j' ≠ m. The special case m_1'=m_2' is not excluded, but then R ∈ L_∞⊆', so we may assume m_1' ≠ m_2'. The special case b_1' = b_2' is also not excluded; then R is in the line x=0; this case will be dealt with in the course of the proof. We can exclude m_1'=m and m_2'=m since then P ∈ l_1' or l_2', respectively, which violates the assumption of Axiom (A<ref>). The intersection points (other than P), which cannot be in L_∞, have coordinates Q_11 = (b_1-b_1'/m_1'-m, y_11), Q_12 = (b_1-b_2'/m_2'-m, y_12), Q_21 = (b_2-b_1'/m_1'-m, y_21), Q_22 = (b_2-b_2'/m_2'-m, y_22), R = (b_1'-b_2'/m_2'-m_1', y_R). The x-coordinates of the Q_ij are in '; we want to show that of R is also in '. Write ρ_ij for the x-coordinate of Q_ij. That is, b_i-b_j' = ρ_ij(m_j'-m). These are four equations E_ij. By combining E_11 with E_21 and E_12 with E_22 we infer that b_2-b_1 = (ρ_21-ρ_11)(m_1'-m) = (ρ_22-ρ_12)(m_2'-m). Thus, m_1'-m/m_2'-m = ρ_22-ρ_12/ρ_21-ρ_11 =: α∈'. (This last step would be forbidden if ρ_21=ρ_11, but that implies l_1' contains D, contrary to assumption.) Now combining E_11 with E_12 and E_21 with E_22 we infer that b_2'-b_1' = ρ_11(m_1'-m) - ρ_12(m_2'-m) = (ρ_12-αρ_11)(m_2'-m) with α∈' and similarly b_2'-b_1' = (ρ_22-βρ_21)(m_1'-m) with β∈'. Rewriting, m_1'-m = b_2'-b_1'/ρ_22-βρ_21, m_2'-m = b_2'-b_1'/ρ_12-αρ_11, which combine to give m_2'-m_1' = (b_2'-b_1') ( 1/ρ_12-αρ_11 - 1/ρ_22-βρ_21), or in a different form, m_2'-m_1'/b_2'-b_1'∈'. This is the reciprocal of the x-coordinate of R; consequently, R ∈'. The one caveat is that, if b_1'=b_2', we cannot proceed from Equation (<ref>); but then that equation implies m_1'=m_2', which was excluded at the beginning of the proof. So this difficulty will not occur. That concludes the proof of Theorem <ref>. If π is Pappian and not prime, it has a prime subplane so there are proper subplanes to carry out this construction. All Desarguesian planes and many others have proper subplanes (e.g., planes over near fields; cf. the book of Hughes and Piper <cit.>). However, we do not know whether the subplane construction works in a non-Pappian plane. We did not try to construct an algebraic proof for Desarguesian planes; we chose to study only Pappian planes to keep the algebra simple. We fear that generalization may require finding a synthetic proof. There are nontrivial projective rectangles in which n=m, but n,m must be infinite. 
Suppose is a field that has a proper subfield ' of the same infinite cardinality. The subplane construction generates a nontrivial projective rectangle with n=|| and m = |'| = n, within which π(') is one of the (full) planes. This contrasts with the case of finite m=n in Proposition <ref>. § NARROW RECTANGLES The smallest allowed value of m is 3. We call a projective rectangle narrow if it has m=3. The matroid L_2^k of Example <ref> is defined for any group 𝔊 (except the trivial group), simply replacing _2^k by 𝔊. In fact, all we need for 𝔊 is a (nontrivial) quasigroup; this matroid is the complete lift matroid L_0(𝔊K_3) from <cit.> or <cit.>. We define L_0(𝔊K_3) in a way compatible with Example <ref>. The ground set is E:= A∪ B∪ C where A:= { a_g | g ∈𝔊}∪{D }, B:= { b_g | g ∈𝔊}∪{D } and C:= { c_g | g ∈𝔊}∪{D }. The lines (rank-2 flats of the matroid) are A, B, and C and the sets {a_g, b_g h, c_h } with g, h ∈𝔊. If this is a projective rectangle, A, B, and C are the special lines and the other lines are the ordinary lines. But L_0(𝔊K_3) is not always a projective rectangle. Every narrow projective rectangle has the form L_0(𝔊K_3) where 𝔊 is a nontrivial group with exponent 2, and conversely. If the projective rectangle is finite, the group is ℤ_2^k with k≥1 and its parameters are (m,n)=(3,2^k+1). This proposition includes infinite groups. First we note that every narrow projective rectangle is an L_0(𝔊K_3) where 𝔊 is a quasigroup of order greater than 1. There are three special lines, which we call A, B, and C. We label the elements of each line, except D, by a set G of labels and we define an operation on G by gh=k such that a_gc_hb_k is an ordinary line of . It is clear that this is well defined and that any two of g,h,k determine the third, so G is a quasigroup. Then is the same as L_0(𝔊K_3) except that in the projective rectangle we ignore the trivial lines of the matroid. Now let us assume that a matroid L_0(𝔊K_3) is a projective rectangle. We prove that 𝔊 satisfies the following fundamental property: gh=ef ⟹ gf=eh. Consider the lines l_1={a_g,b_gh,c_h} and l_2={a_e,b_ef,c_f} in Axiom (A<ref>), and two other lines, l={a_g,b_gf,c_f} and l'={a_e,b_eh,c_h}. According to Axiom (A<ref>) the lines l and l' should have a common point, so b_gf=b_eh, which means gf=eh. Any quasigroup is isotopic to a loop (a quasigroup with identity element, 1), so we may assume 𝔊 is a loop. Suppose h=e=1 in Equation (<ref>). Then g=f ⟹ gf=1; in other words, gg=1 for every element of 𝔊. Suppose g=h and e=f. Then 1=1 ⟹ ge=eg; that is, 𝔊 is commutative. A property that characterizes a quasigroup that is isotopic to a group is the Quadrangle Criterion <cit.>, which states that a_1c_1=a_2c_2, a_1d_1=a_2d_2, and b_1c_1=b_2c_2 together imply b_1d_1=b_2d_2. We prove the Quadrangle Criterion for 𝔊 by means of Equation (<ref>): a_1c_1=a_2c_2 ⟹ a_1a_2=c_1c_2, a_1d_1=a_2d_2 ⟹ a_1a_2=d_1d_2, b_1c_1=b_2c_2 ⟹ b_1b_2=c_1c_2. The first two lines imply that c_1c_2=d_1d_2, and combined with the third line we deduce that b_1b_2=d_1d_2; applying Equation (<ref>) and commutativity once more gives b_1d_1=b_2d_2, proving the Quadrangle Criterion. Hence, 𝔊 is isotopic to a group. By isotopy we may assume 𝔊 is a group, and we have seen that it is abelian and has exponent 2. If 𝔊 is finite, it is ℤ_2^k for some positive integer k as in Example <ref>. These necessary properties of 𝔊 are sufficient for L_0(𝔊K_3) to be a projective rectangle, because exponent 2 implies Axiom (A<ref>), as is easy to verify. The geometry of a narrow projective rectangle is determined by the isotopy type of its quasigroup.
Thus, the finite such rectangles are obtained from a finite Pappian projective plane of 2-power order by the subplane construction of Section <ref> using a Fano subplane. § ORTHOGONAL ARRAYS FROM PROJECTIVE RECTANGLES A transversal design is a partition of a set _T of m(n-1) points into m special sets of size n-1 together with a family of m-subsets of _T such that each such m-set intersects each special set exactly once and each pair of points not contained in a special set lies in exactly one m-set. A projective rectangle with D deleted is exactly a transversal design with the extra partial Pasch property Axiom (A<ref>). A dual concept to transversal designs is that of orthogonal arrays; the corresponding dual to projective rectangles is orthogonal arrays with a dual property to (A<ref>). We explore that dual concept in this section.[We thank Douglas Stinson for drawing our attention to transversal designs.] An orthogonal array (OA) is a generalization of orthogonal latin squares. We adopt the notation for orthogonal arrays used in <cit.>. An N× k array with A entries from S (a set of size s) is said to be an orthogonal array, OA_λ(N,k,s,t), with s symbols, strength 0≤ t ≤ k, and index λ if every N× k subarray of A contains each tuple based on S exactly λ times as a row. We write a(r,c) for the label that appears in row r and column c. §.§ An orthogonal array from points and lines We represent a projective rectangle as an orthogonal array of points and lines. In ∖ D we have m special lines partitioning all the points, and (n-1)^2 ordinary lines. By Theorem <ref>, every ordinary line intersects every special line exactly once and every pair of points in different special lines lie in exactly one ordinary line. Each ordinary line will give a row of the orthogonal array and each special line will give a column. We label the points in each special line by the numbers 1,…,n-1 and we write a(p) for the label of the point p. The entries in a row are the labels of the points that appear in that ordinary line, arranged in the column of the special line that contains the point. Thus, each pair of labels appears once in each pair of columns. That is a 2-(n-1,m,1) orthogonal array in standard notation. In the notation used in <cit.>, it is an OA_1((n-1)^2, m, n-1,2). We formulate a special property for an orthogonal array of type OA_1((n-1)^2, m, n-1,2). (OA6) If four rows in the orthogonal array appear like the first five columns c_ij in this table, c_12 c_13 c_24 c_14 c_23 c_34 r_1 a_12 a_13 a_14 r_2 a_12 a_24 a_23 r_3 a_13 a_23 a_34 r_4 a_24 a_14 a_34 where it is possible that c_13=c_24 or c_14=c_23, then there is a sixth column that appears like c_34. (The empty cells are arbitrary.) The property (OA6) does not follow from the definition of an orthogonal array. We are not aware that it has been considered in the theory of orthogonal arrays or dually in transversal designs. Its contrary, that the sixth column of (OA6) never appears, arises (in the language of transversal designs) as the “anti-Pasch configuration” in <cit.> (whose “Pasch configuration” is slightly stricter than ours).[We are very grateful to Charles Colbourn for hunting in the literature and communicating these facts.] Let n≥ m ≥ 3. * A projective rectangle of order (m,n) gives rise to an orthogonal array OA_1((n-1)^2, m, n-1,2) with property (OA6). * An orthogonal array OA_1((n-1)^2, m, n-1,2) gives rise to a projective rectangle of order (m,n) if, and only if, it satisfies the additional property (OA6). Proof of Part (i). 
We have shown that gives rise to an orthogonal array with the stated parameters. Conversely, suppose we have an OA_1((n-1)^2, m, n-1,2). Let C be the set of m columns, let R be the set of rows, let L be the set of n-1 labels in the array, and write a(r,c) for the entry in row r, column c. We form an incidence structure whose point set is (C× L) ∪ D. The lines of this structure are special lines, of the form s_c = {(c,a) : a ∈ L }∪ D, for each c∈ C, and ordinary lines, of the form l_r = {(c,a) : c ∈ C and a= a(r,c) }, for each r∈ R. We prove this incidence structure satisfies Axioms (A<ref>)–(A<ref>) of a projective rectangle. We assumed n-1≥ m-1≥2 so in the orthogonal array there are at least two distinct labels, which we call a_1 and a_2, and at least 3 columns, of which three are c_1,c_2,c_3. There are also at least 2^3 rows. Proof of Axiom (A<ref>). We consider two points p_1=(r_1,a_1) and p_2=(r_2,a_2) where a_1=a(r_1,c_1) and a_2=a(r_2,c_2). The points belong to the same special line if and only if c_1=c_2. The special line is s_c_1. Otherwise, there is exactly one row r where the entry in column c_1 is a_1 and the entry in column c_2 is a_2. Then p_1 and p_2 belong to the ordinary line l_r. Proof of Axiom (A<ref>). Among the three pairs a(r_1,c_j), a(r_2,c_j) for j=1,2,3, only one can be the same label, a(r_1,c_j) = a(r_2,c_j), because each ordered pair of labels appears only once in the same two columns. Say a(r_1,c_1) ≠ a(r_2,c_1) and a(r_1,c_2) ≠ a(r_2,c_2). Then (c_1,a(r_1,c_1)), (c_1,a(r_2,c_1)), (c_2,a(r_1,c_2)), (c_2,a(r_2,c_2)) are four points, no three collinear. Proof of Axiom (A<ref>). The special line s_c contains at least the three points D, (c,a_1), (c,a_2). The ordinary line l_r contains the points (c_1,a(r,c_1)), (c_2,a(r,c_2)), (c_3,a(r,c_3)). Proof of Axiom (A<ref>). This follows by the definition of the incidence structure. Proof of Axiom (A<ref>). Two special lines intersect only in D. A special line s_c and an ordinary line l_r intersect only in the point (c,a(r,c)). Proof of Part (ii). We assume an orthogonal array is constructed from . Property (OA6) is the interpretation of Axiom (A<ref>) for an OA_1((n-1)^2, m, n-1,2). In Axiom (A<ref>) let l_3 and l_4 be the two lines besides l_1 and l_2. The assumption in the axiom is that points p_ij = l_i ∪ l_j exist for (i,j) = (1,2),(1,3),(2,4),(1,4),(2,3). Let s_ij be the special line that contains p_ij; we note that the special lines are distinct except that s_13 may be the same as s_24 and s_14 may be the same as s_23. In the orthogonal array derived from , the row of line l_i is r_i, the column of line s_ij is c_ij, and the label of p_ij is a(r_i,c_ij)=a(r_j,c_ij). Therefore, the array looks as in Property (OA6), except for the last column. The conclusion of Axiom (A<ref>) is that there is a point p_34 that is incident with both lines l_3 and l_4. That translates to the existence of a final column as in (OA6) with a_34 = a(p_34). Hence, Property (OA6) is satisfied by the array derived from the projective rectangle . Conversely, we prove Axiom (A<ref>) from Property (OA6). Let r_1, r_2 be the rows of the array that correspond to the lines l_1, l_2 in this axiom and let l_3,l_4 be the two other lines with corresponding rows r_3,r_4. The hypotheses of intersection imply that the diagram in Property (OA6) is satisfied, possibly except for the last column. By the assumption of Property (OA6), the final column does exist. 
This implies that l_3∩ l_4 is the point p_34 in the special line s_34 that corresponds to column c_34 and has the label a(p_34) = a_34. Therefore, the conclusion of Axiom (A<ref>) is satisfied. §.§ An orthogonal array from points and planes Ryser gives a nice construction of an orthogonal array from a projective plane <cit.>. We extend Ryser's ideas to construct an orthogonal array from points and planes of a projective rectangle by partitioning the ordinary points outside a given ordinary line by means of the separate planes that contain that line. The proof is based on the proof that Ryser gives for projective planes, adapted to the existence of multiple planes. Let l be an ordinary line in a finite projective rectangle. The family of sets π∖ (l∪ D) for all planes π that contain l is a partition of the points in ∖ (l ∪ D) into (n-2)/(m-2) parts of m(m-2) points each. We observe that every plane in containing l also contains the special point D. If p∉l ∪ D, then by Corollary <ref> there is a unique plane on l that contains p; thus, the planes on l partition the points in ∖ (l ∪ D). The number of such planes is given by Theorem <ref> Part (<ref>). The number of parts of the resulting partition equals the number of planes that contain the line l. Suppose that (m,n) is the order of the projective rectangle . Let l ∈ be an ordinary line and let π_1, π_2, …, π_w be all the planes in that contain l, where w=(n-2)/(m-2). Then this gives rise to an orthogonal array of the form OA_w((m-1)^2 w, m, m-1, 2). Let p_1, p_2, …, p_m be the points of l. We label the points in π_i∖ l by q_1^i, q_2^i, …, q_k^i where k=(m-1)^2 (D is one of these points) and label the lines on p_r in π_i∖ l with 1, 2, …, m-1 for each r=1,2, …, m. We write a_st^i to record the label of the line q_s^ip_t∈π_i. We claim that the matrix A_i=[a_st^i]_s,t is an orthogonal array of the form OA_1((m-1)^2,m,m-1,2). We prove this by contradiction. Suppose that there are two ordered pairs in the rows of A_i that are equal; that is, (a_s_1t_1^i,a_s_1t_2^i) =(a_s_2t_1^i,a_s_2t_2^i) with s_1 ≠ s_2. Therefore, a_s_1t_1^i=a_s_2t_1^i and a_s_1t_2^i =a_s_2t_2^i. The equality of these labels implies that the points q_s_1^i, q_s_2^i, and p_t_1^i are collinear and that q_s_1^i, q_s_2^i, and p_t_2^i are also collinear. Thus, each p_t_j^i is the unique point of l on the same line q_s_1^i q_s_2^i. Therefore, p_t_1^i = p_t_2^i, but that is impossible because t_1 ≠ t_2. Now let B=[ A_1; A_2; ⋮; A_w ]. This matrix is an orthogonal array of the form OA_λ((m-1)^2 w, m, m-1, 2) where λ = ∑_i=1^w 1 = w. That completes the proof. We give an example for Theorem <ref> using the projective rectangle L_2^2 depicted in Figure <ref>. For the sake of simplicity we pick the line l={a_1, b_1,c_1}. We recall that for an ordinary line in L_2^2, there are exactly λ=3 planes having that line in common. Figure <ref> shows the three planes embedded in L_2^2 with l as common line. For the first plane, say π_1, we distinguish the points a_1, a_g, b_1, b_g, c_1, c_g and D_1:=D. For a fixed point in l there are two lines in π_1∖ l passing through the fixed point; from the set {1,2} we assign labels to these lines. For the lines {a_1,a_g,D_1} and {a_1,b_g,c_g}, which intersect l at a_1, we assign 1 and 2 to them, respectively. We arbitrarily assign 1 and 2 to {b_1,b_g,D_1} and {a_g,b_1,c_g}, respectively, and also to {a_g,b_g,c_1} and {c_g,c_1,D_1}. With these labels we construct the first four rows of the rectangular array in Table <ref>.
The columns of the array are labeled on top with the points in the line l and the rows are labeled on the left with the points in each plane that are not in l. In this case the first four rows are labeled with the points in π_1∖ l. The entries of the rectangular array are the labels of the lines passing through the point in the column label and the point in the row label. For instance, the first entry of the first row in Table <ref> is 1, because the line passing through a_1 and a_g has label 1. The first entry of the fourth row is 1, because the line passing through a_1 and D has label 1. The second plane in Figure <ref>, π_2, has the points a_1, a_h, b_1, b_h, c_1, c_h and D_2:=D. As in π_1, we assign arbitrary labels from {1,2}. We choose 1 to be the label of {a_1,b_h,c_h}, {a_h,b_1,c_h}, and {c_1,c_h,D_2} and 2 as the label of {a_1,a_h,D_2}, {b_1,b_h,D_2}, and {a_h,b_h,c_1}. For the third plane in Figure <ref>, π_3 with points a_1, a_gh, b_1, b_gh, c_1, c_gh and D_3:=D, we also assign arbitrary labels from {1,2}. So, for example, 1 will be the label of {a_1,a_gh,D_3}, {a_gh,b_1,c_gh}, and {a_gh,b_gh,c_1} and 2 will be the label of {a_1,b_gh,c_gh}, {b_1,b_gh,D_3}, and {c_1,c_gh,D_3}. These give the orthogonal array OA_3(12,3,2,2). This is a 12 × 3 array filled with 2 symbols, such that in any 2 columns there are 4 different ordered pairs, each repeated λ=3 times. § THE DUAL INCIDENCE STRUCTURE The dual structure is obtained by interchanging the roles of points and lines. It is interesting in its own right, as it connects projective rectangles with incidence geometry in a different way. The dual is essentially a net with a complete quadrangle property. Being a dual projective rectangle, it contains all the dual projective planes of the planes of the original projective rectangle. A net is an incidence structure (,,ℐ) which consists of a set of points and a set of parallel classes _i (i ∈ an index set) of lines, such that each line is a set of points, every point belongs to exactly one line of each parallel class, and any two lines of different parallel classes have exactly one point in common. The theory of nets is extensive. It is easy to prove that every parallel class has the same number of lines and that the number of points on every line is the same. We call these points and lines ordinary. By adding a special point for each parallel class, which is defined to belong to all lines of that class and no other ordinary lines, and adding one special line that contains all the special points, we get a projectively extended net. (“Projectively” refers to the existence of the special line.) Two points need not be in any common line; they are called collinear if they are in a line, and they cannot be in more than one common line. A complete quadrangle in a net consists of 4 points, no three collinear, and the 6 lines determined by them. A nearly complete quadrangle consists of the same 4 points and 5 of the 6 lines; the sixth line may or may not exist. The dual of Axiom (A<ref>) is (A<ref>*) (Complete Quadrangle Property) Every nearly complete quadrangle is complete. A projective extension of a net has the complete quadrangle property if and only if the unextended net has it. Assume a net has the complete quadrangle property and consider the cases in its extension that are not in itself. If P_1' and P_2' are special points, they are already collinear. Suppose only P_1' is special: then it is in every line of some parallel class, and that class includes a line that contains P_2'.
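Before turning to the dual of a projective rectangle, here is a small computational cross-check of ours (not part of the paper) of the worked example above: it rebuilds the three planes through l = {a_1, b_1, c_1} in L_2^2, assigns the labels 1 and 2 to the two lines through each point of l (the choice is arbitrary, as in the text, so the entries may differ from Table <ref>), stacks the three 4 × 3 blocks, and verifies the defining property of OA_3(12,3,2,2).

from itertools import product
from collections import Counter

names = ["1", "g", "h", "gh"]                 # Z_2 x Z_2 as {0,1,2,3} under XOR
def pt(kind, i): return kind + names[i]

def plane_lines(e):                           # the seven lines of the plane pi_e
    spec = [frozenset({pt(k, 0), pt(k, e), "D"}) for k in "abc"]
    ordi = [frozenset({pt("a", x), pt("b", x ^ y), pt("c", y)})
            for x in (0, e) for y in (0, e)]
    return spec + ordi

l = frozenset({"a1", "b1", "c1"})
rows = []
for e in (1, 2, 3):                           # the three planes through l
    lines = plane_lines(e)
    label = {}                                # label the two lines != l through
    for p0 in ("a1", "b1", "c1"):             # each point of l with 1 and 2
        through = sorted((ln for ln in lines if p0 in ln and ln != l),
                         key=lambda s: sorted(s))
        for lab, ln in enumerate(through, start=1):
            label[ln] = lab
    for q in (pt("a", e), pt("b", e), pt("c", e), "D"):   # points of pi_e \ l
        row = []
        for p0 in ("a1", "b1", "c1"):
            ln = next(x for x in lines if p0 in x and q in x and x != l)
            row.append(label[ln])
        rows.append(tuple(row))

for i, j in ((0, 1), (0, 2), (1, 2)):         # every ordered pair occurs 3 times
    assert Counter((r[i], r[j]) for r in rows) == \
           Counter({pr: 3 for pr in product((1, 2), repeat=2)})
print(len(rows), "rows; OA_3(12,3,2,2) verified")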
The dual of a projective rectangle is a projective extension of a net that has the complete quadrangle property, at least three parallel classes, and at least 2 lines in each parallel class, and vice versa. We dualize the rectangle axioms and consider how they apply to the net. * Every two distinct lines contain exactly one point in common. This is true by definition if one of the lines is the special line. It is valid in the net except when the lines are parallel. Parallel lines have a common point in the extension. * There exist four lines in the extended net with no three of them concurrent. Take the special line, three special points, and one ordinary line on each of the special points. If the three ordinary lines are concurrent, replace one of them by a parallel line. Or, take two lines from each of two parallel classes. * Every point is in at least three distinct lines. This is equivalent for an ordinary point to the existence of at least 3 parallel classes and for a special point to the existence of a parallel to each ordinary line. * There is a special line D. (A point in with D is called special. A point that is not in D and a line that is not D are called ordinary.) This is part of the definition of a projectively extended net. * Each special point belongs to exactly one line with each other point. This is part of the definition of a projectively extended net. * If two ordinary points P_1 and P_2 are collinear, then any two other points that are collinear with P_1 and P_2 through four distinct lines (i.e., there are four distinct lines P_iP_j' for i,j=1,2), are themselves collinear. It is clear that Axiom (A*<ref>) is the complete quadrangle property for the extended net, excluding the case where P_1 or P_2 is special. Lemma <ref> says that the two formulations are actually equivalent. § OPEN PROBLEMS Our work on nontrivial projective rectangles leaves many unanswered questions. Here are some to add those in the body of the paper. * All our examples of projective rectangles are substructures of Pappian projective planes that can be obtained by the subplane construction. Are there other examples? * We are ignorant of how a special line compares in its intersections with two planes π and π'. Two questions stand out. * If a plane π has an ordinary line l, there are many other planes in which l is a line. However, if l is special, i.e., l = s ∩π for a special line s, we have no idea whether even one other plane has l as a line. * We do not know whether there may be another plane π' such that s ∩π∩π' has a specific cardinality (not greater than m), what the possible values of |s ∩π∩π'| may be, whether 0 is a possible value in every nontrivial (aside from L_2^2, where it is not), or in the infinite case whether it is even possible that s ∩π' may properly contain s ∩π. * We proved the subplane construction of Section <ref> only for Pappian planes, coordinatizable by a field. * Is there an analytic proof for skew fields? * Does an analytic proof using alternative algebras succeed in planes with weaker coordinate algebras such as near fields and alternative algebras? * Is there a synthetic proof for Pappian or Desarguesian or other projective planes? * Does the construction exist in non-Desarguesian, or non-Moufang, planes? * Are all planes in a projective rectangle isomorphic? We were unable to find a proof or a counterexample. * What do the partial Desargues's theorems in Section <ref> imply about automorphisms and coordinatizations? 
* Is there a rigorous sense in which a projective rectangle is higher-dimensional, as suggested in Section <ref> and <cit.>? * If every plane in is Moufang, it has coordinates in an alternative ring. If all such rings are isomorphic, does extend to a Moufang plane with an alternative ring that extends that of the planes in ? * Given a projective rectangle, in what projective planes can it be embedded? In particular, our constructions by subplanes and harmonic extension give projective rectangles embedded in a Pappian plane but the same rectangles may possibly be isomorphically embeddable in planes that are not Pappian, not Desarguesian, maybe not even Moufang, in a nontrivial way, i.e., not by finding the Pappian plane as a subplane of a non-Pappian plane. 99 dk J. Dénes and A. D. Keedwell, Latin Squares and Their Applications. Academic Press, New York–London, 1974. dls Jeff H. Dinitz, Alan C. H. Ling, and Douglas R. Stinson, Perfect hash families from transversal designs. Australas. J. Combin. 37 (2007), 233–242. rfhc Rigoberto Flórez, Harmonic conjugation in harmonic matroids. Discrete Math. 309 (2009), 2365–2372. bgpp Rigoberto Flórez and Thomas Zaslavsky, Projective planarity of matroids of 3-nets and biased graphs. Australasian J. Combin. 77(2) (2020), 299–338. pr2 Rigoberto Flórez and Thomas Zaslavsky, Projective rectangles: Incidence graphs and higher structure. In preparation. pr3 Rigoberto Flórez and Thomas Zaslavsky, Projective rectangles: Harmonic conjugation. In preparation. Hedayat A. S. Hedayat, N. J. A. Sloane, and J. Stufken, Orthogonal Arrays, Theory and Applications. Springer-Verlag, New York, 1999. HP Daniel R. Hughes and Fred C. Piper, Projective Planes. Grad. Texts in Math., Vol. 6. Springer-Verlag, New York, 1973. MR 48 #12278. Zbl 267.50018. ldt Bernt Lindström, A Desarguesian theorem for algebraic combinatorial geometries. Combinatorica 5 (1985), no. 3, 237–239. lhc Bernt Lindström, On harmonic conjugates in full algebraic combinatorial geometries. Europ. J. Combin. 7 (1986), 259–262. Ryser H. J. Ryser, Combinatorial Mathematics. Carus Math. Monographs, No. 14. Math. Assoc. Amer., New York, 1963. vw J. H. van Lint and R. M. Wilson, A Course in Combinatorics. Second ed. Cambridge University Press, Cambridge, Eng., 2001. b1 Thomas Zaslavsky, Biased graphs. I. Bias, balance, and gains. J. Combin. Theory Ser. B 47 (1989), 32–52. b2 Thomas Zaslavsky, Biased graphs. II. The three matroids. J. Combin. Theory Ser. B 51 (1991), 46–72.
http://arxiv.org/abs/2307.04628v1
20230710151413
Tight Algorithmic Applications of Clique-Width Generalizations
[ "Vera Chekan", "Stefan Kratsch" ]
cs.DS
[ "cs.DS" ]
Kibble-Zurek Mechanism for Nonequilibrium Generation of Magnetic Monopoles in Spin Ices Gia-Wei Chern August 12, 2023 ========================================================================================== In this work, we study two natural generalizations of clique-width introduced by Martin Fürer. Multi-clique-width (mcw) allows every vertex to hold multiple labels [ITCS 2017], while for fusion-width (fw) we have a possibility to merge all vertices of a certain label [LATIN 2014]. Fürer has shown that both parameters are upper-bounded by treewidth thus making them more appealing from an algorithmic perspective than clique-width and asked for applications of these parameters for problem solving. First, we determine the relation between these two parameters by showing that ≤ + 1. Then we show that when parameterized by multi-clique-width, many problems (e.g., Connected Dominating Set) admit algorithms with the same running time as for clique-width despite the exponential gap between these two parameters. For some problems (e.g., Hamiltonian Cycle) we show an analogous result for fusion-width: For this we present an alternative view on fusion-width by introducing so-called glue-expressions which might be interesting on their own. All algorithms obtained in this work are tight up to (Strong) Exponential Time Hypothesis. § INTRODUCTION In parameterized complexity apart from the input size we consider a so-called parameter and study the complexity of problems depending on both the input size and the parameter where the allowed dependency on the input size is polynomial. In a more fine-grained setting one is interested in the best possible dependency on the parameter under reasonable conjectures. A broad line of research is devoted to so-called structural parameters measuring how simple the graph structure is: different parameters quantify various notions of possibly useful input structure. Probably the most prominent structural parameter is treewidth, which reflects how well a graph can be decomposed using small vertex separators. For a variety of problems, the tight complexity parameterized by treewidth (or its path-like analogue pathwidth) has been determined under the so-called Strong Exponential Time Hypothesis (e.g., <cit.>). However, the main drawback of treewidth is that it is only bounded in sparse graphs: a graph on n vertices of treewidth k has no more than nk edges. To capture the structure of dense graphs, several parameters have been introduced and considered. One of the most studied is clique-width. The clique-width of a graph is at most k if it can be constructed using the following four operations on k-labeled graphs: create a vertex with some label from 1, …, k; form a disjoint union of two already constructed graphs; give all vertices with label i label j instead; or create all edges between vertices with labels i and j. It is known that if a graph has treewidth k, then it has clique-width at most 3 · 2^k-1 and it is also known that an exponential dependence in this bound is necessary <cit.>. Conversely, cliques have clique-width at most 2 and unbounded treewidth. So on the one hand, clique-width is strictly more expressive than treewidth in the sense that if we can solve a problem efficiently on classes of graphs of bounded clique-width, then this is also true for classes of graphs of bounded treewidth. 
On the other hand, the exponential gap has the effect that as the price of solving the problem for larger graph classes we potentially obtain worse running times for some graph families. Fürer introduced and studied two natural generalizations of clique-width, namely fusion-width (fw) <cit.> and multi-clique-width (mcw) <cit.>. For fusion-width, additionally to the clique-width operations, he allows an operator that fuses (i.e., merges) all vertices of label i. Originally, fusion-width (under a different name) was introduced by Courcelle and Makowsky <cit.>. However, they did not suggested studying it as a new width parameter since it is parametrically (i.e., up to some function) equivalent to clique-width. For multi-clique-width, the operations remain roughly the same as for clique-width but now every vertex is allowed to have multiple labels. For these parameters, Fürer showed the following relations to clique-width (cw) and treewidth (tw): ≤≤· 2^ ≤≤ 2^ ≤ + 2 ≤ + 2 Fürer also observed that the exponential gaps between clique-width and both fusion- and multi-clique-width are necessary. As our first result, we determine the relation between fusion-width and multi-clique-width: For every graph G, it holds that (G) ≤(G) + 1. Moreover, given a fuse-k-expression ϕ of G, a multi-clique-width-(k + 1)-expression of G can be created in time polynomial in |ϕ| and k. The relations in (<ref>) imply that a problem is FPT parameterized by fusion-width resp. multi-clique-width if and only if this is the case for clique-width. However, the running times of such algorithms might strongly differ. Fürer initiated a fine-grained study of problem complexities relative to multi-clique-width, starting with the Independent Set problem. He showed that this problem can be solved in (2^) where hides factors polynomial in the input size. On the other hand, Lokshtanov et al. proved that under SETH no algorithm can solve this problem in ((2-ε)^) where denotes the parameter called pathwidth <cit.>. Clique-width of a graph is at most its pathwidth plus two <cit.> so the same lower bound holds for clique-width and hence, multi-clique-width as well. Therefore, the tight dependence on both clique-width and multi-clique-width is the same, namely (2^k). We show that this is the case for many further problems. Let G be a graph given together with a multi-k-expression of G. Then: * Dominating Set can be solved in time (4^k); * q-Coloring can be solved in time ((2^q - 2)^k); * Connected Vertex Cover can be solved in time (6^k); * Connected Dominating Set can be solved in time (5^k). And these results are tight under SETH. Further, Chromatic Number can be solved in time f(k) · n^2^𝒪(k) and this is tight under ETH. We prove this by providing algorithms for multi-clique-width with the same running time as the known tight algorithms for clique-width. The lower bounds for clique-width known from the literature then apply to multi-clique-width as well proving the tightness of our results. By <ref>, these results also apply to fusion-width. For the following three problems we obtain similar tight bounds relative to fusion-width as for clique-width, but it remains open whether the same is true relative to multi-clique-width: Let G be a graph given together with a fuse-k-expression of G. Then: * Max Cut can be solved in time f(k) · n^𝒪(k); * Edge Dominating Set can be solved in time f(k) · n^𝒪(k); * Hamiltonian Cycle can be solved in time f(k) · n^𝒪(k). And these results are tight under ETH. 
To prove these upper bounds, we provide an alternative view on fuse-expressions, called glue-expressions, interesting on its own. We show that a fuse-k-expression can be transformed into a glue-k-expression in polynomial time and then present dynamic-programming algorithms on glue-expressions. Due to the exponential gap between clique-width and both fusion- and multi-clique-width, our results provide exponentially faster algorithms on graphs witnessing these gaps. Related Work Two parameters related to both treewidth and clique-width are modular treewidth (mtw) <cit.> and twinclass-treewidth <cit.> (unfortunately, sometimes also referred to as modular treewidth). It is known that ≤mtw + 3 (personal communication with Falko Hegerfeld). Further dense parameters have been widely studied in the literature. Rank-width (rw) was introduced by Oum and Seymour and it reflects the _2-rank of the adjacency matrices in the so-called branch decompositions. Originally, it was defined to obtain a fixed-parameter approximation of clique-width <cit.> by showing that rw≤≤ 2^rw + 1 - 1. Later, Bui-Xuan et al. started the study of algorithmic properties of rank-width <cit.>. Recently, Bergougnoux et al. proved the tightness of first ETH-tight lower bounds for this parameterization <cit.>. Another parameter defined via branch-decompositions and reflecting the number of different neighborhoods across certain cuts is boolean-width (boolw), introduced by Bui-Xuan et al. <cit.>. Fürer <cit.> showed that boolw≤≤ 2^boolw. Recently, Eiben et al. presented a framework unifying the definitions and algorithms for computation of many graph parameters <cit.>. Organization We start with some required definitions and notations in <ref>. In <ref> we prove the relation between fusion-width and multi-clique-width from <ref>. After that, in <ref> we introduce glue-k-expressions and show how to obtain such an expression given a fuse-k-expression of a graph. Then in <ref> we employ these expressions to obtain algorithms parameterized by fusion-width. In <ref> we present algorithms parameterized by multi-clique-width. We conclude with some open questions in <ref>. § PRELIMINARIES For k ∈_0, we denote by [k] the set {1, …, k} and we denote by [k]_0 the set [k] ∪{0}. We use standard graph-theoretic notation. Our graphs are simple and undirected if not explicitly stated otherwise. For a graph H and a partition (V_1, V_2) of V(H), by E_H(V_1, V_2) = {{v_1, v_2}| v_1 ∈ V_1, v_2 ∈ V_2} we denote the set of edges between V_1 and V_2. For a set S of edges in a graph H, by V(S) we denote the set of vertices incident with the edges in S. A k-labeled graph is a pair (H, _H) where _H V(H) → [k] is a labeling function of H. Sometimes to simplify the notation in our proofs we will allow the labeling function to map to some set of cardinality k instead of the set [k]. In the following, if the number k of labels does not matter, or it is clear from the context, we omit k from the notions (e.g., a labeled graph instead of a k-labeled graph). Also, if the labeling function is clear from the context, then we simply call H a labeled graph as well. Also we sometimes omit the subscript H of the labeling function _H for simplicity. For i ∈ [k], by U^H_i = _H^-1(i) we denote the set of vertices of H with label i. We consider the following four operations on k-labeled graphs. * Introduce: For i ∈ [k], the operator v⟨ i ⟩ creates a graph with a single vertex v that has label i. We call v the title of the vertex. 
* Union: The operator ⊕ takes two vertex-disjoint k-labeled graphs and creates their disjoint union. The labels are preserved. * Join: For i ≠ j ∈ [k], the operator η_i, j takes a k-labeled graph H and creates the supergraph H' on the same vertex set with E(H') = E(H) ∪{{u, v}|_H(u) = i, _H(v) = j}. The labels are preserved. * Relabel: For i ≠ j, the operator ρ_i → j takes a k-labeled graph H and creates the same k-labeled graph H' apart from the fact that every vertex that with label i in H instead has label j in H'. A well-formed sequence of such operations is called a k-expression or a clique-expression. With a k-expression ϕ one can associate a rooted tree such that every node corresponds to an operator, this tree is called a parse tree of ϕ. With a slight abuse of notation, we denote it by ϕ as well. By G^ϕ we denote the labeled graph arising in ϕ. And for a node t of ϕ by G^ϕ_t we denote the labeled graph arising in the subtree (sometimes also called a sub-expression) rooted at t, this subtree is denoted by ϕ_t. The graph G^ϕ_t is then a subgraph of G^ϕ. A graph H has clique-width of at most k if there is a labeling function _H of H and a k-expression ϕ such that G^ϕ is equal to (H, _H). By (H) we denote the smallest integer k such that H has clique-width at most k. Fürer has studied two generalizations of k-expressions <cit.>. Fuse: For i ∈ [k], the operator θ_i takes a k-labeled graph H with ^-1_H(i) ≠∅ and fuses the vertices with label i, i.e., the arising graph H' has vertex set (V(H) - ^-1_H(i)) ∪̇{v}, the edge relation in V(H) - ^-1_H(i) is preserved, and N_H'(v) = N_H(^-1_H(i)). The labels of vertices in V(H') - v are preserved, and vertex v has label i. A fuse-k-expression is a well-formed expression that additionally to the above four operations is allowed to use fuses. We adopt the above notations from k-expressions to fuse-k-expressions. Let us only remark that for a node t of a fuse-k-expression ϕ, the graph G^ϕ_t is not necessarily a subgraph of G^ϕ since some vertices of G^ϕ_t might be fused later in ϕ. Originally, Fürer allows that a single introduce-node creates multiple, say q, vertices with the same label. However, we can eliminate such operations from a fuse-expression ϕ as follows. If the vertices introduced at some node participate in some fuse later in the expression, then it suffices to introduce only one of them. Otherwise, we can replace this introduce-node by q nodes introducing single vertices combined using union-nodes. These vertices are then also the vertices of G^ϕ. So in total, replacing all such introduce-nodes would increase the number of nodes of the parse tree by at most 𝒪(|V(G^ϕ)|), which is not a problem for our algorithmic applications. Another generalization of clique-width introduced by Fürer is multi-clique-width (mcw) <cit.>. A multi-k-labeled graph is a pair (H, _H) where _H V(H) → 2^[k] is a multi-labeling function. We consider the following four operations of multi-k-labeled graphs. * Introduce: For q ∈ [k] and i_1, … i_q ∈ [k], the operator v ⟨ i_1, …, i_q ⟩ creates a multi-k-labeled graph with a single vertex that has label set {i_1, …, i_q}. * Union: The operator ⊕ takes two vertex-disjoint multi-k-labeled graphs and creates their disjoint union. The labels are preserved. * Join: For i ≠ j ∈ [k], the operator η_i, j takes a multi-k-labeled graph H and creates its supergraph H' on the same vertex set with E(H') = E(H) ∪{{u, v}| i ∈_H(u), j ∈_H(v)}. 
This operation is only allowed when there is no vertex in H with labels i and j simultaneously, i.e., for every vertex v of H we have {i, j}⊈_H(v). The labels are preserved. * Relabel: For i ∈ [k] and S ⊆ [k], the operator ρ_i → S takes a multi-k-labeled graph H and creates the same multi-labeled graph apart from the fact that every vertex with label set L ⊆ [k] such that i ∈ L in H instead has label set (L ∖{i}) ∪ S in H'. Note that S = ∅ is allowed. A well-formed sequence of these four operations is called a multi-k-expression. As for fuse-expressions, Fürer allows introduce-nodes to create multiple vertices but we can eliminate this by increasing the number of nodes in the expression by at most 𝒪(|V(G^ϕ)|). We adopt the analogous notations from k-expressions to multi-k-expressions. Complexity To the best of our knowledge, the only known way to approximate multi-clique-width and fusion-width is via clique-width, i.e., to employ the relation (<ref>). The only known way to approximate clique-width is, in turn, via rank-width. This way we obtain a 2^2^k-approximation of multi-clique-width and fusion-width running in FPT time. For this reason, to obtain tight running times in our algorithms we always assume that a fuse- or multi-k-expression is provided. Let us emphasize that this is also the case for all tight results for clique-width in the literature (see e.g., <cit.>). In this work, we will show that if a graph admits a multi-k-expression resp. a fuse-k-expression, then it also admits one whose size is polynomial in the size of the graph. Moreover, such a “compression” can be carried out in time polynomial in the size of the original expression. Therefore, we delegate this compression to a black-box algorithm computing or approximating multi-clique-width or fusion-width and assume that provided expressions have size polynomial in the graph size. (Strong) Exponential Time Hypothesis The algorithms in this work are tight under one of the following conjectures formulated by Impagliazzo et al. <cit.>. The Exponential Time Hypothesis (ETH) states that there is 0 < ε < 1 such that 3-Sat with n variables and m clauses cannot be solved in time (2^ε n). The Strong Exponential Time Hypothesis (SETH) states that for every 0 < ε < 1 there is an integer q such that q-Sat cannot be solved in time (2^ε n). In this work, hides factors polynomial in the input size. Simplifications If the graph is clear from the context, by n we denote the number of its vertices. If not stated otherwise, the number of labels is denoted by k and a label is a number from [k]. § RELATION BETWEEN FUSION-WIDTH AND MULTI-CLIQUE-WIDTH In this section, we show that for every graph, its multi-clique-width is at most as large as its fusion-width plus one. Since we are interested in parameterized complexity of problems, the constant additive term to the value of a parameter does not matter. To prove the statement, we show how to transform a fuse-k-expression of a graph H into a multi-(k+1)-expression of H. Fürer has proven the following relation: For every graph H, it holds that (H) ≤(H) · 2^(H). We will use his idea behind the proof of this lemma to prove our result. For every graph H, it holds that (H) ≤(H) + 1. Moreover, given a fuse-k-expression ϕ of H, a multi-(k + 1)-expression of H can be created in time polynomial in |ϕ| and k. Let H be a graph. We start by showing that (H) ≤ 2 ·(H) holds. 
To prove this, we will consider a fuse-k-expression of H and from it, we will construct a multi-2k-expression of H using labels {1, …, k, 1, …, k}. For simplicity of notation, let [k] = {1, …, k}. For this first step, we strongly follow the construction of Fürer in his proof of <ref>. There he uses k · 2^k labels from the set [k] × 2^[k] so the second component of such a label is a subset of [k]. We will use that multi-clique-width perspective already allows vertices to have sets of labels and model the second component of a label via subsets of [k]. Then we will make an observation that allows us to (almost) unify labels i and i for every i ∈ [k]. Using one additional label ⋆, we will then obtain a multi-(k+1)-expression of H using labels [k] ∪{⋆}. First of all, we perform several simple transformations on ϕ without changing the arising graph. We suppress all join-nodes that do not create new edges, i.e., we suppress a join-node t if for its child t' it holds G_t = G_t'. Then we suppress all nodes fusing less than two vertices, i.e., a θ_i-node t for some i ∈ [k] is suppressed if for its child t', the labeled graph G^ϕ_t' contains less than two vertices with label i. Now we provide a short intuition for the upcoming transformation. Let x be a θ_i-node creating a new vertex, say u, by fusing some vertices, say U. And let y be an ancestor of x such that y is a fuse-node that fuses vertex u with some further vertices, say W. Then we can safely suppress the node x: the fuse of vertices from U is then simply postponed to y, where these vertices are fused together with W. Now we fix some notation used in the rest of the proof. Let x be a node, let y be an ancestor of x, and let t_1, …, t_q be all inner relabel-nodes on the path from x to y in the order they occur on this path. Further, let s_1, …, s_q ∈ [k] and r_1, …, r_q ∈ [k] be such that the node t_j is a ρ_s_j → r_j-node for every j ∈ [q]. Then for all i ∈ [k], we define ρ^*_x,y(i) = σ_q ( σ_q-1 ( …σ_1(i) ) ) where σ_j(i') = i' if i' ≠ s_j r_j if i' = s_j for j ∈ [q]. Intuitively, if we have some vertex v of label i in G^ϕ_x, then ρ^*_x, y(i) denotes the label of v in G^ϕ_y' where y' denotes the child of y, i.e., ρ^*_x, y(i) is the label of v right before the application of y. Now for every i ∈ [k] and every θ_i-node x, if there exists an ancestor y of x in ϕ such that y is a θ_ρ^*_x,y(i)-node, we suppress the node x. In this case, we call x skippable. Finally, we transform the expression in such a way that a parent of every leaf is a union-node as follows. Let x be a leaf with introducing a vertex v of label i for some i ∈ [k]. As a result of the previous transformations, we know that the parent y of x is either a relabel- or a union-node. In the latter case, we skip this node. Otherwise, let i_1 ≠ i_2 ∈ [k] be such that y is a ρ_i_1 → i_2-node. If i_1 ≠ i, then we suppress y. Otherwise, we suppress y and replace x with a node introducing the same vertex but with label i_2. This process is repeated for every leaf. We denote the arising fuse-k-expression of H by ψ. Now let x be a node of ψ and let v be a vertex of G^ψ_x. We say that v is a fuse-vertex at x if v participates in some fuse-operation above x, that is, there is an ancestor y of x (in ψ) such that y is a θ_ρ^*_x, y(i)-node. Note that first, since we have removed skippable fuse-nodes, if such a node y exists for x, then it is unique. And second, in this case all vertices of label i in G^ψ_x will participate in the fuse-operation. So we also say i is a fuse-label at x. 
Hence, instead of first, creating these vertices via introduce-nodes and then fusing them, we will introduce only one vertex representing the result of the fusion. And the creation of the edges incident with these vertices needs to be postponed until the moment where the new vertex is introduced. For this, we will store the label of the new vertex in the label set of the other end-vertex. But for postpone-purposes we will use labels from [k] to distinguish from the original labels. We now formalize this idea to obtain a multi-2k-expression ξ of H. In the following, the constructed expression will temporarily contain at the same time vertices with multiple labels and fuse-nodes, we call such an expression mixed. First, we mark all fuse-nodes in ψ as unprocessed and start with ξ := ψ. We proceed for every leaf ℓ of ψ as follows. Let v and i ∈ [k] be such that ℓ is a v⟨ i ⟩-node. If v is not a fuse-vertex at ℓ in ψ, we simply change the operation at ℓ in ξ to be 1 ⟨{i}⟩. Otherwise, let x be the fuse-node in ψ in which v participates. Note that since we have suppressed skippable fuse-nodes, such a node x is unique. Let i ∈ [k] be such that x is a θ_i-node. First, we remove the leaf ℓ from ξ and suppress its parent in ξ. Note that since the parent of ℓ is ψ is a union-node, the mixed expression remains well-formed. Second, if x is marked as unprocessed, we replace the operation at x in ξ to be a union, add a new 1 ⟨{i}⟩-node as a child of x, and mark x as processed. We refer to the introduce-nodes created in this process as well as to the vertices introduced by these nodes as new. Observe that first, the arising mixed expression does not contain any fuse-nodes. Second, the set of leaves of ξ is now in bijection with the set of vertices of H. Also, the set of edges induced by vertices, that do not participate in any fuse-operation in ψ, has not been affected. So it remains to correctly create the edges for which at least one end-point is new. This will be handled by adapting the label sets of vertices. First, for every i ≠ j ∈ [k], every ρ_i → j-node is replaced with a path consisting of a ρ_i →{j}-node and a ρ_i→{j}-node. Now let i ≠ j ∈ [k] and let x be a η_i, j-node in ξ. In order to correctly handle the join-operation, we make a case distinction. If both i and j are not fuse-labels at x in ψ, we skip x. Next, assume that exactly one of the labels i and j, say i, is a fuse-label at x in ψ. Then we replace the operation in x in ξ with ρ_j →{j, i} to store the information about the postponed edges in the vertices of label j. From now on, we may assume that both i and j are fuse-labels at x in ψ. Observe that x creates only one edge of H since all vertices of label i (resp. j) are fused into one vertex later in ψ. Let x_i (resp. x_j) be the ancestor of x in ψ such that x_i (resp. x_j) is a θ_p_i-node (resp. θ_p_j-node) where p_i = ρ^*_x, x_i(i) (resp. p_j = ρ^*_x, x_j(j)). Since we have suppressed skippable fuse-nodes, the nodes x_i and x_j are unique. By our construction, x_i (resp. x_j) is in ξ a union-node that has a child y_i (resp. y_j) being an introduce-node. Without loss of generality, we may assume that x_i is above x_j in ξ. Then, we store the information about the postponed edge in y_j as follows. Let S ⊆ [k] ∪[k] be the label set such that y_j is currently a 1 ⟨ S ⟩-node. Note: initially S consists of a single label p_j but after processing several join-nodes, this is not the case in general. We now replace the operation in y_j with 1 ⟨ S ∪{ρ^*_x,x_j(i)}⟩. 
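For readers following the bookkeeping in this construction, the map ρ^*_{x,y} is nothing more than the composition of the relabelings met on the path from x to y. The following minimal Python sketch makes this explicit; the function name and the representation of the relabel sequence are purely illustrative and do not appear in the paper.

```python
def rho_star(relabels, i):
    """Track label i along the path from a node x to an ancestor y.

    `relabels` lists the inner relabel-nodes rho_{s_j -> r_j} on that path as
    pairs (s_j, r_j), in the order they occur; the result is the label that a
    vertex labeled i at x carries right before the operation at y is applied.
    """
    for s, r in relabels:
        if i == s:
            i = r
    return i

# Example: rho_{2->5} followed by rho_{5->1} sends label 2 to 1 and fixes label 3.
assert rho_star([(2, 5), (5, 1)], 2) == 1
assert rho_star([(2, 5), (5, 1)], 3) == 3
```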
After all join-nodes are processed, we create the postponed edges at every new introduce-node x of ξ as follows. Let y be the parent of x in ξ and let S ⊆ [k] be such that x is an 1 ⟨ S ⟩-node. By construction, there exists a unique label i ∈ [k] ∩ S. Then right above y, we add the sequence ρ_i→∅∘η_i, i and we refer to this sequence together with y as the postponed sequence of x. This concludes the construction of a multi-2k-expression, say α, of H. It can be verified that we have not changed the construction of Fürer <cit.> but only stated it in terms of multi-clique-width. Therefore, the construction is correct. Now as promised, we argue that the number of required labels can be decreased to k + 1. Before formally stating this, we provide an intuition. First, observe that moving from ξ to α, we did not change the unique label from [k] kept by each vertex at any step, only the labels from [k] have been affected. We claim that for i ∈ [k], both labels i and i may appear in a subgraph G^α_y only in very restricted cases, namely when y belongs to a postponed sequence of a new introduce-node. We now sketch why this is the case. Let x be a node such that G^α_x contains a vertex with label i. This can only occur if i is a fuse-label at x in ψ, i.e., there exists a unique fuse-node z such that z is an ancestor of x and the vertices from G^ψ_x of label i participate in the fuse at z. By the construction of ξ, all introduce-nodes creating these vertices have been removed so G^α_z contains a unique vertex holding the label j := ρ^*_x, z(i), namely the one introduced at its child, say t. Then in the end of the postponed sequence of t, the label j is removed from all vertices. So the only moment where both labels j and j occur is during the postponed sequence of t. Also note that postponed sequences do not overlap so if such j exists, then it is unique. This is formalized as follows. Let y be a node in α and let i ∈ [k] be such that the labeled graph G^α_y contains a vertex containing label i and a vertex containing label i. Then y belongs to the postponed sequence of some new 1 ⟨ S ⟩-node x with S ⊆ [k] ∪[k] and i ∈ S. Moreover, the only vertex in G^α_y containing label i is the vertex introduced at x. In particular, since the postponed sequences for distinct nodes are disjoint by construction, for every j ≠ i ∈ [k], the graph G^α_y does not contain a vertex containing label j or it does not contain a vertex containing label j. So up to postponed sequences, we can unify the labels i and i for every i ∈ [k]. And inside postponed sequences, we will use an additional label ⋆ to distinguish between i and i. So we process new introduce-nodes as follows. Let x be a new 1 ⟨ S ⟩-node for some S ⊆ [k] ∪[k] and let i ∈ [k] be the unique value in S ∖[k]. We replace the operation in x with a 1 ⟨ S ∖{i}∪{⋆}⟩ and we replace the postponed sequence of x with the sequence ρ_⋆→ i∘ρ_i→∅∘η_i, ⋆∘⊕. After processing all new introduce-nodes, we replace every occurrence of label i with label i for all i ∈ [k]. The new multi-expression uses k+1 labels and by the above observation, it still creates H. Also it can be easily seen that the whole transformation can be carried out in polynomial time. § REDUCED GLUE-EXPRESSIONS In this section, we show that a fuse-k-expression can be transformed into a so-called reduced glue-k-expression of the same graph in polynomial time. Such expressions will provide an alternative view on fusion-width helpful for algorithms. We formally define them later. 
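Before the transformations of this section, it may help to have a concrete picture of the objects being manipulated. The following Python sketch of k-labeled graphs and the introduce, union, join, relabel, and fuse operations is a direct transcription of the definitions from the preliminaries; the representation and all names are illustrative only and appear nowhere in the paper.

```python
from dataclasses import dataclass

@dataclass
class LabeledGraph:
    lab: dict    # vertex title -> label in [k]
    edges: set   # set of frozenset({u, v}) edges

def introduce(v, i):                    # v<i>: a single vertex v with label i
    return LabeledGraph({v: i}, set())

def union(g1, g2):                      # disjoint union, labels preserved
    assert not (g1.lab.keys() & g2.lab.keys()), "operands must be vertex-disjoint"
    return LabeledGraph({**g1.lab, **g2.lab}, g1.edges | g2.edges)

def join(g, i, j):                      # eta_{i,j}: all edges between labels i and j
    new = {frozenset({u, v}) for u in g.lab for v in g.lab
           if u != v and g.lab[u] == i and g.lab[v] == j}
    return LabeledGraph(dict(g.lab), g.edges | new)

def relabel(g, i, j):                   # rho_{i->j}: every vertex of label i gets label j
    return LabeledGraph({v: (j if l == i else l) for v, l in g.lab.items()},
                        set(g.edges))

def fuse(g, i):                         # theta_i: merge all vertices of label i
    fused = [v for v, l in g.lab.items() if l == i]
    assert fused, "theta_i requires at least one vertex of label i"
    rep = fused[0]                      # keep one representative, labeled i

    def m(v):
        return rep if g.lab[v] == i else v

    lab = {m(v): l for v, l in g.lab.items()}
    edges = {frozenset({m(u), m(v)}) for u, v in map(tuple, g.edges) if m(u) != m(v)}
    return LabeledGraph(lab, edges)

# Example: two disjoint edges u1-w1 and u2-w2 whose endpoints w1, w2 carry label 3;
# fusing label 3 identifies w1 and w2, yielding the path u1 - w1 - u2.
e1 = join(union(introduce("u1", 1), introduce("w1", 3)), 1, 3)
e2 = join(union(introduce("u2", 2), introduce("w2", 3)), 2, 3)
path = fuse(union(e1, e2), 3)
```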
In the following, we assume that the titles used in introduce-nodes of a fuse-expression are pairwise distinct. Along this section, the number of labels is denoted by k and polynomial is a shorthand for “polynomial in the size of the expression and k”. To avoid edge cases, we will assume that any expression in this section does not contain any useless nodes in the following sense. If a join-node does not create new edges, it is suppressed. Similarly, if a fuse-node fuses at most one node, it is suppressed. Also during our construction, the nodes of form ρ_i → i might arise, they are also suppressed. Further, if ρ_i → j is applied to a labeled graph with no vertices of label i, it is suppressed. Clearly, useless nodes can be found and suppressed in polynomial time. For this reason, from now on we always implicitly assume that useless nodes are not present. We say that fuse-expressions ϕ_1 and ϕ_2 are equivalent if there exists a label-preserving isomorphism between G^ϕ_1 and G^ϕ_2. In this section, we provide rules allowing to replace sub-expressions with equivalent ones. For simplicity, the arising expression will often be denoted by the same symbol as the original one. The following equivalencies can be verified straight-forwardly. Although some of them might seem to be unnatural to use at first sight, they will be applied in the proofs of <ref>. Let k ∈, let H be a k-labeled graph, let q ∈, let i, j, a, b, a_1, …, a_q ∈ [k] be integers, let H_1, H_2 be k-labeled graphs, and let v be a title. Then the following holds if none of the operators on the left-hand side is useless: * θ_i ∘η_a, b (H) = η_a, b∘θ_i(H); * If i ∉{a, b}, then θ_i ∘ρ_a → b(H) = ρ_a → b∘θ_i(H); * If a, b ∈{a_1, …, a_q, i}, then: θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i∘η_a, b(H) = θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i(H); * If a ∈{a_1, …, a_q, i} and b ∉{a_1, …, a_q, i}, then: θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i∘η_a, b(H) = η_i, b∘θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i(H); * If a, b ∉{a_1, …, a_q, i}, then: θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i∘η_a, b(H) = η_a, b∘θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i(H); * If a, b ∉{a_1, …, a_q, i}, then θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i∘ρ_a, b(H) = ρ_a, b∘θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i(H); * If b ∈{a_1, …, a_q}, then θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i∘ρ_a → b(H) = θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i∘ρ_a → i(H); * If b ∉{a_1, …, a_q, i}, then θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i∘ρ_i → b(H) = ρ_a_1 → i∘θ_a_1∘ρ_a_2 → a_1∘…∘ρ_a_q → a_1∘ρ_i → b(H); * θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i (H_1 ⊕ H_2) = θ_i ((ρ_a_1 → i∘…∘ρ_a_q → i(H_1)) ⊕(ρ_a_1 → i∘…∘ρ_a_q → i(H_2)) ); * If b ∈{a_1, …, a_q, i}, then: θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i∘θ_b(H) = θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i (H); * If b ∉{a_1, …, a_q, i}, then: θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i∘θ_b(H) = θ_b ∘θ_i ∘ρ_a_1 → i∘…∘ρ_a_q → i (H); * ρ_a → j∘ v ⟨ a ⟩ = v ⟨ j ⟩; * η_i, j∘ρ_a → i (H) = ρ_a → i∘η_i, j∘η_a, j (H); * If a, b ∉{i, j}, then: η_i, j∘ρ_a → b (H) = ρ_a → b∘η_i,j (H). We fix some notation. Let t be a fuse-node in some fuse-expression ϕ. Since t is not useless, there is at least one successor of t being a union-node. The union-nodes are the only nodes with more than one child so there exists a unique topmost successor of t being a union-node, we denote it by t_⊕. The children of t_⊕ are denoted by t_1 and t_2. For a node t, we call the maximum number of union-nodes on a path from t to any leaf in the subtree rooted at t the ⊕-height of t. Informally speaking, a fuse-expression we want to achieve in this section has the following two properties. First, for any pair of distinct vertices that are fused at some point, their fuse happens “as early as possible”. 
Namely, two vertices are fused right after the earliest union these vertices participate in together: in particular, these vertices come from different sides of the union. This will allow us to replace a sequence of fuse-nodes by a so-called glue-node that carries out non-disjoint union of two graphs under a certain restriction. Second, we want that each edge of the graph is created exactly once. We split the transformation into several steps. In the very first step we shift every fuse-node to the closest union-node below it (see <ref>. Let k ∈ and let ϕ be a fuse-k-expression. Then in time polynomial in |ϕ| + k we can compute a fuse-k-expression of the same labeled graph such that for every fuse node t, every inner node on the path from t to t_⊕ is a fuse-node. We start with ϕ and transform it as long as there is a fuse-node violating the property of the lemma. We say that such a node is simply violating. If there are multiple such nodes, we choose a node t to be processed such that every fuse-node being a proper successor of t satisfies the desired property. Since any parse tree is acyclic, such a node exists. So let t be a fuse-node to be processed and let i ∈ [k] be such that t is a θ_i-node. We will shift t to t_⊕ by applying the rules from <ref> to t and its successors as follows. When we achieve that the child of t is a union- or a fuse-node, we are done with processing t. While processing t, with t_c we always refer to the current child of t and by α we denote the operation in t. Recall that t is not useless so as long as t is processed, α is a join or a relabel. If α is a join-node, then we apply the rule from <ref> to swap t and t_c. Otherwise, we have α = ρ_a → b for some a ≠ b ∈ [k]. We proceed depending on the values of a and b. If a ≠ i and b ≠ i, then the rule from <ref> is applied to swap t and α. If a = i and b ≠ i (resp. and b = i), then t (resp. t_c) would be useless so this is not the case. We are left with the case a ≠ i and b = i, i.e., α = ρ_a → i. Note that here we cannot simply swap the nodes t and t_c since the vertices that have label a at the child of t_c also participate in the fuse at t. So this is where we will have to apply the rules from <ref> to longer sequences of nodes. From now on, we always consider the maximal sequence (t_0 = t), t_1, …, t_q (for some q ∈) such that for every j ∈ [q], the node t_j is a relabel-node ρ_a_j → i for some a_j ∈ [k] and t_j is a child of t_j-1. In particular, we have t_1 = t_c. Since the nodes are not useless, the values {a_1, …, a_q} are pairwise distinct. Let t' be the child of t_q. If t' is a join-node, then depending on the joined labels, we apply one of the rules from <ref> to either suppress t' (see <ref> (a)) or shift it above t with possibly changing the labels joined in t' (see <ref> (b)). If t' is a relabel-node, let c, d ∈ [k] be such that it is a ρ_c → d-node. By maximality, we have d ≠ i. Now depending on c and d, we can apply one of the rules from <ref>. In the case of <ref>, the length of the maximal sequence t_0, …, t_q increases. In the cases of <ref>, the height of t decreases. So in any case we make progress. If t' is a union-node, we apply the rule from <ref>. Now we may assume that t' is a fuse-node. Observe that while processing t we have not affected the subtree rooted at t' all inner nodes on the path from t' to t'_⊕ are fuse-nodes. So there exist r ∈ with r > q, the nodes (t' = t_q+1), …, t_r, and values b_q+1, …, b_r-1∈ [k] with the following two properties. 
First, for every j ∈ [q+1, r-1], the node t_j is a θ_b_j-node while t_r is a union-node. And second, for every j ∈ [q+1, r], the node t_j is a child of t_j-1. For ℓ = q+1, …, r we do the following to achieve that t_ℓ is the child of t_q. This holds at the beginning for ℓ = q+1. Now let ℓ > q + 1 and suppose this holds for ℓ - 1. Depending on b_ℓ we apply the rule from <ref> to the sequence (t = t_0), …, t_q, t_ℓ-1. This either suppresses t_ℓ-1 or shifts it to become the parent of t. In any case, t_ℓ becomes the child of t_q as desired. In the end, this holds for ℓ = r, i.e., the vertices t, t_1, …, t_q, t_r form a path in the parse tree (see <ref> (c)). Finally, we apply the rule <ref> to achieve that t is a parent of the union-node t_r, i.e., t now satisfies the desired condition (see <ref> (d)). This concludes the description of the algorithm processing t. Now we argue that the algorithm terminates and takes only polynomial time. We analyze the process for the node t and then conclude about the whole algorithm. It can be verified that every application of a rule either decreases the height of t or increases q. The latter case can only occur at most k times: if q > k, then at least one of t_1, …, t_q would be redundant. So only a polynomial number of rules is applied until t satisfies the property of the lemma. The application of any of these rules increases neither the height nor the number of leaves of the parse tree. On the other hand, suppressing a useless node below t decreases the height of t as well. So to conclude the proof, it suffices to show that for any fuse-node s, if s satisfied the property of the lemma before processing t, then this still holds after processing t, i.e., the number of violating fuse-nodes decreases. While processing the node t, some fuse-node t^* ∈{t_q+1, …, t_r-1} might violate our desired property when this node is shifted to become the parent of t after the application of the rule from <ref>. But observe that after processing t, the path between t^* and t contains only fuse-nodes (which similarly to s have been shifted there as a result of the rule from <ref>) and the child of t is a union-node. So t^* again satisfies the desired condition. Therefore, every fuse-node is processed at most once and no new fuse-nodes are created. There is a linear number of fuse-nodes and a single application of any rule from <ref> can be accomplished in polynomial time. Above we have argued that per fuse-node, the number of rule applications is polynomial. Altogether, the algorithm runs in polynomial time. As the next step, we will shift the fuse-nodes further below so that every fuse-node t fuses exactly two vertices, namely one from G_t_1 with another from G_t_2. Let k ∈ and let ϕ be a fuse-k-expression. Then in time polynomial in |ϕ| + k we can compute a fuse-k-expression of the same labeled graph such that for every fuse node t, the following holds. First, every inner node on the path from t to t_⊕ is a fuse-node. Second, let i ∈ [k] be such that t is a θ_i-node. Then for every pair u ≠ v ∈_t^-1(i), it holds that |{u, v}∩ V(G_t_1)| = |{u, v}∩ V(G_t_2)| = 1. In particular, we have |_t^-1(i)| = 2. First of all, we apply <ref> to transform ϕ into a fuse-k-expression of the same labeled graph satisfying the properties of that lemma in polynomial time. We still denote the arising fuse-expression by ϕ for simplicity. We will now describe how to transform ϕ to achieve the desired property. 
We will process fuse-nodes one by one and as invariant, we will maintain that after processing any number of fuse-nodes, the expression still satisfies <ref>. As long as ϕ does not satisfy the desired property, there is a fuse-node t such that at least two fused vertices u and v come from the same side, say G_t_1 of t_⊕. We call t violating in this case. Since ϕ satisfies <ref>, all inner nodes on the path between t and t_⊕ are fuse-nodes. The vertices u and v have therefore the same label in G_t_1 and they can already be fused before the union. The way we do it might increase the number of fuse-nodes in the expression so we have to proceed carefully to ensure the desired time complexity. Now we formalize this idea. Among violating fuse-nodes, we always pick a node t with the largest ⊕-height to be processed. Let i ∈ [k] be such that t is a θ_i-node. First, we subdivide the edge t_⊕ t_1 resp. t_⊕ t_2 with a fresh θ_i-node t_1' resp. t_2' (see <ref> (a)). Clearly, this does not change the arising labeled graph and t now fuses at most two vertices: at most one from each side of the union. Now the following sets of nodes may become useless: {t_1', t}, {t_2', t}, {t_1'}, {t_2'}, or ∅, these nodes are therefore suppressed. In particular, if the node t is not suppressed, then it is not violating anymore. Let now ∅≠ S ⊆{t_1', t_2'} denote the set of non-suppressed nodes in {t_1', t_2'}. These nodes now potentially violate <ref>. So for every node x' in S, we proceed as in the proof of <ref> to “shift” x' to x'_⊕ to achieve that all nodes between x' and x'_⊕ are fuse-nodes (see <ref> (b)). Note that this shift only affects the path from x' to x'_⊕. Observe that every node x' ∈ S has strictly smaller ⊕-height than t. Thus the order of processing violating fuse-nodes implies that after processing any node, the value (a, b) lexicographically decreases where a denotes the maximum ⊕-height over violating fuse-nodes and b denotes the number of violating nodes with ⊕-height a. Indeed, if t was the only violating node with the maximum ⊕-height, then a decreases after this process. Otherwise, a remains the same, and b decreases. Since no new introduce-nodes are created, the maximum ⊕-height does not increase and the value of a is always upper-bounded by |ϕ|. Further, recall that after processing a fuse-node, the expression again satisfies <ref>, i.e., all inner nodes of a path from any fuse-node s to s_⊕ are fuse-nodes. Let us map every fuse-node s to s_⊕. The expression never contains useless fuse-nodes so at most k nodes (i.e., one per label) are mapped to any union-node and the value of b never exceeds k |ϕ|. Therefore, the whole process terminates after processing at most k |ϕ|^2 fuse-nodes. Next observe that none of the rules from <ref> increases the length of some root-to-leaf path. Thus, processing a fuse-node t might increase the maximum length of a root-to-leaf path by at most one, namely due to the creation of nodes t_1' and t_2'. Since on any root-to-leaf path there are at most |ϕ| ⊕-nodes and there are no useless fuse-nodes, there are at most k |ϕ| fuse-nodes on any root-to-leaf path at any moment. Initially, the length of any root-to-leaf path is bounded by |ϕ| and during the process it increases by at most one for any fuse-node on it. Hence, the length of any root-to-leaf path is always bounded by (k+1)|ϕ|. Altogether, processing a single fuse-node can be done in time polynomial in k and |ϕ| and the running time of the algorithm is polynomial in k and |ϕ|. 
Now we may assume that a fuse-expression looks as follows: every union-operation is followed by a sequence of fuse-nodes and each fuse-nodes fuses exactly two vertices from different sides of the corresponding union. Thus, we can see the sequence consisting of a union-node and following fuse-nodes as a union of two graphs that are not necessarily vertex-disjoint. So these graphs are glued at the pairs of fused vertices. Now we formalize this notion. A glue-k-expression is a well-formed expression constructed from introduce-, join-, relabel-, and glue-nodes using k labels. A glue-operation takes as input two k-labeled graphs (H_1, _1) and (H_2, _2) satisfying the following two properties: * For every v ∈ V(H_1) ∩ V(H_2), the vertex v has the same label in H_1 and H_2, i.e., we have _1(v) = _2(v). * For every v ∈ V(H_1) ∩ V(H_2) and every j ∈ [2], the vertex v is the unique vertex with its label in H_j, i.e., we have |_1^-1(_1(v))| = |_2^-1(_2(v))| = 1 In this case, we call the k-labeled graphs H_1 and H_2 glueable. The output of this operation is then the union of these graphs denoted by H_1 ⊔ H_2, i.e., the labeled graph (H, ) with V(H) = V(H_1) ∪ V(H_2) and E(H) = E(H_1) ∪ E(H_2) where the vertex-labels are preserved, i.e., (v) = _1(v) if v ∈ V(H_1) _2(v) if v ∈ V(H_2) . We denote the arising k-labeled graph with H_1 ⊔ H_2 (omitting for simplicity) and we call the vertices in V(H_1) ∩ V(H_2) glue-vertices. Unlike fuse-expressions, if t is a node of a glue-expression ϕ, then G^ϕ_t is a subgraph of G^ϕ. Let k ∈ and let ϕ be a fuse-k-expression. Then in time polynomial in |ϕ| + k we can compute a glue-k-expression of the same labeled graph. In polynomial time we can obtain a fuse-k-expression satisfying <ref> that creates the same graph. For simplicity, we still denote this expression by ϕ. We assume that the introduce-nodes of ϕ use pairwise distinct titles. For titles v and w, by identification of v with w we denote the operation that for every i ∈ [k], replaces every leaf v ⟨ i ⟩ of the current expression with a leaf w ⟨ i ⟩. Informally speaking, our goal is to assign the vertices that are fused at some point in the expression the same title. Then such vertices will be “automatically” glued by a glue-node. We start with α := ϕ and α will always denote current “mixed” expression, i.e., it potentially contains union-, fuse-, and glue-nodes simultaneously. We process ⊕-nodes in the order of increasing ⊕-height as follows. Let t be the union-node to be processed. Let f^t_1, …, f^t_p for some p ∈_0 denote the maximal sequence of predecessors of t in the parse tree of α such that f^t_1, …, f^t_p are fuse-nodes and t, f^t_1, …, f^t_p form a path in α. Simply speaking, f^t_1, …, f^t_p are exactly the fuse-operations following t. For j ∈ [p], let i^t_j ∈ [k] be such that f^t_j is a θ_i^t_j-node. If p > 0, we denote f^t_p by t_θ for simplicity. If p = 0, then with t_θ we denote t itself. If t is clear from the context, we sometimes omit this superscript. We will replace the path t, f^t_1, …, f^t_p with a single glue-node denoted by t_⊔ and identify some titles in the sub-expression of α rooted at t so that we maintain the following invariant. For every union-node t of ϕ such that t has already been processed, the labeled graphs G^ϕ_t_θ and G^α_t_⊔ are isomorphic. This has the following implication. For every node t in α such that t is not a glue-node, the labeled graphs G^α_t and G^ϕ_t are isomorphic. 
Up to some formalities, this property just ensures that all sub-expressions still create the same labeled graph. Let s_1 and s_2 be the children of t in α. The order of processing then implies that α_s_1 and α_s_2 are glue-k-expressions (i.e., contain no fuse-nodes). Let t_1 and t_2 be the children of t in ϕ. By invariant, the labeled graphs G_t_q^ϕ and G_s_q^α are isomorphic for each q ∈ [2]. So since ϕ satisfies <ref>, for every j ∈ [p] and q ∈ [2], there exists exactly one vertex v^q_i_j in G_t_q^α with label i_j. Now for every j ∈ [p], we identify v^1_i_j with v^2_i_j in α. Let ξ denote the arising expression. Now we replace the sequence t, f_1, …, f_p with a single glue-node denoted by t_⊔. And let ζ denote the constructed expression. We claim that G_t_⊔^ζ is isomorphic to G^α_t_θ. First, note that since union-nodes are processed from bottom to top and in ϕ all titles are pairwise distinct, there was no title occurring in both α_t_1 and α_t_2. Therefore, after identifying v^1_i_j with v^2_i_j for every j ∈ [p], we still have that the labeled graph G^ξ_t_1 (resp. G^ξ_t_2) is isomorphic to the labeled graph G^α_t_1 (resp. G^α_t_2). In simple words, no identification has lead to the gluing of vertices inside G^α_t_1 or G^α_t_2. Moreover, the ordering of processed nodes implies that the titles other than v^2_i_1, …, v^2_i_p are pairwise distinct in ξ_t_1 and ξ_t_2. Therefore, the glue-node t_⊔ takes two labeled graphs G^ξ_t_1≅ G^α_t_1 and G^ξ_t_1≅ G^α_t_1 with shared vertices {v^2_i_1, …, v^2_i_p} and produces their union. Observe that this is the same as applying the sequence f_p ∘…∘ f_1 ∘ t to G^α_t_1 and G^α_t_2. Therefore, we have G^ζ_t ≅ G^α_t as desired. After t is processed, we set α := ζ to denote the current expression and move on to the next union-node (if exists). After all nodes are processed, the expression α contains neither union- nor fuse-nodes. So α is a glue-k-expression such that (by invariant) G^α≅ G^ϕ holds, i.e., α creates the same labeled graph. The number of union-nodes in ϕ is bounded by |ϕ| and processing a single node can be done in time polynomial in k and the number of leaves of ϕ (i.e., also bounded by |ϕ|). Hence, the transformation takes time polynomial in k and ϕ. Transforming a glue-k-expression into a fuse-k-expression is trivial: Replace every glue-node by a union-node followed by a sequence θ_i_1, …, θ_i_q of fuse-nodes where i_1, …, i_q are the labels of vertices shared by the glued graphs. This implies that fuse- and glue-expressions are equivalent and there is no reason to define “glue-width” as a new parameter. As a last step of our transformations, we show that similarly to the existence of irredundant k-expressions defined by Courcelle and Olariu <cit.> and widely used in dynamic-programming algorithms (e.g., <cit.>), certain irredundancy can be achieved for glue-expressions. Let k ∈ and let ϕ be a fuse-k-expression. Then in time polynomial in |ϕ| + k we can compute a glue-k-expression ξ of the same labeled graph without useless nodes such that: * Let i, j ∈ [k], let t be a η_i, j-node in ξ, and let t' be the child of t in ξ. Then G^ξ_t' contains no edge {v, w} with ^ξ_t'(v) = i and ^ξ_t'(w) = j. * Let t be a glue-node in ξ and let t_1 and t_2 be its children. Then the graphs G^ξ_t_1 and G^ξ_t_2 are edge-disjoint. * Let t be a glue-node in ξ, let t_1 and t_2 be its children, and let v be a glue-vertex. Then for every q ∈ [2], the vertex v has an incident edge in G^ξ_t_q. We call a glue-k-expression satisfying these properties reduced. 
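As a sanity check of these definitions, the glueability test, the glue operation, and the two extra conditions imposed at glue-nodes of a reduced expression can be written down directly. The following sketch is illustrative only (it is not part of the paper), with a labeled graph given as a pair of a title-to-label dictionary and an edge set, mirroring the earlier sketch of the expression operations.

```python
def glueable(g1, g2):
    """Preconditions of the glue operation: every shared vertex has the same
    label in both operands and is the unique vertex of that label in each."""
    lab1, _ = g1
    lab2, _ = g2
    for v in lab1.keys() & lab2.keys():
        if lab1[v] != lab2[v]:
            return False
        if sum(1 for l in lab1.values() if l == lab1[v]) != 1:
            return False
        if sum(1 for l in lab2.values() if l == lab2[v]) != 1:
            return False
    return True

def glue(g1, g2):
    """H^1 glued with H^2: union of vertex and edge sets, labels preserved."""
    assert glueable(g1, g2)
    (lab1, e1), (lab2, e2) = g1, g2
    return ({**lab1, **lab2}, e1 | e2)

def reduced_glue_node_ok(g1, g2):
    """Extra conditions at a glue-node of a *reduced* expression: the operands
    are edge-disjoint and every glue-vertex has an incident edge on both sides."""
    (lab1, e1), (lab2, e2) = g1, g2
    if e1 & e2:
        return False
    glue_vertices = lab1.keys() & lab2.keys()
    return all(any(v in e for e in e1) and any(v in e for e in e2)
               for v in glue_vertices)
```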
First, we apply <ref> to obtain a glue-k-expression ϕ of the same graph in polynomial time. As in the previous proofs of this section, we will transform ϕ iteratively until it satisfies the desired properties. In the first phase, as long as there is a join-node t and an edge {v, w} violating the first property, we proceed as follows. There exists a successor t' of t in ϕ such that t' is a η_i', j'-node for some i', j' ∈ [k], the vertices v and w are the vertices of G_t', and it holds that ^ϕ_t'(v) = i' and ^ϕ_t'(w) = j'. There can be multiple such nodes t' so we fix an arbitrary one. We suppress the node t'. Let ϕ' denote the arising expression. Note that once two vertices have the same label, this property is maintained along the expression. So similarly to the construction of irredundant clique-expressions (see <cit.>), it holds that every edge e' created by t' is also created by t. Formally, the following holds. Since t' is a successor of t, for all v' ∈ V(G^ϕ_t') the property ^ϕ_t'(v') = ^ϕ_t'(v) implies ^ϕ_t(v') = ^ϕ_t(v) and hence, also ^ϕ'_t(v') = ^ϕ'_t(v). The analogous statement holds for vertices w' ∈ V(G^ϕ_t') with ^ϕ_t'(w') = ^ϕ_t'(w). Therefore, the labeled graphs G^ϕ and G^ϕ' are isomorphic. Now we set ϕ := ϕ', and the process is repeated until ϕ satisfies the first property. As mentioned above, the node t' is not necessarily unique for t so after t' is suppressed, t and {v, w} can still violate the first property of the lemma. The number of join-nodes decreased by one though. Therefore, the process terminates after at most |ϕ| steps and it results in a glue-k-expression of the same labeled graph. Clearly, each step takes only polynomial time so the running time of this transformation is polynomial. In the second phase, we proceed similarly to satisfy the second property. As long as there exist a glue-node t and an edge {v, w}∈ E(G^ϕ_t_1) ∩ E(G^ϕ_t_2) violating the second property, we proceed as follows. Note that v and w are then glue-vertices. There exists a successor t' of t_1 such that t' is a η_i', j'-node for some i', j' ∈ [k], the vertices v and w are the vertices of G_t', and it holds that ^ϕ_t'(v) = i' and ^ϕ_t'(w) = j'. We claim that e is then the only edge created by t', i.e., v (resp. w) is the unique vertex with label i' (resp. j') in G^ϕ_t'. Suppose not, then without loss of generality, there exists a vertex v' ≠ v in G^ϕ_t' with label i'. Then the vertices v and v' would also have the same label in G^ϕ_t_1. But the glueabilty of G_t_1^ϕ and G_t_2^ϕ implies that v is the unique vertex with label (^ϕ_t_1)^-1(v) in G^ϕ_t_1 – a contradiction. Let now ϕ' denote the expression arising from ϕ by suppressing t'. Then it holds G^ϕ'_t_1 = G^ϕ_t_1 - e. Therefore, G^ϕ'_t = G^ϕ_t and also G^ϕ' = G^ϕ. Now we set ϕ := ϕ' and repeat the process until ϕ satisfies the second property. Since in each repetition the expression loses one join-node, the process terminates after at most |ϕ| steps. Also, one step takes only polynomial time so the total running time is also polynomial. Clearly, the first condition remains satisfied during the process. Now we move on to the third property. Let t and v violate it, i.e., without loss of generality v has no incident edge in G^ϕ_t_1. The crucial observation for the transformation described below is that since v also belongs to G^ϕ_t_2, it holds that G^ϕ_t_1⊔ G^ϕ_t_2 = (G^ϕ_t_1 - {v}) ⊔ G^ϕ_t_2 (in particular, the glueability precondition applies to the right side are as well). 
Hence, we want to transform the sub-expression rooted at t_1 so that it does not create the vertex v. For this, we simply need to cut off all introduce-nodes using title v from this sub-expression. Formally, we start with ϕ' := ϕ and as long as ϕ'_t_1 contains a leaf ℓ being a v⟨ i ⟩-node for some i ∈ [k]: * As long as the parent t' of ℓ is not a glue-node we repeat the following. Since there are no useless nodes, t' is a ρ_i → j-node so we apply the rule from <ref> to suppress it. * Now t' is a glue-node so we remove ℓ and suppress t'. Note that when the last item is reached, the parent t' is a glue-node whose one side of the input is the graph that consists of a single vertex v. A simple inductive argument shows that G^ϕ'_t_1 = G^ϕ_t_1 - {v} holds as desired. Now we set ϕ := ϕ' and repeat until the third property is satisfied. Clearly, the first two properties are maintained and since in each repetition the size of the expression decreases by at least one, the process take only polynomial time. It is known that for clique-expressions, the number of leaves of an expression is equal to the number of vertices in the arising graph. For fuse-, and hence, glue-expressions, the situation is different. Since a fuse-node, in general, decreases the number of vertices in the arising graph, the number of leaves in a fuse-expression can be unbounded. However, we now show that the number of leaves of a reduced glue-expression is bounded by 𝒪(m+n) where n resp. m is the number of vertices resp. edges of the arising graph. This will lead to an upper bound of 𝒪(k^2(m+n)) on the number of nodes of a reduced glue-expression. Let ϕ be a fuse-k-expression of a graph H on n vertices and m edges. Then in time polynomial in |ϕ| and k we can compute a reduced glue-k-expression ζ of H such that the parse tree of ζ contains 𝒪(k^2(m+n)) nodes. By <ref>, in polynomial time we can compute a reduced glue-k-expression ξ of the same labeled graph. Let L(ξ) denote the set of leaves of ξ. We start by bounding the size of this set. Let H = G^ξ. We will now define a mapping h: L(ξ) → V(H) ∪ E(H) and then show some properties of it. Let ℓ∈ L(ξ) and let v ∈ V(H) and i ∈ [k] be such that ℓ is a v⟨ i ⟩-node. If no other leaf in L(ξ) has title v, then we simply set h(ℓ) = v. Otherwise, there exists at least one other leaf with title v. Hence, there exists at least one glue-node t such that v is a glue-vertex at t. Note that any such t is an ancestor of ℓ. So let g(ℓ) denote the bottommost among such glue-nodes and let s_1 and s_2 be its children. Without loss of generality, we assume that ℓ belongs to the sub-expression rooted at s_1. The reducedness of ξ implies that there is an edge e ∈ E(G^ξ_s_1) ∖ E(G^ξ_s_2) incident with v. We set h(ℓ) = e. In particular we have e ∈ G^ξ_g(ℓ). Observe that any vertex is mapped by h either to itself or to an incident edge. Now let e be an edge of H and let v be one of its end-vertices. We claim that there exists at most one leaf with title v mapped to e. For the sake of contradiction, suppose there exist leaves ℓ_1 ≠ℓ_2 and i_1, i_2 ∈ [k] such that ℓ_1 (resp. ℓ_2) is a v⟨ i_1 ⟩-node (resp. v⟨ i_2 ⟩-node) and h(ℓ_1) = h(ℓ_2) = e. Then let t denote the lowest common ancestor of g(ℓ_1) and g(ℓ_2). Let q ∈ [2] be arbitrary. Let t_q be the child of t such that g(ℓ_q) is a (not necessarily proper) descendant of t_q. The property h(ℓ_q) = e implies that e ∈ E(G^ξ_g(ℓ_q)) ⊆ E(G^ξ_t_q) holds. Therefore, we obtain e ∈ E(G^ξ_t_1) ∩ E(G^ξ_t_2) contradicting the reducedness of ξ. 
So there indeed exists at most one leaf in ξ with title v mapped to e. Since e has two end-points, there are at most two leaves of ξ mapped to e. Finally, for every vertex u of H in the image of h, we know that there exists exactly one leaf with title u so there exists at most one leaf of ξ mapped to u. These two properties imply that the size of preimage of h, i.e., the size of L(ξ) is bounded by 2m + n. It is folklore that a rooted tree with at most 2m+n leaves has 𝒪(m+n) inner nodes with at least two children. Hence, ξ has 𝒪(m+n) glue-nodes. Now we will apply a simple folklore trick (originally applied to clique-expressions) to bound the number of nodes between any two consecutive glue-nodes. For this, we apply rules from <ref> of <ref> to ensure that for any two consecutive glue-nodes t_1 and t_2 (where t_1 is an ancestor of t_2), the path from t_2 to t_1 first contains join-nodes and then relabel-nodes. Any duplicate on this path would be useless. Therefore, this path contains 𝒪(k^2) relabel- and join-nodes (at most one per possible operation). Finally, by applying the rule from <ref> of <ref> we can ensure that a parent of any introduce-node is a glue-node. Let ζ be the arising glue-expression. Clearly, ζ is still reduced. The number of relabel- and join-nodes in ζ is now bounded by 𝒪(k^2(m+n)) and so the total numbers of nodes is also bounded by this value as claimed. § ALGORITHMS PARAMETERIZED BY FUSION-WIDTH In this section, we parameterize by fusion-width. We will present three algorithms for problems W[1]-hard when parameterized by clique-width, these are Hamiltonian Cycle, Max Cut, and Edge Dominating Set. The algorithms are XP-algorithms and they have the same running time as the tight (under ETH) algorithms parameterized by clique-width. Fusion-width is upper-bounded by clique-width and this has two implications. First, the lower bounds from clique-width apply to fusion-width as well, and hence, our algorithms are also ETH-tight. And second, our results show that these problems can be solved for a larger class of graphs within the same running time. Each of the following algorithms gets a fuse-k-expression of the input graph and as the very first step transforms it into a reduced glue-k-expression of the same graph in polynomial time (cf. <ref>). By <ref>, the size of the expression is then linear in the size of the graph. §.§ Max Cut In this problem, given a graph G = (V, E) we are asked about the maximum cardinality of E_G(V_1, V_2) over all partitions (V_1, V_2) of V. In this subsection we solve the Max Cut problem in time n^𝒪(k) given a fuse-k-expression of a graph. For this, we will rely on the clique-width algorithm by Fomin et al. and use the same dynamic-programming tables <cit.>. Then, it suffices to only show how to handle glue-nodes. Later, in the description of the algorithm, we also sketch why general fuse-expressions (i.e., with unrestricted fuse-nodes) seem to be problematic for the algorithm. Let H be a k-labeled graph. The table T_H contains all vectors h = (s_1, …, s_k, r) with 0 ≤ s_i ≤ |U^H_i| for every i ∈ [k] and 0 ≤ r ≤ |E(H)| for which there exists a partition (V_1, V_2) of V(H) such that |V_1 ∩ U^H_i| = s_i for every i ∈ [k] and there are at least r edges between V_1 and V_2 in H. We say that the partition (V_1, V_2) witnesses the vector h. For their algorithm, Fomin et al. provide how to compute these tables for nodes of a k-expression of the input graph. 
In particular, they show that this table can be computed for a k-labeled graph H with n vertices in time n^𝒪(k) in the following cases: * If H consists of a single vertex. * If H = ρ_i → j(H') for some i, j ∈ [k] and a k-labeled graph H' given T_H'. * If H = η_i, j(H') for some i ≠ j ∈ [k] and a k-labeled graph H' given T_H'. The correctness of their algorithm requires that a k-expression is irredundant, i.e., no join-node creates an already existing edge. Our extended algorithm will process a reduced glue-expression and the first property in <ref> will ensure that this property holds so the approach of Fomin et el. can indeed be adopted for the above three types of nodes. First, let us mention that processing a fuse-node θ_i seems problematic (at least using the same records). Some vertices of label i might have common neighbors. So when fusing these vertices, multiple edges fall together but we do not know how many of them so it is unclear how to update the records correctly. For this reason, we make use of glue-expressions where, as we will see, the information stored in the records suffices. To complete the algorithm for the fusion-width parameterization, we provide a way to compute the table T_H if H = H^1 ⊔ H^2 for two glueable edge-disjoint k-labeled graphs H^1 and H^2 if the tables T_H^1 and T_H^2 are provided. Let {v_1, …, v_q} = V(H^1) ∩ V(H^2) for some q ∈_0 and let i_1, …, i_q be the labels of v_1, …, v_q in H^1, respectively. The glueability implies that for every j ∈ [q], it holds that |U^H^1_i_j| = |U^H^2_i_j| = 1. Hence, for every entry (s_1, …, s_k, r) of T_H^1 and every j ∈ [q], it also holds that s_i_j∈{0, 1} with s_i_j = 1 if and only if v_j is put into V_1 in the partition witnessing this entry. The same holds for the entries in T_H^2. This gives the following way to compute the table T_H. It will be computed in a table S_H. We initialize this table to be empty. Then we iterate through all pairs of vectors h^1 = (s_1^1, …, s_k^1, r^1) from T_H^1 and h^2 = (s_1^2, …, s_k^2, r^2) from T_H^2. If there is an index j ∈ [q] such that s_i_j^1 ≠ s_i_j^2, then we skip this pair. Otherwise, for every 0 ≤ r ≤ r^1 + r^2, we add to S_H the vector h = (s_1, …, s_k, r) where for all i ∈ [k] s_i = s_i^1 + s_i^2 i ∉{i_1, …, i_q} s_i^1 · s_i^2 i ∈{i_1, …, i_q} and we call h^1 and h^2 compatible in this case. The table S_H contains exactly the same entries as T_H. For the one direction, let h^1 = (s_1^1, …, s_k^1, r^1) and h^2 = (s_1^2, …, s_k^2, r^2) be compatible entries of T_H^1 and T_H^2, respectively. And let (V^1_1, V^1_2) and (V^2_1,V^2_2) be the partitions witnessing h^1 in H^1 and h^2 in H^2, respectively. Also let 0 ≤ r ≤ r_1 + r_2. We claim that then (V_1, V_2) with V_1 = V^1_1 ∪ V^2_1 and V_2 = V^1_2 ∪ V^2_2 is a partition witnessing a vector h = (s_1, …, s_k, r) of S_H constructed as above in H so h belongs to T_H. First, we show that this is a partition of V(H). We have V(H) = V(H^1) ∪ V(H^2) = (V^1_1 ∪ V^1_2) ∪ (V^2_1 ∪ V^2_2) = (V^1_1 ∪ V^2_1) ∪ (V^1_2 ∪ V^2_2) = V_1 ∪ V_2. Since the sets V^1_1 and V^1_2 are disjoint and also the sets V^2_1 and V^2_2 are disjoint, we have: V_1 ∩ V_2 = (V^1_1 ∩ V^2_2) ∪ (V^2_1 ∩ V^1_2). Any vertex v in V^1_1 ∩ V^2_2 belongs to both H^1 and H^2 so there exists an index j^* ∈ [q] with v = v_q. The property v ∈ V^1_1 then implies s^1_i_j = 1 while the property v ∈ V^2_2 implies s^2_i_j = 0. So we obtain s^1_i_j≠ s^2_i_j contradicting the fact that h^1 and h^2 are compatible. Therefore, (V^1_1 ∩ V^2_2) is empty. 
A symmetric argument shows that (V^2_1 ∩ V^1_2) is empty as well. Hence V_1 and V_2 are disjoint and they indeed form a partition of V(H). Let j ∈ [q]. Since v_j is the unique vertex with label i_j in H, the set V_1 contains exactly one vertex of this label if s_i_j^1 = s_i_j^2 = 1 and zero such vertices if s_i_j^1 = s_i_j^2 = 0. So we have |V_1 ∩ U^i_j_H| = s_i_j^1 · s_i_j^2 = s_i_j. For every i ∈ [k] ∖{i_1, …, i_q} the sets U^H^1_i and V(H^2) as well as U^H^2_i and V(H^1) are disjoint by definition of {i_1, …, i_q} and therefore: |V_1 ∩ U^H| = |V_1 ∩ (U^H^1_i ∪ U^H^2_i)| = |(V_1 ∩ U^H^1_i) ∪ (V_1 ∩ U^H^2_i)| = |(V_1 ∩ U^H^1_i)| + |(V_1 ∩ U^H^2_i)| = |(V_1^1 ∩ U^H^1_i)| + |(V_1^2 ∩ U^H^2_i)| = s_i^1 + s_i^2 = s_i. Finally, we bound the number of edges in E_H(V_1, V_2). It holds that E_H^b(V^b_1, V^b_2) ⊆ E_H(V^b_1, V^b_2) ⊆ E_H(V_1, V_2) for every b ∈ [2] so we obtain E_H^1(V^1_1, V^1_2) ∪ E_H^2(V^2_1, V^2_2) ⊆ E_H(V_1, V_2). Recall that the graphs H^1 and H^2 are edge-disjoint, then we have r ≤ r^1 + r^2 ≤ |E_H^1(V^1_1, V^1_2)| + |E_H^2(V^2_1, V^2_2)| ≤ |E_H(V_1, V_2)|. So (V_1, V_2) is indeed a partition witnessing h in H. For the other direction, let h = (s_1, …, s_k, r) be an entry of T_H. Then there exists a partition (V_1, V_2) of V(H) witnessing h. We will show that there exist entries h^1 and h^2 of T_H^1 and T_H^2, respectively, such that the above algorithm adds h to the table S_H at the iteration of h^1 and h^2. We set V^j_2_j_1 = V_j_1∩ V(H^j_2) for j_1, j_2 ∈ [2]. Let j ∈ [2] be arbitrary but fixed. Since (V_1, V_2) is a partition of V(H) and we have V(H^j) ⊆ V(H), the pair (V_1^j, V_2^j) is a partition of V(H^j). Let r^j = |E_H^j(V_1^j, V_2^j)| and let h^j = (s_1^j, …, s_k^j, r^j) be the vector such that (V_1^j, V_2^j) witnesses h^j. First, consider i ∈{i_1, …, i_q}. Recall that glueability implies that s_i^j, s_i ∈{0, 1}. If s_i = 1, then we have v_i ∈ V_1 and therefore also v_i ∈ V_1^j so we obtain s_i^j = 1. Similarly, if s_i = 0, then we have v_i ∈ V_2 and therefore also v_i ∈ V_2^j so we obtain s_i^j = 0. Therefore, it holds that s_i = s_i^1 · s_i^2. Next, consider i ∈ [k] ∖{i_1, …, i_q}. The sets U^H^1_i and U^H^2_i are disjoint. Therefore, the sets V_1 ∩ U^H^1_i = V_1^1 ∩ U^H^1_i and V_1 ∩ U^H^2_i = V_1^2 ∩ U^H^2_i partition V_1 ∩ U^H_i so we obtain s_i = s_i^1 + s_i^2. Let W^1 = V(H^1) ∖ V(H^2), W^2 = V(H^2) ∖ V(H^1), and let I = V(H^1) ∩ V(H^2). For j_1, j_2 ∈ [2], let W^j_1_j_2 = W^j_1∩ V_j_2 and let I_j_2 = I ∩ V_j_2. Then W^1_j, W^2_j, and I_j partition V_j for j ∈ [2]. The following holds: E_H(V_1, V_2) = E_H(W^1_1, W_2^1) ∪ E_H(W^1_1, I_2) ∪ E_H(W^1_1, W_2^2) ∪ E_H(I_1, W_2^1) ∪ E_H(I_1, I_2) ∪ E_H(I_1, W_2^2) ∪ E_H(W^2_1, W_2^1) ∪ E_H(W^2_1, I_2) ∪ E_H(W^2_1, W_2^2) = E_H(W^1_1, W_2^1) ∪ E_H(W^1_1, I_2) ∪ E_H(W^1_1, W_2^2) ∪ E_H(I_1, W_2^1) ∪(E_H^1(I_1, I_2) ∪ E_H^2(I_1, I_2)) ∪ E_H(I_1, W_2^2) ∪ E_H(W^2_1, W_2^1) ∪ E_H(W^2_1, I_2) ∪ E_H(W^2_1, W_2^2). Therefore, the sets occurring after the second equality are pairwise disjoint so the size of E_H(V_1, V_2) is the sum of their sizes. Next, recall that every edge of H is either an edge of H^1 or of H^2 and therefore, for every edge of H, there exists an index j^* such that both end-points of this edge belong to H^j*. Therefore, the sets E_H(W^1_1, W_2^2) and E_H(W^2_1, W_2^1) are empty. 
This also implies the following equalities: E_H(W^1_1, W_2^1) = E_H^1(W^1_1, W_2^1) E_H(W^1_1, I_2) = E_H^1(W^1_1, I_2) E_H(I_1, W_2^1) = E_H^1(I_1, W_2^1) E_H(I_1, W_2^2) = E_H^2(I_1, W_2^2) E_H(W^2_1, I_2) = E_H^2(W^2_1, I_2) E_H(W^2_1, W_2^2) = E_H^2(W^2_1, W_2^2) Now by using these properties we obtain E_H(V_1, V_2) = E_H^1(W^1_1, W_2^1) ∪ E_H^1(W^1_1, I_2) ∪ E_H^1(I_1, W_2^1) ∪ E_H^1(I_1, I_2) ∪ E_H^2(I_1, I_2) ∪ E_H^2(I_1, W_2^2) ∪ E_H^2(W^2_1, I_2) ∪ E_H^2(W^2_1, W_2^2). By rearranging the terms we get E_H(V_1, V_2) = (E_H^1(W^1_1, W_2^1) ∪ E_H^1(W^1_1, I_2) ∪ E_H^1(I_1, W_2^1) ∪ E_H^1(I_1, I_2)) ∪ (E_H^2(I_1, I_2) ∪ E_H^2(I_1, W_2^2) ∪ E_H^2(W^2_1, I_2) ∪ E_H^2(W^2_1, W_2^2)). Finally, note that for j_1, j_2 ∈ [2], the pair (W^j_1_j_2, I_j_2) is a partition of V^j_1_j_2. So we obtain E_H(V_1, V_2) = E_H^1(V^1_1, V^1_2) ∪ E_H^2(V^2_1, V^2_2). Since the graphs H^1 and H^2 are edge-disjoint, we get r^1 + r^2 = |E_H^1(V^1_1, V^1_2)| + |E_H^2(V^2_1, V^2_2)| = |E_H(V_1, V_2)| = r. Therefore, at the iteration corresponding to h^1 and h^2 the algorithm indeed adds h to S_H. This concludes the proof of the correctness of the algorithm. Observe that if a graph H has n nodes, the table T_H contains n^𝒪(k) entries. Therefore, this table can be computed from T_H^1 and T_H^2 in time n^𝒪(k) as well. This results in an algorithm that given a graph H together with a reduced glue-k-expression ξ of H, traverses the nodes x of ξ in a standard bottom-up manner and computes the tables T_G^ξ_x in time n^𝒪(k). Let y denote the root of ξ. Then G^ξ_y is exactly the graph H so we output the largest integer r such that T_G^ξ_y contains an entry (s_1, …, s_k, r) for some s_1, …, s_k ∈_0. By definition, this value is then the size of the maximum cardinality cut of the graph H. Given a fuse-k-expression of a graph H, the Max Cut problem can be solved in time n^𝒪(k). Fomin et al. have also shown the following lower bound: <cit.> Let H be an n-vertex graph given together with a k-expression of H. Then the Max Cut problem cannot be solved in time f(k) · n^o(k) for any computable function f unless the ETH fails. Since any k-expression of a graph is, in particular, a fuse-k-expression of the same graph, the lower bound transfers to fuse-k-expressions as well thus showing that our algorithm is tight under ETH. Let H be an n-vertex graph given together with a fuse-k-expression of H. Then the Max Cut problem cannot be solved in time f(k) · n^o(k) for any computable function f unless the ETH fails. §.§ Edge Dominating Set In this problem, given a graph G = (V, E) we are asked about the cardinality of a minimum set X ⊆ E such that every edge in E either belongs to X itself or it has an incident edge in X. In this section, we provide a way to handle the glue-nodes in order to solve the Edge Dominating Set problem. As in the previous subsection, we rely on the dynamic programming algorithm for the clique-width parameterization by Fomin et al. and use their set of records defined as follows. For a k-labeled graph H, the table T_H contains all vectors (s_1, …, s_k, r_1, …, r_k, ℓ) of non-negative integers such that there exists a set S ⊆ E(H) and a set R ⊆ V(H) ∖ V(S) with the following properties: * |S| ≤ℓ≤ |E(H)|; * for every i ∈ [k], exactly s_i vertices of U^H_i are incident with the edges of S; * for every i ∈ [k], we have |R ∩ U^H_i| = r_i; * every edge of H undominated by S has an end-vertex in R. We say that the pair (S, R) witnesses the vector (s_1, …, s_k, r_1, …, r_k, ℓ) in H. 
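To make this record definition concrete, the following brute-force enumeration builds the table T_H directly from the definition by trying every pair (S, R). It is meant only as an executable restatement for very small sanity-check instances, not as part of the actual dynamic programming; the data representation (vertices as hashable names, edges as frozensets, a map from vertices to labels in [k]) and all function names are our own choices.

```python
from itertools import combinations

def eds_records(vertices, edges, labels, k):
    """Brute-force table T_H of Edge Dominating Set records (sketch).

    vertices: list of vertex names
    edges:    list of frozensets {u, v}
    labels:   dict vertex -> label in 1..k
    Returns the set of all witnessed vectors (s_1..s_k, r_1..r_k, ell).
    """
    m = len(edges)
    table = set()
    for size in range(m + 1):
        for S in combinations(edges, size):
            S = set(S)
            V_S = {v for e in S for v in e}                 # vertices incident with S
            undominated = [e for e in edges if not (e & V_S)]
            free = [v for v in vertices if v not in V_S]    # R must avoid V(S)
            for r_size in range(len(free) + 1):
                for R in combinations(free, r_size):
                    R = set(R)
                    # every undominated edge needs an end-vertex in R
                    if any(not (e & R) for e in undominated):
                        continue
                    s = [sum(1 for v in V_S if labels[v] == i) for i in range(1, k + 1)]
                    r = [sum(1 for v in R if labels[v] == i) for i in range(1, k + 1)]
                    for ell in range(len(S), m + 1):         # |S| <= ell <= |E(H)|
                        table.add(tuple(s) + tuple(r) + (ell,))
    return table
```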
The last property reflects that it is possible to attach a pendant edge to every vertex in R so that the set S together with these pendant edges dominates all edges of H. In the following, we will sometimes use this view in our arguments and denote the set of edges pendant at vertices of R by E^R. Note that since no vertex incident with S belongs to R, we have s_i + r_i ≤ |U^H_i| for any i ∈ [k]. In particular, for every i ∈{i_1, …, i_q}, the property r_i = 1 implies s_i = 0 and therefore r_i s_i = r_i. For their algorithm, Fomin et al. provide how to compute these tables for nodes of a k-expression of the input graph. In particular, they show that this table can be computed for a k-labeled graph H with n vertices in time n^𝒪(k) in the following cases: * If H consists of a single vertex. * If H = ρ_i → j(H') for some i, j ∈ [k] and a k-labeled graph H' given the table T_H'. * If H = η_i, j(H') for some i ≠ j ∈ [k] and a k-labeled graph H' given the table T_H'. Similarly to the previous subsection, let us mention that processing a fuse-node θ_i seems problematic (at least using the same records). Some vertices of label i might have common neighbors. So when fusing these vertices, multiple edges of the set S of a partial solution fall together but we do not know how many of them so it is unclear how to update the records correctly. For this reason, we make use of glue-expressions where, as we will see, the information stored in the records suffices. To complete the algorithm for the fusion-width parameterization, we provide a way to compute the table T_H if H = H^1 ⊔ H^2 for two glueable edge-disjoint k-labeled graphs H^1 and H^2 if the tables T_H^1 and T_H^2 are provided. Let {v_1, …, v_q} = V(H^1) ∩ V(H^2) for some q ∈_0 and let i_1, …, i_q be the labels of v_1, …, v_q in H^1, respectively. Then for every j ∈ [q], it holds that |U^H^1_i_j| = |U^H^2_i_j| = 1. Hence, for every entry (s_1, …, s_k, r_1, …, r_k, ℓ) of T_H^1 and every j ∈ [q], it holds that s_i_j + r_i_j≤ 1. The same holds for the entries in T_H^2. This motivates the following way to compute the table T_H. It will be computed in a table S_H. We initialize this table to be empty. Then we iterate through all pairs of vectors h^1 = (s_1^1, …, s_k^1, r_1^1, …, r_k^1, ℓ^1) from T_H^1 and h^2 = (s_1^2, …, s_k^2, r_1^2, …, r_k^2, ℓ^2) from T_H^2 and for every ℓ^1 + ℓ^2 ≤ℓ≤ |E(H)|, we add the vector (s_1, …, s_k, r_1, …, r_k, ℓ) defined as follows. For every i ∈ [k] ∖{i_1, …, i_q}, it holds that s_i = s_i^1 + s_i^2 and r_i = r_i^1 + r_i^2. And for every i ∈{i_1, …, i_q}, it holds that s_i = s_i^1 s_i^2 and r_i = s_i^1 s_i^2 (r_i^1 r_i^2). The table S_H contains exactly the same entries as T_H. For the one direction, let h^1 = (s_1^1, …, s_k^1, r_1^1, …, r_k^1, ℓ^1) be an entry of T_H^1 and h^2 = (s_1^2, …, s_k^2, r_1^2, …, r_k^2, ℓ^2) be an entry of T_H^2. So for j ∈ [2], there exists a pair (S^j, R^j) witnessing h^j in H^j. Also let ℓ∈_0 be such that ℓ^1 + ℓ^2 ≤ℓ≤ |E(H)| and let h = (s_1, …, s_k, r_1, …, r_k, ℓ) be the entry constructed by the algorithm from h^1 and h^2. We now show how to construct a pair (S, R) witnessing h in H. First, let S = S^1 ∪ S^2. Then we have |S| ≤ |S^1| + |S^2| ≤ℓ^1 + ℓ^2 ≤ℓ. Now for every i ∈ [k], we determine the number s_i' of vertices in U^H_i = U^H^1_i_1∪ U^H^2_i_2 incident with an edge of S. For i ∈ [k] ∖{i_1, …, i_q}, the sets U^H^1_i_1 and U^H^2_i_2 are disjoint so we obtain s_i' = s^1_i + s^2_i = s_i. 
For every j ∈{1, …, q}, the value s_i_j' reflects whether the vertex v_j has an incident edge in S. Similarly, the values s_i_j^1 and s_i_j^2 reflect whether v_j has an incident edge in S^1 and S^2, respectively. Due to S = S^1 ∪ S^2, we obtain s_i_j' = s_i_j^1 s_i_j^2 = s_i_j. Altogether, we obtain s_i = s_i' for every i ∈ [k]. Next we set R = (R^1 ∪ R^2) ∖ V(S). Now for every i ∈ [k], let r_i' denote the size of R ∩ U^H_i, i.e., the number of vertices with label i that have a pendant edge attached to it. Recall that we have U^H_i = U^H^1_i ∪ U^H^2_i. First, consider i ∈ [k] ∖{i_1, …, i_q}. In this case, the sets U^H^1_i and U^H^2_i are disjoint. We claim that in this case we simply have R ∩ U^H_i = (R^1 ∪ R^2) ∩ U^H_i. Consider a vertex v ∈ R^1 ∩ U^H^1_i. Since (S^1, R^1) witnesses h^1 in H, the vertex v has no incident edge in S^1 and since v does not belong to H^2, it also has no incident edge in S^2. So v has no incident edge in S and therefore belongs to R. The symmetric argument shows that the vertices of R^2 ∩ U^H^2_i belong to R. So we obtain R ∩ U^H_i = (R^1 ∪ R^2) ∩ U^H^1_i and since the sets R^1 ∩ U^H^1_i and R^2 ∩ U^H^2_i are disjoint, we get |R ∩ U^H_i| = |R^1 ∩ U^H^1_i| + |R^2 ∩ U^H^2_i| = r^1_i + r^2_i = r_i. Now let j ∈ [q]. Recall that there exists a unique vertex v_j ∈ U^H_i_j. Also, the vertex v_j is the unique vertex in the set U^H^1_i_j as well as in U^H^2_i_j. By construction, this vertex belongs to R if and only if it belongs to R^1 ∪ R^2 and has no incident edge in S, i.e., r_i_j' = (r^1_i_j r^2_i_j) s_i_j = (r^1_i_j r^2_i_j) (s^1_i_j s^2_i_j) = (r^1_i_j r^2_i_j) s^1_i_j s^2_i_j = r_i_j. So we obtain r_i' = r_i for every i ∈ [k]. It remains to prove that pendant edges from R^E dominate all edges of E(H) undominated by S. So let e be an edge of E(H) undominated S. Without loss of generality, assume that e belongs to H^1. First, e is an edge of H^1 undominated by S^1 and therefore, it has an end-point v in R^1. Second, since e is not dominated by S, in particular, the vertex v has no incident edge in S and therefore, by construction, the edge e also belongs to R as desired. Altogether, we have shown that (S, R) witnesses h in H and therefore, the vector h belongs to T_H. For the other direction, we consider a vector h = (s_1, r_1, …, s_k, r_k, ℓ) from T_H. Let (S, R) be the pair witnessing h in H. For j ∈ [2], let S^j = S ∩ E(H^j) and let ℓ^j = |S^j|. We then have ℓ^j ≤ |E(H^j)|. Since the graphs H^1 and H^2 are edge-disjoint, we obtain ℓ^1 + ℓ^2 = |S^1| + |S^2| = |S| ≤ℓ. For i ∈ [k] and j ∈ [2], let s_i^j = |S^j ∩ U^H^j_i|. First, let i ∈ [k] ∖{i_1, …, i_q}, j ∈ [2], and let v be a vertex from U^H_i ∩ V(H^j). Recall that U^H^1_i and U^H^2_i are disjoint. Then by construction, the vertex v has an incident edge in S if and only if it has one in S^j. This, together with, again, the disjointness of U^H^1_i and U^H^2_i implies that s_i = s_i^1 + s_i^2 holds. Now let j ∈ [q]. Then the unique vertex v_j ∈ U^H_i_j has an incident edge in S if and only if it has an incident edge in S^1 or in S^2, i.e., we have s_i_j = s^1_i_j s^2_i_j. Now we construct the sets R^1 and R^2 from S and R as follows. For j ∈ [2], we set R^j = (R ∩ V(H^j)) ∪{v_p | p ∈ [q], s_i_p = 1, s_i_p^j = 0}. And for i ∈ [k] and j ∈ [2], we set r^j_i = R^j ∩ U^H^j_i. Observe that for i ∈{i_1, …, i_q} and j ∈ [2], we then have r^j_i = r_i (s_i s_i^j) and for i ∈ [k] ∖{i_1, …, i_1}, we have r_i = r^1_i + r^2_i since U^H^1_i and U^H^2_i are disjoint. Let now j ∈ [2] be arbitrary but fixed. 
We show that the edges of S^j together with pendant edges at vertices in R^j dominate all edges of H^j. So let e be an edge of H^j. Since e is also an edge of H, it is dominated by S ∪ E^R. So there exists an end-vertex u of e such that u is incident with an edge in S or u belongs to R. If u belongs to R, then by construction u also belongs to R^j and so e is dominated by a pendant edge from E^R^j. So we now may assume that u does not belong to R and it has an incident edge e' of H in S. First, assume that we have u ∉{v_1, …, v_q}. Since u is not a glue-vertex, any edge of H incident with u must be an edge of H^j, i.e., we have e' ∈ S ∩ E(H^j) = S^j. So e is dominated by S^j. Now we may assume that u ∈{v_1, …, v_q} holds and let p ∈ [q] be such that u = v_p. Suppose e is not dominated by S^j and the edges pendant at R^j. In particular, it implies that u does not belong to R^j. Since u = v_p is the unique vertex in U^H^j_i_p, we have s^j_i_p = 0 and r^j_i_p = 0. Recall that by the above assumption, the vertex u has an incident edge in S, i.e., s_i_p = 1. But this contradicts the equality <ref>. Thus, the edge e is dominated by S^j ∪ E^R^j. Since e was chosen arbitrarily, this holds for any edge of H^j. Altogether, we have shown that (S^j, R^j) witnesses h^j in H^j and therefore, h^j is an entry of T_H^j. It remains to show that at the iteration corresponding to h^1 and h^2, the algorithm adds h to S_H. So let (s_1', …, s_k', r_1', …, r_k', ℓ') be an entry added to S_H such that ℓ' = ℓ holds. Above we have shown that ℓ^1 + ℓ^2 ≤ℓ holds so such an entry indeed exists. Also, we have already shown that s_i = s_i^1 + s_i^2 for any i ∈ [k] ∖{i_1, …, i_q} and s_i = s_i^1 s_i^2 for any i ∈{i_1, …, i_q}. So it holds that s_i = s_i' for any i ∈ [k]. It remains to show that r_i = r_i' holds for any i ∈ [k] as well. Recall that by the construction of the algorithm, we have r_i' = r_i^1 + r_i^2 for any i ∈ [k] ∖{i_1, …, i_q}. The equality (<ref>) then implies that r_i = r_i' holds. For i ∈{i_1, …, i_q} the algorithm sets r_i' = s_i^1 s_i^2 (r_i^1 r_i^2). We then obtain r_i' = s_i^1 s_i^2 (r_i^1 r_i^2) (<ref>)= s_i^1 s_i^2 ((r_i [s_i s_i^1]) (r_i [s_i s_i^2])) s_i = s_i^1 s_i^2= s_i^1 s_i^2 ((r_i [(s_i^1 s_i^2) s_i^1]) (r_i [(s_i^1 s_i^2) s_i^2])) = s_i^1 s_i^2 ((r_i (s_i^2 s_i^1)) (r_i (s_i^1 s_i^2))) = s_i^1 s_i^2 (r_i (s_i^2 s_i^1) (s_i^1 s_i^2)) = s_i^1 s_i^2 r_i = (s_i^1 s_i^2) r_i = s_i r_i (<ref>)= r_i So we indeed obtain r_i = r_i' for every i ∈ [k]. Therefore, at the iteration corresponding to the entries h^1 and h^2 of T_H^1 and T_H^2, respectively, the algorithm indeed adds the entry h to S_H. Altogether, we obtain that S_H = T_H and the provided algorithm indeed computes the table T_H given the tables T_H^1 and T_H^2. Observe that if a graph H has n nodes, the table T_H contains n^𝒪(k) entries. Therefore, this table can be computed from T_H^1 and T_H^2 in time n^𝒪(k) as well. This results in an algorithm that given a graph H together with a reduced glue-k-expression ξ of H, traverses the nodes x of the expression in a standard bottom-up manner and computes the tables T_G^ξ_x in time n^𝒪(k). Let y denote the root of ξ. Then G^ξ_y is exactly the graph H. As noted by Fomin et al., the size of the minimum edge dominating set of H is the smallest integer ℓ such that the table T_G^ξ_y contains an entry (s_1, …, s_k, 0, …, 0, ℓ) for some s_1, …, s_k ∈_0. So the algorithm outputs this value. Given a fuse-k-expression of a graph H, the Edge Dominating Set problem can be solved in time n^𝒪(k). 
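The glue-node combination translates almost verbatim into code. The sketch below assumes that tables are stored as sets of integer tuples (s_1, …, s_k, r_1, …, r_k, ℓ) and reads the glue-label rule the way the proof establishes it: the unique glued vertex of such a label is matched on the glued graph iff it is matched on at least one side, and it carries a pendant edge only if it is unmatched on both sides and carries a pendant edge on at least one side. All names and the tuple encoding are ours.

```python
from itertools import product

def combine_eds_tables(T1, T2, glue_labels, k, m_H):
    """Combine EDS tables at a glue-node H = H^1 glue H^2 (sketch).

    T1, T2:      sets of tuples (s_1..s_k, r_1..r_k, ell)
    glue_labels: the labels i_1..i_q of the glued vertices
    m_H:         number of edges of the glued graph H
    """
    S_H = set()
    for h1, h2 in product(T1, T2):
        s1, r1, l1 = h1[:k], h1[k:2 * k], h1[2 * k]
        s2, r2, l2 = h2[:k], h2[k:2 * k], h2[2 * k]
        s, r = [], []
        for i in range(k):
            if (i + 1) in glue_labels:
                # unique glued vertex of this label: boolean combination of both sides
                si = 1 if (s1[i] or s2[i]) else 0
                ri = 1 if (not s1[i] and not s2[i] and (r1[i] or r2[i])) else 0
            else:
                # disjoint label classes: counts simply add up
                si = s1[i] + s2[i]
                ri = r1[i] + r2[i]
            s.append(si)
            r.append(ri)
        for ell in range(l1 + l2, m_H + 1):
            S_H.add(tuple(s) + tuple(r) + (ell,))
    return S_H
```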
Fomin et al. have also shown the following lower bound: <cit.> Let H be an n-vertex graph given together with a k-expression of H. Then the Edge Dominating Set problem cannot be solved in time f(k)n^o(k) for any computable function f unless the ETH fails. Since any k-expression of a graph is, in particular, its fuse-k-expression, the lower bound transfers to fuse-k-expressions as well thus showing that our algorithm is tight under ETH. Let H be an n-vertex graph given together with a fuse-k-expression of H. Then the Edge Dominating Set problem cannot be solved in time f(k)n^o(k) for any computable function f unless the ETH fails. §.§ Hamiltonian Cycle In this problem, given a graph G = (V, E) we are asked about the existence of a cycle visiting each vertex exactly once. Here we provide an algorithm solving this problem. Similarly, to the previous two problems, we will rely on the existing algorithm for the parameterization by clique-width. The algorithm is by Bergougnoux et al. and runs in time n^𝒪() <cit.>. We will show how to handle glue-nodes in the same running time. We will follow the general idea for union-nodes from the original paper. However, with multiple vertices being glued, the situation becomes more complicated and the proof of correctness gets more involved. We start with some preliminary definitions, most of which were already introduced by Bergougnoux et al <cit.>. A path packing is a graph such that each of its connected components is a path. We say that a path packing is a path packing in H if is a subgraph of H. A maximal path packing of a graph H is a spanning subgraph of H that is a path packing. Note that no restrictions on the length of the paths are imposed so in particular, paths consisting of a single vertex are allowed. With a slight abuse of notation, depending on the context, we will refer to as a graph or as a set of paths. If not stated otherwise, speaking about paths in a path packing we always refer to its connected components, i.e., maximal paths in . We sometimes refer to maximal path packings of a graph as partial solutions and we denote the set of all partial solutions of a graph H by Π(H). With a (not necessarily maximal) path packing in a k-labeled graph H we associate an auxiliary multigraph _H() on the vertex set [k] such that for every i ≠ j ∈ [k], the multiplicity of the edge {i, j} is equal to the number of paths in whose end-points have labels i and j; and for every i ∈ [k], the multiplicity of the loop at the vertex i is equal to the number of paths whose both end-vertices have label i (in particular, this contains the paths consisting of a single vertex of label i). Note that this multigraph depends on the labeling of H. The edges of such a multigraph will often be referred to as red, this will allow us to speak about red-blue trails later. In their work, Bergougnoux et al. use the technique of so-called representative sets <cit.>. For two maximal path packings _1 and _2 of a k-labeled graph H they write _1 ≃_H _2 if for every i ∈ [k], it holds that __H(_1)(i) = __H(_2)(i) and the graphs _H(_1) and _H(_2) have the same set of connected components. This defines an equivalence relation on Π(H). For a set ⊆Π(H) of partial solutions, the operation _H() returns a set containing one element of each equivalence class of / ≃_H. The following has been shown by Bergougnoux et al. <cit.>: For every ⊆Π(H), we have |()| ≤ n^k · 2^k(log_2(k)+1) and we can moreover compute _H() in time 𝒪(|| · n k^2 log_2(nk)). 
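The auxiliary multigraph and the equivalence ≃_H depend only on the labels of the path end-points, so both are cheap to compute. The sketch below fixes one possible representation: a path packing is a list of vertex sequences (a single vertex is a zero-length path), labels are a map from vertices to [k], and the signature returned by the second function (the degree of every label in the auxiliary multigraph together with its connected components, found with a union–find over labels) is what one could key the reduce operation on. The representation and names are ours, and we use the convention that a loop contributes two to the degree of its label.

```python
from collections import Counter

def aux_multigraph(paths, labels):
    """Auxiliary multigraph of a path packing over the label set.

    Returns a Counter mapping unordered label pairs (i, j), i <= j, to edge
    multiplicities; the pair (i, i) encodes a loop at label i.
    """
    aux = Counter()
    for p in paths:
        a, b = labels[p[0]], labels[p[-1]]
        aux[(min(a, b), max(a, b))] += 1
    return aux

def equivalence_signature(paths, labels, k):
    """Signature of the equivalence class of a path packing: label degrees in the
    auxiliary multigraph plus its set of connected components."""
    aux = aux_multigraph(paths, labels)
    deg = [0] * (k + 1)
    parent = list(range(k + 1))                 # union-find over labels 1..k

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]       # path halving
            x = parent[x]
        return x

    for (i, j), mult in aux.items():
        deg[i] += mult
        deg[j] += mult                          # a loop (i, i) adds 2 to deg[i]
        parent[find(i)] = find(j)
    roots = {find(l) for l in range(1, k + 1)}
    comps = frozenset(frozenset(l for l in range(1, k + 1) if find(l) == c)
                      for c in roots)
    return tuple(deg[1:]), comps
```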
In the following, we will work a lot with multigraphs on the vertex set [k]. For two such multigraphs A and B, with A ⊎ B we denote the multigraph on the vertex set [k] such that the multiplicity of every edge is given by the sum of multiplicities of this edge in A and B. As in the work of Bergougnoux et al. <cit.>, we say that the edges of a multigraph on the left resp. right of ⊎ are colored red resp. blue. They also use the following notion of representativity. Let _1, _2 ⊆Π(H) be two families of partial solutions of a k-labeled graph H. We write _1 ≲_H _2 and say that _1 H-represents _2 if, for every multigraph on the vertex set [k] whose edges are colored blue, whenever there exists a path packing _2 ∈_2 such that _H(_2) ⊎ admits , there also exists a path packing _1 ∈_1 such that _H(_1) ⊎ admits a red-blue Eulerian trail, where a red-blue Eulerian trail is a closed walk containing each edge exactly once and such that red and blue edges alternate on this walk. Crucially, they have shown the following lemma: For every ⊆Π(H), it holds that _H() ≲_H. Together with <ref>, this allows to keep the number of partial solutions maintained along the dynamic programming small. Recall that we aim at handling glue-nodes. As in standard algorithms based on representative sets, our goal is the following: given two k-labeled glueable edge-disjoint graphs H_1 and H_2 and families _1 ≲_H_1Π(H_1) and _2 ≲_H_2Π(H_2) of partial solutions of H_1 and H_2, respectively, we would like to compute a family of partial solutions of H = H_1 ⊔ H_2 with ≲_H Π(H) such that has bounded size. After that, the operation _H can be applied to to obtain a smaller representative. Bergougnoux et al. have shown that for two vertex-disjoint graphs H_1 and H_2, the set of partial solutions of the graph H_1 ⊕ H_2 can be computed by simply iterating through all partial solutions _1 of H_1 and _2 of H_2 and forming their union _1 ∪_2 <cit.>. For glue-nodes our process will be analogous but there is more interaction between partial solutions. At a glue-node, multiple paths in partial solutions _1 and _2 can be glued together. First, this can result in longer paths that contain several paths of _1 and _2 as subpaths (see <ref> (a)). But also, the glueing can create cycles (see <ref> (b)) as well as vertices of degree more than two (see <ref> (c)) so that the result of gluing of two partial solutions of H_1 and H_2, respectively, is not a partial solution of H_1 ⊔ H_2 anymore. Now we formalize this. Along this section, let H_1 and H_2 be two k-labeled glueable edge-disjoint graphs. First, we show that the set of partial solutions of H_1 ⊔ H_2 can be obtained in a natural way by gluing all pairs of partial solutions of H_1 and H_2 and then filtering out graphs that are not path packings. For a family ℋ of graphs, by (ℋ) we denote the set of all path packings in ℋ. Clearly, the set (ℋ) can be computed in time polynomial in the cardinality of ℋ and the largest graph in ℋ. Let H_1 and H_2 be two edge-disjoint graphs. And let S = ({_1 ⊔_2 |_1 ∈Π(H_1), _2 ∈Π(H_2)). Then it holds that S = Π(H_1 ⊔ H_2). For the one direction, let ∈ S and let _1 ∈Π(H_1), _2 ∈Π(H_2) be such that = _1 ⊔_2. First, recall that _1 resp. _2 contains all vertices of H_1 resp. H_2 and we have V(H_1 ⊔ H_2) = V(H_1) ∪ V(H_2). So contains all vertices of H_1 ⊔ H_2. Second, since S is the result of the application of the operator, the graph is a path packing. Therefore, the graph is a maximal path packing of H, i.e., ∈Π(H_1 ⊔ H_2), and we obtain S ⊆Π(H_1 ⊔ H_2). 
For the other direction, consider a path packing ∈Π(H_1 ⊔ H_2). For i ∈ [2], let _i = { Q | Q is an inclusion-maximal subpath of some path in such that all edges of Q belong to H_i} Clearly, _i is a subgraph of H_i and it is a path packing due to being a subgraph of . Each vertex v of V(H_i) ⊆ V(H_1 ⊔ H_2) = V(H_1) ∪ V(H_2) lies on exactly one path, say P, in . Then there is a unique inclusion-maximal subpath Q of P containing v that uses edges of H_i. By definition, the subpath Q belongs to _i. Therefore, the set _i is a maximal path packing of H_i, i.e., _i ∈Π(H_i). It remains to show that _1 ⊔_2 =. Since _1 and _2 are maximal path packings of H_1 and H_2, respectively, the graph _1 ⊔_2 contains all vertices of H_1 ⊔ H_2, i.e., all vertices of . For i ∈ [2], every edge of E(H_i) ∩ E() is contained in a unique maximal subpath Q of some path in such that this subpath contains the edges of H_i only, i.e., Q ∈_i. Therefore, we have _1 ⊔_2 =. And since is a path packing, the operation does not discard it. So we obtain Π(H_1 ⊔ H_2) ⊆ S and this concludes the proof. As the next step, we will show that the representativity is maintained in this process, formally: Let H_1 and H_2 be two glueable edge-disjoint k-labeled graphs. Further, let _1 ≲_H_1Π(H_1) and _2 ≲_H_2Π(H_2). Then for the set S defined by S = ({_1 ⊔_2 |_1 ∈_1, _2 ∈_2}) it holds that S ≲_H_1 ⊔ H_2Π(H_1 ⊔ H_2). Further, given _1 and _2, the set S can be computed in (|_1| |_2|). This lemma will be the key component of our procedure for glue-nodes. In the remainder of this subsection we mostly concentrate on the proof of this lemma. It will follow the general idea behind the proof of Bergougnoux et al. for union-nodes <cit.> but the technicalities will become more involved. We start with some simple claims. Let H be a k-labeled graph, let i ∈ [k] be such that there exists a unique vertex v of label i in H, and let be a path packing in H that contains v. Then for the unique path P ∈ containing v, it holds that: * If P has length zero, then __H()((v)) = 2 and there is a loop at (v) in _H. * If P has non-zero length and v is an end-vertex of P, then __H()((v)) = 1. * If P has non-zero length and v is an internal vertex of P, then __H()((v)) = 0. In particular, we have __H()((v)) = 2 - _(v). We can apply this observation to glue-vertices as follows: Let H_1 and H_2 be two k-labeled glueable edge-disjoint graphs and let v ∈ V(H_1) ∩ V(H_2) be a glue-vertex of label i for some i ∈ [k]. Further let _1 and _2 be path packings in H_1 and H_2, respectively, both containing v such that the graph _1 ⊔_2 is a path packing. Then it holds that __H_1 ⊔ H_2(_1 ⊔_2)((v)) = __H_1(_1)((v)) + __H_2(_2)((v)) - 2. The vertex v is the unique vertex of label i in H_1 ⊔ H_2. Then we have __H_1 ⊔ H_2(_1 ⊔_2)((v)) <ref>= 2 - __1 ⊔_2(v) E(H_1) ∩ E(H_2) = ∅= 2 - (__1(v) + __2(v)) <ref>= 2 - ((2 - __H_1(_1)((v))) + (2 - __H_2(_2)((v)))) = __H_1(_1)((v)) + __H_2(_2)((v)) - 2. Let H be a k-labeled graph and let and ' be path packings in H with V() = V('). Further, let be a multigraph on the vertex set [k] such that each of the graphs _H() ⊎ and _H(') ⊎ admits . Finally, let v ∈ V() be a vertex of unique label in H, i.e., |_H^-1(_H(v))| = 1. Then the graphs _H() and _H(') have the same degree sequence and in particular, the following properties hold: * The vertex v is an internal vertex of a path in iff v is an internal vertex of a path in '. * The vertex v is an end-vertex of a non-zero length path in iff v is an end-vertex of a non-zero length path in '. 
* The vertex v forms a zero-lentgh path in iff v forms a zero-lentgh path in '. Since each of the graphs _H() ⊎ and _H(') ⊎ admits , for every i ∈ [k] we have __H()(i) = _(i) = __H(')(i) by a result of Kotzig <cit.>. Therefore, the graphs _H() and _H(') have the same degree sequence. The remainder of the claim follows by <ref>. With these technical lemmas in hand, we can now prove <ref>. In the proof we will work a lot with multigraphs on vertex set [k]. For i ∈ [k], by _i we will denote a loop at the vertex i. Similarly, for i, j ∈ [k], by e_i, j we denote an edge with end-points i and j where i = j is allowed. This edge is not necessarily unique so with a slight abuse of notation, this way we denote one fixed edge clear from the context between these vertices. If ℋ is a multigraph and e = e_i, j resp. e = _i, by ℋ∪̇{e} we denote the multigraph arising from ℋ by adding an edge with end-points i and j resp. adding a loop at i. Here, ∪̇ emphasizes that we add a new edge and in particular, increase the number of edges in the multigraph. Similarly, by ℋ - e we denote the multigraph arising from ℋ after removing the edge e and emphasize that e was present in ℋ. For simplicity, we denote D = H_1 ⊔ H_2. Along this proof no relabeling occurs so every vertex of H_1 resp. H_2 has the same label in H. For this reason we omit the subscripts of labeling functions to improve readability: the label of a vertex v is simply denoted by (v). Now let ∈Π(D) be a maximal path packing of D. By <ref>, there exist maximal path packings _1 ∈Π(H_1) and _2 ∈Π(H_2) such that = _1 ⊔_2. Further, let be a blue multigraph on the vertex set [k] such that _H() ⊎ admits , say T. To prove the lemma, we need to show that there exists a maximal path packing ' ∈ S = ({_1' ⊔_2' |_1' ∈_1, _2' ∈_2}) of D such that _H(') ⊎ admits a red-blue Eulerian trail as well, i.e., there exist maximal path packings '_1 ∈_1 and '_2 ∈_2 such that '_1 ⊔'_2 is a path packing and _H(_1' ⊔'_2) ⊎ admits a red-blue Eulerian trail. Let t = |_2| and let us fix some ordering _2 = {P^1, …, P^t} of the paths (i.e., connected components) in _2. For i ∈ [t]_0, we define ^i = _2 ∖{P^1, …, P^i}. Now we will construct blue multigraphs ^0, …, ^t with the following properties. There exist blue multigraphs ^0, …, ^t such that for every i ∈ [t]_0, the following two properties hold. First, the multigraph _D(_1 ⊔^i) ⊎^i admits , we fix one and denote it by T^i. And second, if i > 0 and _1' is a maximal path packing of H_1 such that _D(_1' ⊔^i) ⊎^i admits , then _D(_1' ⊔^i-1) ⊎^i-1 also admits . Along the proof of the claim, we will inter alia show that if the graph _1' ⊔^i (as above in the claim) is a path packing, then _1' ⊔^i-1 is also a path packing, i.e., _D(_1' ⊔^i-1) is well-defined. Recall that _1 ⊔^i-1 (a subgraph of _1 ⊔_2) is a path packing and by <ref>, _D(_1 ⊔^i-1) has the same degree sequence as _D(_1' ⊔^i-1). So to prove that _1' ⊔^i-1 is a path packing, it will suffice to show its acyclicity. The proof is by induction. Base case i = 0: Since ^0 = _2, we can use ^0 = and the statement is true by using T^0 = T. So now let 1 ≤ i ≤ t and suppose the statement holds for i - 1. We make a case distinction based on the path P := P^i. For simplicity of notation, after we construct ^i in each case, with _1' we refer to any maximal path packing of H_1 such that _D(_1' ⊔^i) ⊎^i admits . We emphasize that in every case, the construction of ^i and T^i will be independent of _1'. The cases 1.2 and 2 will be similar to a union-node as handled by Bergougnoux et al. 
<cit.> while the remaining cases are different. Case 1.1 First, suppose that the path P has zero length and the unique vertex, say v, of P is a glue-vertex. Since _1 and _1' are maximal path packings of H_1, both of them contain v on some path. So in this case, we simply have _1 ⊔^i = _1 ⊔ (^i-1 - P) = _1 ⊔^i - 1 and _1' ⊔^i = '_1 ⊔ (^i-1 - P) = _1' ⊔^i - 1 (here and in the analogous equalities we treat a path packing as a set of vertex-disjoint paths and the operation - P removes the path P from it in terms of set difference). Therefore, ^i := ^i-1 and T^i := T^i-1 satisfy the desired condition. Case 1.2 Now again suppose that P is a zero-length path but now the unique vertex, say v, of P is not a glue-vertex. Then we have _1 ⊔^i = _1 ⊔ (^i-1 - P) = (_1 ⊔^i-1) - P and _1' ⊔^i = _1' ⊔ (^i-1 - P) = (_1' ⊔^i-1) - P So _D(_1 ⊔^i) = _D(_1 ⊔^i-1) - _(v) and _D(_1' ⊔^i) = _D(_1' ⊔^i-1) - _(v). Since T^i-1 is of _D(_1 ⊔^i-1) ⊎^i-1, the loop _(v) occurs on it. So let e^1 resp. e^2 be the blue edge in ^i-1 preceding resp. following _(v) in T^i-1. And let a resp. b be the label such that a and (v) resp. b and (v) are the end-points of e^1 resp. e^2. We then define ^i = (^i-1 - e^1 - e^2) ∪̇{e_a, b}. Now on the one hand, we can easily obtain T^i of _D(_1 ⊔^i) ⊎^i by taking T^i-1 and replacing the subtrail e^1, _(v), e^2 with a blue edge e_a, b in it (see <ref>): by (<ref>) and (<ref>), each edge is indeed visited exactly once. For the other direction, let S^i be of _D(_1' ⊔^i) ⊎^i. Then we can obtain of _D(_1' ⊔^i-1) ⊎^i-1 by taking S^i and replacing the occurrence of the blue edge e_a, b by a subtrail e^1, _(v), e^2 (see <ref>): By (<ref>) and (<ref>), each edge is again visited exactly once. This concludes the proof for Case 1.2. Now it remains to prove the claim for the case that P has non-zero length. We further distinguish on whether P contains glue-vertices and if so, how many of them are end-vertices of P. Let v and w denote the end-vertices of P. Case 2 Suppose the path P does not contain glue-vertices. This case is very similar to Case 1.2 but we provide the proof for completeness. It again holds that _1 ⊔^i = _1 ⊔ (^i-1 - P) = (_1 ⊔^i-1) - P and _1' ⊔^i = _1' ⊔ (^i-1 - P) = (_1' ⊔^i-1) - P. So _D(_1 ⊔^i) = _D(_1 ⊔^i-1) - e_(v), (w) and _D(_1' ⊔^i) = _D(_1' ⊔^i-1) - e_(v), (w). Without loss of generality, we may assume that in T^i-1 the edge e_(v), (w) is traversed from (v) to (w): otherwise, we can use the reverse of T^i-1 instead. Let again e^1 resp. e^2 be the blue edge in ^i-1 preceding resp. following the red edge e_(v), (w) in T^i-1. And let a resp. b be the labels such that e^1 resp. e^2 has end-vertices a and (v) resp. b and (w). Then we define ^i = (^i-1 - e^1 - e^2) ∪̇{e_a, b}. Now on the one hand, we can easily obtain T^i of _D(_1 ⊔^i) ⊎^i by taking T^i-1 and replacing the subtrail e^1, e_(v), (w), e^2 with a blue edge e_a, b in it (see <ref>): By (<ref>) and (<ref>), each edge is indeed visited exactly once. For the other direction, let S^i be of _D(_1' ⊔^i) ⊎^i. Then we can obtain of _D(_1' ⊔^i-1) ⊎^i-1 by replacing the occurrence of the blue edge e_a, b in S^i by a subtrail e^1, e_(v), (w), e^2 (see <ref>): By (<ref>) and (<ref>), each edge is again visited exactly once and the claim holds. This concludes the proof for the Case 2. From now on, me may assume that P has non-zero length and contains at least one glue-vertex. Let u_1, …, u_q denote all internal glue-vertices of P for some q ∈_0. Since these vertices are internal, we have __2(u_j) = 2 for every j ∈ [q]. 
Since _1 ⊔_2 is a maximal path packing of D, the path packing _1 resp. _2 is maximal for H_1 resp. H_2, and the graphs H_1 and H_2 are edge-disjoint, we have 2 ≥__1 ⊔_2(u_j) = __1(u_j) + __2(u_j) = __1(u_j) + 2. Therefore, we have __1(u_j) = 0 i.e., u_j forms a zero-length path in _1. Since the path packing ^i does not contain u_j by construction, the vertex u_j also forms a zero-length path in _1 ⊔^i. Next recall that u_j is a glue-vertex so it is a unique vertex with label (u_j) in H_1 and H_2. Then if there exists a blue multigraph such that each of _D(_1 ⊔^i) ⊎ and _D(_1' ⊔^i) ⊎ admits , then by <ref>, the vertex u_i also forms a path in _1' ⊔^i (and hence, also in _1'). We will apply this property to = ^i to prove the latter direction of <ref>. As mentioned before, the previous two cases were very similar to the correctness of the procedure for a union-node in the clique-width algorithm (see <cit.>). In the remaining cases, the approach will be more involved but still natural. In the remainder of the paragraph we try to sketch this process and the main difficulty of these cases. The details will become clear in the description of the remaining cases though. As before, we will replace a certain subtrail A of T^i-1 of _D(_1 ⊔^i-1) ⊎^i-1 by a sequence B of edges to obtain T^i of _D(_1 ⊔^i) ⊎^i. In the previous cases, the sequence B consisted of a single edge. So when we then considered S^i of _D('_1 ⊔^i) ⊎^i, it was easy to replace this edge by A again to obtain S^i-1 of _D('_1 ⊔^i-1) ⊎^i-1. In the following cases, the situation might become less simple since B possibly consists of multiple edges there. For this reason, first, B does not necessarily occur as a subtrail in S^i, i.e., the edges of B do not necessarily occur consecutively. And second, some of the edges of B possibly do not even occur in _D('_1 ⊔^i). Therefore, the construction of S^i-1 from S^i is less straight-forward in these cases. Therefore, in general it is not possible to simply replace B with A to obtain S^i-1. For this reason, a more careful analysis is required to show that such a S^i-1 of _D(_1' ⊔^i-1) still exists. Now we move on to details. First, observe that if at least one of the vertices v and w is not a glue-vertex, the acyclicity of _1' ⊔^i also implies the acyclicity of _1' ⊔^i-1 so _1' ⊔^i-1 is a path packing. We will come back to this issue in Case 3.3 where both v and w are glue-vertices. Case 3.1 First, assume that neither v nor w is a glue-vertex. Then P is a path in _1 ⊔^i-1. Since we know by assumption that P contains a glue-vertex, we have q > 0 and it holds that _1 ⊔^i = ((_1 ⊔^i-1) - P) ∪̇{{u_1}, …, {u_q}}. And hence _D(^1 ⊔^i) = (_D(_1 ⊔^i-1) - e_(v), (w)) ∪̇ {_(u_1), …, _(u_q)}. As before, we may assume that the edge e_(v), (w) is traversed from (v) to (w) in T^i-1. So let again e^1 resp. e^2 be the blue edge in ^i-1 preceding resp. following e_(v), (w) in T^i-1. And let a resp. b be the label such that the end-vertices of e^1 resp. e^2 are a and (v) resp. b and (w). We then set ^i = (^i-1 - e^1 - e^2) ∪̇{e_a, (u_1), e_(u_q), b}∪̇{e_(u_d), (u_d+1)| d ∈ [q-1]}. Then we can obtain T^i of _D(_1 ⊔^i) ⊎^i by taking T^i-1 and replacing the subtrail e^1, e_(v), (w), e^2 with the sequence L given by L = e_a, (u_1), _(u_1), e_(u_1), (u_2), _(u_2), …, e_(u_q-1), (u_q) _(u_q), e_(u_q), b (see <ref>). Note that by (<ref>) and (<ref>), the trail T^i indeed contains all edges. On the other hand, recall two properties. First, every vertex in u_1, …, u_q forms a zero-length path in _1' (argued above). 
And second u_1, …, u_q is the set of glue-vertices on P. Thus, P forms a path in _1' ⊔^i-1 and we have _1' ⊔^i = ((_1' ⊔^i-1) - P) ∪̇{{u_1}, …, {u_q}}. Therefore _D(_1' ⊔^i) = (_D(_1' ⊔^i-1) - e_(v), (w)) ∪̇ {_(u_1), …, _(u_q)}. Now consider S^i of _D('_1 ⊔^i) ⊎^i. We make the following observation. For every j ∈ [q], since the only red edge incident with (u_j) in _D(_1' ⊔^i) is a loop, there are exactly two blue edges, say f^1 and f^2, incident with (u_j) in ^i, namely the two edges from the set {e_i, (u_1), e_(u_q), j}∪{e_(u_d), (u_d+1)| d ∈ [q-1]} that are incident with (u_j). This implies that f^1, _(u_j), and f^2 appear consecutively in S^i. Since this holds for every j ∈ [q], the sequence L or its reverse is a subtrail of S^i. As before, we may assume that L is a subtrail of S^i. Now we can obtain of _D('_1 ⊔^i-1) ⊎^i by taking S^i and replacing L with e^1, e_(v), (w), e^2 (see <ref>): By (<ref>) and (<ref>), it indeed contains all edges of _D(_1' ⊔^i-1) ⊎^i. In this case, for both directions of the claim we still were able to replace two subtrails e^1, e_(v), (w), e^2 and L with each other. In the remaining two cases, this will not be true anymore so we will work with different pairs of subtrails in the “forward” and “backward” directions of the proof (the details follow). For the remainder of the proof we may assume that at least one end-vertex of P is a glue-vertex, say v. Observe the following: since v is an end-vertex of a path P of non-zero length, its degree in _2 is exactly one. The vertex v is a glue-vertex so by <ref> the degree of (v) in _D(_2) is one as well. Since _1 ⊔_2 is a path packing containing v, we have __1 ⊔_2(v) ≤ 2. Recall that _1 is maximal in H_1 so it contains v. Together with __2(v) = 1 this implies __1(v) ≤ 1. Thus, the vertex v is an end-vertex of some path in the path packing _1. So this also holds for the family _1 ⊔^i. By <ref>, this also holds for _1' ⊔^i and therefore, also for _1'. This observation also implies that the path in _1 ⊔^i with end-vertex v has non-zero length if and only if this holds for _1 ⊔^i. If w is a glue-vertex as well, the symmetric property holds. Case 3.2 In this case, we assume that w is not a glue-vertex. Let P̂ be the path in _1 ⊔^i-1 containing P as a subpath. Then w is an end-vertex of P̂. Let r ∈ [k] denote the label of the other end-vertex of P̂. Note that r = (v) is possible. Our path P is a suffix or a prefix of P̂. Without loss of generality, we assume that P is a suffix of P̂: otherwise, we could take the reverse of P̂ instead. Let P̂ v denote the prefix of P̂ ending in v. Then the following holds _1 ⊔^i = ((_1 ⊔^i-1) - P̂) ∪̇{P̂v, {u_1}, …, {u_q}} and _D(_1 ⊔^i-1) = (_D(_1 ⊔^i) - e_r, (w)) ∪̇ {e_r, (v), _(u_1), …, _(u_q)}. As before, we may assume that in T^i-1 the edge e_r, (w) is traversed from r to (w). So let e be the blue edge following e_r, (w) in T^i-1 and let b ∈ [k] be such that (w) and b are the end-vertices of e. For simplicity of notation, we set u_0 = v. We define ^i = (^i-1 - e) ∪̇ { e_(u_d), (u_d+1)| d ∈ [q - 1]_0}∪̇{e_(u_q), b}. Then we can construct T^i by taking T^i-1 and replacing the subtrail e_r, (w), e with a sequence L where L = e_r, (v), e_(v), (u_1), _(u_1), e_(u_1),(u_2), _(u_2), …, e_(u_q-1),(u_q), _(u_q), e_(u_q), b (see <ref> (a)). Note that T^i indeed uses all edges of _D(_1 ⊔^i) ⊎^i (see (<ref>) and (<ref>)). For the other direction, we consider S^i of _D(_1' ⊔^i) ⊎^i. 
Above we have argued that v is an end-vertex of some path, say P^*, in _1' ⊔^i, and for every j ∈ [q], the vertex u_j forms a zero-length path in _1'. Let r' ∈ [k] be such that r' and (v) are the labels of the end-vertices of P^*. Note that r' = (v) is possible. Then it holds that _1' ⊔^i-1 = ((_1' ⊔^i) - {P^*, {u_1}, …, {u_q}}) ∪̇{P^* ⊔ P} and therefore, also _D(_1' ⊔^i-1) = (_D(_1' ⊔^i) - e_r', (v) - _(u_1) - … - _(u_q)) ∪̇ {e_r', (w)}. Now observe the following. Since v is a glue-vertex and v ∈_1' holds, there is exactly one edge, say e^*, incident with (v) in _D(_1' ⊔^i). And this edge is a loop if and only if P^* = {v} holds. So the degree of (v) in ^i is one if this edge is not a loop and two if it is. Now we can make some observations about the occurrence of e^* in S^i. First, consider the case that P^* consists of v only. Above we have argued that then in _1 ⊔^i there is also a path consisting of v only (and in particular, we have r = (v)). The existence of implies that in this case there are exactly two blue edges in ^i incident with (v) so they must precede and follow e^* in S^i. Crucial is that these edges are the same (possibly their order is swapped) that follow and precede e_t, (v) in T^i. The same holds for edges incident with a vertex u_j for every j ∈ [q]. Now consider the case that P^* has non-zero length, i.e., (v) has the red degree of exactly one in _D(_1' ⊔^i). Since S^i is , the vertex (v) also has the blue degree of exactly one in ^i. Now we may again assume that e^* is traversed from r' to (v) in S^i. Then the edge following e^* in S^i is the unique blue edge incident with (v) in ^i, i.e., the same blue edge follows e_t, (v) in T^i. Altogether, this implies that a sequence L' is a subtrail of S^i where L' = e_r', (v), e_(v), (u_1), _(u_1), e_(u_1), (u_2), _(u_2), … e_(u_q-1), (u_q), _(u_q), e_(u_q), b. Hence, it can be replaced with a sequence e_r', (w), e (see <ref> (b)) to obtain of _D(_1' ⊔^i-1) ⊎^i-1 (see (<ref>) and (<ref>) to verify that every edge is indeed used exactly once). Case 3.3 Now we remain with the case where both v and w are glue-vertices. For simplicity of notation, let us denote u_0 = v and u_q+1 = w. By Case 1.1 we may assume that v ≠ w holds. Since both v and w are glue-vertices, it also holds that (v) ≠(w). Let P̂ again be the path in _1 ⊔^i-1 that contains P as a subpath. We may assume that v occurs on both P and P̂ before w: otherwise we may use the reverse of the violating path instead. Let r ∈ [k] resp. s ∈ [k] be the label of the start- resp. end-vertex of P̂. Then it holds that _1 ⊔^i = ((_1 ⊔^i-1) - P̂) ∪̇{P̂ v, {u_1}, …, {u_q}, w P̂} where P̂ v denotes the prefix of P̂ ending in v and w P̂ denotes the suffix of P̂ starting in w. So we have _D(_1 ⊔^i) = (_D(_1 ⊔^i-1) - {e_r, s}) ∪̇{e_r, (v), _(u_1), …, _(u_q), e_(w), s}. We define ^i = ^i-1∪̇{e_(u_d) (u_d+1)| d ∈ [q]_0 }. Then we can obtain T^i by taking T^i-1 and replacing the red edge e_r, s with a subtrail L where L = e_r, (v), e_(v), (q_1), _(u_1), e_(u_1), (u_2), _(u_2), …, e_(u_q), (w), e_(w), s (see <ref> (a)). Note that by (<ref>) and (<ref>), the trail T^i indeed contains all edges. Now we prove the other direction. Let S^i be of _D(_1' ⊔^i) ⊎^i. First, to show that _D('_1 ⊔^i-1) is well-defined, we prove that '_1 ⊔^i-1 is a path packing by showing its acyclicity (above we explained why this needs to be proven in this case only). Above we have argued that for every j ∈ [q] the vertex u_j forms a path in _1' ⊔^i. 
Thus, the only edge incident with the vertex (u_j) in _D(_1' ⊔^i) is a loop. So the only two blue edges incident with (u_j) in ^i appear right before and after this loop in S^i. Thus, the sequence Q = e_(v), (u_1), _(u_1), e_(u_1), (u_2), _(u_2), …, e_(u_q), (w). (or its reverse) is a subtrail of S^i. As before we may assume that Q is a subtrail of S^i. Let e^1 resp. e^2 be the red edges preceding resp. following e_(v), (u_1) resp. e_(u_q), (w) in S^i. And let P_v' resp. P_w' be the path in _D(_1' ⊔^i) with an end-point v resp. w (above we have argued that such a path exists). Let r' ∈ [k] resp. s' ∈ [k] be such that r' and (v) resp. (w) and s' are the labels of end-vertices of P_v' resp. P_w'. Observe that we have e^1 = e_r', (v) and e^2 = e_(w), s' since v and w are vertices with unique labels in D. To prove the claimed acyclicity, suppose it holds that e^1 = e^2. Then S^i consists exactly of the following edges S^i = (e^1 = e^2), e_(v), (u_1), _(u_1), e_(u_1), (u_2), _(u_2), …, e_(u_q), (w) and we have r' = (w) and s' = (v). This implies that first, P_v' = P_w' (since v and w are vertices of unique labels) and second, the path packing _1' ⊔^i consists of a path from v to w and paths {u_1}, …, {u_q} only. Therefore, the multigraph _D(_1 ⊔^i) consists of loops at (u_1), …, (u_q) and an edge between (v) and (w). The degree sequences of _D(_1 ⊔^i) and _D('_1 ⊔^i) coincide (recall <ref>) so the degree sequences of _1 ⊔^i and '_1 ⊔^i coincide as well (recall <ref>). But then both _1 ⊔^i and {P} contain a path from v to w (where v and w are distinct glue-vertices) so (_1 ⊔^i) ⊔{P} = _1 ⊔^i-1 contains a cycle – this contradicts the fact that the graph _1 ⊔^i-1) (a subgraph of a path packing _1 ⊔_2) is a path packing. Hence, it holds that e^1 ≠ e^2 and therefore, P_v' ≠ P_w'. Now we have: _1' ⊔^i-1 = ((_1' ⊔^i) - {P_v', {u_1}, …, {u_1}, P_w'}) ∪̇{P_v' ⊔ P ⊔ P_w'}. Since the paths P_v' and P_w' are distinct, in particular, this implies that _1' ⊔^i-1 is acyclic and therefore, it is a path packing. We then have _D(_1' ⊔^i-1) = (_D(_1' ⊔^i) - e_r', (v) - _(u_1) - … - _(u_q), e_(w), s') ∪̇{e_r', s'}. Above we have argued that the sequence L' is a subtrail of S^i where L' = e_r', (v), e_(v), (u_1), _(u_1), e_(u_1), (u_2), _(u_2), …, e_(u_q), (w), e_(w), s'. Now this subtrail of S^i can be replaced by the red edge e_r', s' (see <ref> (b)) to obtain T_i-1' of _D(_1' ⊔^i-1) ⊎^i-1 (see (<ref>) and (<ref>) to verify that every edge is indeed used exactly once). This concludes the proof of the claim. Applied to i = t, the claim implies the existence of a blue multigraph ^t with the following two properties. First, the multigraph _D(_1 ⊔^t) ⊎^t admits a red-blue Eulerian trail T^t. Since ^t = ∅, we obtain that _D(_1 ⊔^t) ⊎^t =_D(_1) ⊎^t = _H_1(_1) ⊎^t admits a red-blue Eulerian trail T^t. Then the property _1 ≲_H_1Π(H_1) implies that there exists a maximal path packing _1' ∈_1 such that _H_1(_1) ⊎^t = _D(_1) ⊎^t admits , say S^t. Then the second part of the claim applied to i = t, …, 1 implies that the multigraph _D(_1' ⊔^0) ⊎^0 = _D(_1' ⊔^2) ⊎ admits a . Now, a symmetric argument implies that there also exists a maximal path packing _2' ∈_2 such that _D(_1' ⊔_2')⊎ admits a (and in particular, _1' ⊔_2' is a path packing so the auxiliary graph is well-defined). By definition of S, we obtain that _1' ⊔_2' belongs to S. 
Altogether, we have shown that for every blue multigraph and every maximal path packing ∈Π(D), if _D() ⊎ admits , then there exists a maximal path packing ' ∈ S of D such that _D() ⊎ also admits , i.e., we have S ≲_D Π(D) as desired. Computing S in time (|_1| |_2|) is trivial: we iterate over all pairs _1 ∈_1 and _2 ∈_1, compute _1 ⊔_2 in time polynomial in the size of H and then check whether this subgraph is acyclic. Now we are ready to provide the algorithm solving the problem parameterized by fusion-width. Given a fuse-k-expression of a graph H, the problem can be solved in time n^𝒪(k). First, by <ref>, in time polynomial in the size of the given fuse-k-expression and k, we can compute a reduced glue-k-expression ϕ of H whose size is polynomial in the size of H and k. Let e = uv be an arbitrary but fixed edge of H. In the following, our algorithm will decide, whether H admits a Hamiltonian cycle containing e. Then, by checking this for every edge of H, we can solve . First, we slightly transform ϕ into a reduced glue-(k+2)-expression ξ of H such that the root of ξ is a join-node that creates the edge e only. For this we proceed as follows. For the simplicity of notation, let i_u = k+1 and i_v = k+2. First, all leaves of ϕ with title u (resp. v) are replaced with u ⟨ i_u ⟩ (resp. v ⟨ i_v ⟩). After that, we iterate through all join-nodes t. Let i, j ∈ [k] be such that t is a η_i, j-node. If the vertex u belongs to G^ϕ_t and the label j_u of u in G^ϕ_t is equal to i (resp. j), we add a new η_i_u, j-node (resp. η_i_u, i-node) right above t. Similarly, if the vertex v belongs to G^ϕ_t and the label j_v of v in G^ϕ_t is equal to i (resp. j), we add a new η_i_v, j-node (resp. η_i_v, i-node) right above t. After processing all nodes, we add a new η_i_u, i_v-node above the root making it a new root. The process takes only polynomial time. By construction, this expression still creates the graph G. Moreover, it still satisfies the first two properties of a reduced glue-k-expression: no join-node creates an already existing edge and the glued graphs are always edge-disjoint (see <ref> for a formal definition). After that, we proceed as in the proof of <ref> to ensure that the last property of a reduced glue-(k+2)-expression is still satisfied. If we look into that proof, we note that this transformation does not change the root so the join-node creating the edge e is still the root. We denote the arisen glue-(k+2)-expression by ξ. By x we denote the child of the root of ξ. Note that since u and v now have unique labels, the node x creates the edge e only so we have G^ξ_x = H - e. Now given the result of Bergougnoux et al. for introduce-, join-, and relabel-nodes <cit.> as well as our <ref>, we can traverse ξ bottom-up to compute a set _x of partial solutions of G^ξ_x such that _x ≲_G^ξ_xΠ(G^ξ_x). Namely, we start with the leaves and then given a set (or sets) of partial solutions representing the set (resp. sets) of all partial solutions of the child (resp. children) of the current node, say y, we first compute a set of partial solutions of y representing all partial solutions of G^ξ_y and then apply the _G^ξ_y-operation to ensure that the number of partial solutions kept at each step is bounded by n^𝒪(k). We emphasize that the first two properties of reduced a glue-(k+2)-expression (see <ref>) ensure that every edge is created exactly once. So the expression is “irredundant” in terms of clique-width and therefore, the procedures for introduce-, relabel-, and join-nodes by Bergougnoux et al. 
are still correct <cit.>. Now we show to decide whether H admits a Hamiltonian cycle using the edge e given the set _x. We claim that this is the case if and only if _x contains a maximal path packing consisting of a single path P with end-vertices u and v. One direction is almost trivial: Recall that ∈_x ⊆Π(G^ξ_H) and G^ξ_H is a subgraph of H so P is a path in H. Since V(H) = V(G^ξ_x) and is maximal, the path P contains every vertex of H and together with the edge e it forms a Hamiltonian cycle of H. For the other direction, let H contain a Hamiltonian cycle using the edge e, let P' denote the path connecting u and v on this cycle and not using the edge e. Let ' denote the path packing of H consisting of P' only. Note that since we consider a Hamiltonian cycle, the path P' contains every vertex of H so ' is indeed a maximal path packing of H. Moreover, since it does not contain the edge e, ' is a path packing of G^ξ_x as well, i.e., ' ∈Π(G^ξ_x). Since the vertices u and v have unique labels and they are never relabeled, the red graph _G^ξ_x(') on the vertex set [k+2] has a single edge and this edge has end-points i_u and i_v. Consider a blue multigraph on the vertex set [k+2] whose single edge is between i_u and i_v. Trivially, the multigraph _G^ξ_x(') ⊎ admits . Then the property _x ≲_G^ξ_xΠ(G^ξ_x) implies that there is a path packing ∈_x such that _G^ξ_x() ⊎ admits as well. Since consists of a single blue edge between i_u and i_v, the red multigraph _G^ξ_x() consists of a single red edge between i_u and i_v. Again, recall that u resp. v are the unique vertices with label i_u resp. i_v so the path packing consists of a single path P containing all vertices from V(G^ξ_x) = V(H) (i.e., maximal) with end-points u and v. Recall that the number of partial solutions kept at each node is bounded by n^𝒪(k) due to the application of the reduce operator after processing every node. By <ref>, a glue-node is processed in time polynomial in the number of partial solutions kept for its children, i.e., in n^𝒪(k). Also by the results of Bergougnoux et al. <cit.>, the remaining nodes can also be handled in time n^𝒪(k). Finally, recall that a reduced glue-(k+2)-expression contains a polynomial number of nodes. So the algorithm runs in time n^𝒪(k). Fomin et al. have also shown the following lower bound: <cit.> Let H be an n-vertex graph given together with a k-expression of H. Then the Hamiltonian Cycle problem cannot be solved in time f(k) · n^o(k) for any computable function f unless the ETH fails. Since any k-expression of a graph is, in particular, its fuse-k-expression, the lower bound transfers to fuse-k-expressions as well thus showing that our algorithm is tight under ETH. Let H be an n-vertex graph given together with a fuse-k-expression of H. Then the Hamiltonian Cycle problem cannot be solved in time f(k) · n^o(k) for any computable function f unless the ETH fails. § ALGORITHMS PARAMETERIZED BY MULTI-CLIQUE-WIDTH In this section we show that for many problems we can obtain algorithms parameterized by multi-clique-width with the same running time as known SETH-tight (and for Chromatic Number even an ETH-tight) algorithms parameterized by clique-width. Due to the relation <ref>≤ + 1 ≤ + 1, the obtained running times are then (S)ETH-tight for all three parameters. For these problems, we will use known algorithms for the clique-width parameterization and describe what adaptations are needed to handle multiple labels per vertex. 
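In the multi-clique-width setting the only representational change is that the labeling assigns each vertex a set of labels instead of a single one. The following two helpers fix one possible encoding of this (a dictionary mapping each vertex to its set of labels) and restate the label class U_i and the relabel operation ρ_{i→S} as described in the text: label i is replaced by the labels in S while all other labels of the vertex are kept. The encoding and names are our own and serve illustration only.

```python
def label_class(label_sets, i):
    """U_i: the set of vertices currently holding label i."""
    return {v for v, L in label_sets.items() if i in L}

def apply_relabel(label_sets, i, S):
    """rho_{i -> S}: every vertex holding label i loses i and gains all labels in S
    (its other labels are untouched); vertices without label i are unchanged."""
    return {v: ((L - {i}) | set(S)) if i in L else set(L)
            for v, L in label_sets.items()}
```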
First, we make some simple observations to restrict ourselves to simpler expressions. Let H be a k-labeled graph, let r ∈_0, and let i, s_1, …, s_r ∈ [k]. If i ∈{s_1, …, s_r}, then it holds that ρ_i →{s_1, …, s_r} (H) = ρ_i →{i, s_r}∘…∘ρ_i →{i, s_1} (H) and if i ∉{s_1, …, s_r}, then it holds that ρ_i →{s_1, …, s_r} (H) = ρ_i →∅∘ρ_i →{i, s_r}∘…∘ρ_i →{i, s_1} (H). We also have 1 ⟨ s_1, …, s_r ⟩ = ρ_s_1 →{s_1, s_r}∘…∘ρ_s_1 →{s_1, s_2}∘ 1 ⟨ s_1 ⟩ if r > 0 and 1 ⟨∅⟩ = ρ_1 →∅(1 ⟨ 1 ⟩). Therefore, by first applying the above rules to all relabel-nodes whose right side has size one or more than two and then suppressing relabel-nodes of the form ρ_i →{i}, we may restrict ourselves to relabel-nodes that either remove some label i or add a label j to every vertex with label i. Similarly, using the last two equalities we may assume that every introduce-node uses exactly one label. Note that the length of the multi-k-expression increases by at most a factor of k after these transformations. Also we can assume that for every join-node, the joined label sets are non-empty. Finally, we may reduce the number of nodes in a multi-k-expression to polynomial as follows. First, similarly to clique-expressions, we may assume that every union-node is followed first by a sequence of join-nodes, then by a sequence of relabel-nodes, and then by a union-node (if one exists). Then we may assume that between any two consecutive union-nodes, there are at most k^2 join-nodes: at most one per pair i, j ∈ [k]. Now we sketch how to achieve that we also have at most k relabel-nodes between two consecutive union-nodes, namely at most one per possible left side i ∈ [k]. Suppose there are two distinct nodes x_1 and x_2 being ρ_i → S_1- and ρ_i → S_2-nodes, respectively, for S_1, S_2 ⊆ [k]. We choose x_1 and x_2 so that there are no further ρ_i → S'-nodes between them. If no vertex has label i right before the application of x_2, we simply suppress x_2. Otherwise, observe that for all vertices that had label i before x_1, this label was replaced by S_1 (with possibly i ∈ S_1). So every vertex that has label i right before x_2 got this label at some relabel-node on the path from x_1 to x_2 (including x_1). Therefore, for every ρ_j → S-node x on this path with i ∈ S, we replace the operation in x with ρ_j → (S ∖{i}) ∪ S_2. And after that we suppress x_2. Note that this is correct since no ρ_i → S'-node occurs between x_1 and x_2. By repeating this process, we obtain that for each i there is at most one ρ_i → S-node between any two consecutive union-nodes. As for a clique-expression, the leaves of a multi-expression are in bijection with the vertices of the arising graph (i.e., there are at most n leaves). And since union-nodes are the only nodes with more than one child, there are at most 𝒪(n) union-nodes. Finally, the above argument implies that there are at most 𝒪(k^2 n) relabel- and join-nodes. Let ϕ be a multi-k-expression of a graph H on n vertices. Then, given ϕ, in time polynomial in |ϕ| and k we can compute a multi-k-expression ξ of H such that there are at most 𝒪(k^2 n) nodes and for every node t of ξ the following holds. If t is a ρ_i → S-node for some i ∈ [k] and S ⊆ [k], then we have S = ∅ or S = {i, j} for some j ≠ i ∈ [k]. If t is a η_i, j-node for some i ≠ j ∈ [k], then we have U^t_i ≠∅ and U^t_j ≠∅. And if t is a 1⟨ S ⟩-node for some S ⊆ [k], then we have |S| = 1. For the remainder of this section we assume that an expression has this form.
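The identities above give a purely mechanical normalization pass. The sketch below transcribes them: a general relabel ρ_{i→S} is expanded into "add one label" steps plus a final "remove i" step when i ∉ S, and an introduce-node 1⟨S⟩ is expanded into a single-label introduce followed by such additions. The tuple encoding of elementary operations ("introduce"/"relabel") is our own and only serves illustration; since all added labels are attached to the same source label, the order of the addition steps is irrelevant.

```python
def expand_relabel(i, S):
    """Decompose rho_{i -> S} into elementary relabels rho_{i -> {i, j}} and,
    if i is not in S, a final rho_{i -> {}} (sketch of the identities above).

    Returns the elementary operations in the order they are applied.
    """
    ops = [("relabel", i, frozenset({i, s})) for s in S if s != i]
    if i not in S:
        ops.append(("relabel", i, frozenset()))   # finally drop label i
    return ops

def expand_introduce(S):
    """Decompose 1<S> into a single-label introduce-node followed by elementary
    relabels, again following the identities above."""
    S = sorted(S)
    if not S:
        # 1<{}> = rho_{1 -> {}} ( 1<1> )
        return [("introduce", 1), ("relabel", 1, frozenset())]
    first, rest = S[0], S[1:]
    return [("introduce", first)] + [("relabel", first, frozenset({first, s}))
                                     for s in rest]
```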
Now we show how existing algorithms for clique-width can be adapted to achieve the same running time for multi-clique-width. §.§ Dominating Set In the Dominating Set problem, given a graph G = (V, E) we are asked about the cardinality of the smallest set S ⊆ V with N_G[S] = V. Bodlaender et al. have developed a (4^) algorithm <cit.>. The idea behind it is to store a pair of Boolean values for every label: the first value reflects whether the label set contains a vertex from the partial solution while the second reflects whether all vertices of the label set are dominated. Crucially (as for most problems handled below), the algorithm does not make use of the fact the every vertex holds exactly one label. So we can use almost the same algorithm to process a multi-k-expression. The procedures for introduce-, join-, and union-nodes can be reused from Bodlaender et al. And it remains to handle relabel-nodes: in this case, the state of every single vertex remains the same and we only need to represent the states with respect to the new labeling function. For a ρ_i →∅-node, all vertices of label i are now dominated and no vertex belongs to the dominating set while the other labels remain unaffected. And for a ρ_i →{i, j}-node, first, all vertices of label j are dominated iff this was true for labels i and j before the relabeling; and second, a vertex of label j belongs to a partial solution iff this was the case for the label i or the label j before; the state of other labels (in particular, of the label i remains). We omit a formal description since analogous ideas occur several times in the next problems. This yields an (4^) algorithm. Katsikarelis et al. have proven the matching lower bound for clique-width which then also applies to multi-clique-width <cit.>. Let G be a graph given together with a multi-k-expression of G. Then Dominating Set can be solved in time (4^k). Unless SETH fails, this problem cannot be solved in time ((4 - ε)^k) for any ε > 0. §.§ Chromatic Number In the Chromatic Number problem, given a graph G = (V, E) we are asked about the smallest integer q such that there exists a proper q-coloring of G, that is, a mapping ϕ V → [q] such that ϕ(u) ≠ϕ(v) for all uv ∈ E. Kobler and Rotics have developed an algorithm that given a graph and a k-expression of this graph, solves the Chromatic Number problem in time f(k) · n^2^𝒪(k) <cit.>. Later, Fomin et al. have proven the ETH-tightness of this result by showing that under ETH, there is no algorithm solving this problem in f(k) · n^2^o(k) even if a k-expression of the graph is provided <cit.>. The algorithm by Kobler and Rotics is based on dynamic programming. The records are of the form N: 2^[k]∖{∅}→ [n]_0 and for a node t of a k-expression, a mapping N is a record if there exists a proper coloring c: V(G_t) → of G_t such that for every subset ∅≠ S ⊆ [k] of labels, we have |{j ∈|∀ i ∈ S U^t_i ∩ c^-1(j) ≠∅, ∀ i ∈ [k] ∖ S U^t_i ∩ c^-1(j) = ∅}| = N[S]. Simply speaking, N[S] reflects how many colors are there that occur at each label from S but at no further label. Let T_t denote the set of records at the node t. Let r denote the root of the k-expression. In the end, their algorithm outputs the smallest number q such that there exists a record N in T_r with ∑_S ⊆ [k] N[S] = q. This sum is exactly the number of colors used by a corresponding coloring since for every color, there exists a (unique) set S of labels on which this color is used. 
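The record of a fixed coloring is easy to compute directly from this definition: for each color, collect the set of labels on which it occurs and then count colors per occupied label set. The sketch below does exactly that; labels are given as a set per vertex so that the same code also covers the multi-labeled setting, and all names and the representation are ours. Note that a color living only on vertices without any label would silently disappear from such a record, which is exactly the subtlety addressed next; summing N over all S gives the number of used colors only if every vertex holds at least one label.

```python
from collections import defaultdict

def coloring_record(labels, coloring, k):
    """Record N of a coloring in the sense of Kobler and Rotics (sketch).

    labels:   dict vertex -> set of labels (a singleton in the clique-width setting)
    coloring: dict vertex -> color
    Returns a dict mapping each non-empty frozenset S of labels to N[S], the number
    of colors occurring on every label in S and on no label outside S.
    """
    labels_of_color = defaultdict(set)
    for v, c in coloring.items():
        labels_of_color[c] |= set(labels[v])
    N = defaultdict(int)
    for c, occupied in labels_of_color.items():
        if occupied:                     # colors on unlabeled vertices are not recorded
            N[frozenset(occupied)] += 1
    return dict(N)
```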
For the multi-clique-width setting, there is a small hurdle: it might happen that at some point a vertex loses all its labels and we might thus “forget” that such a vertex also uses some color. To overcome this issue, given a multi-k-expression ϕ of a graph G, we create a multi-(k+1)-expression of the same graph by replacing every 1⟨ i ⟩-node (for i ∈ [k]) with a 1⟨ i, k+1 ⟩-node. After that, we apply the simple transformations described earlier to achieve that the expression satisfies <ref>. This ensures that at every sub-expression, every vertex has at least one label (namely k+1), so for any record N, the value ∑_∅≠ S ⊆ [k+1] N[S] still reflects the total number of colors used by the corresponding coloring. Apart from that, their algorithm never uses that a vertex holds exactly one label and therefore can be easily adapted to multi-clique-width as follows. Introduce-, join-, and union-nodes can be adopted from the original algorithm by Kobler and Rotics. And it remains to handle relabel-nodes. First, let t be a ρ_i →∅-node for some i ∈ [k+1] and let t' be its child. For every record N' of t', we create a record N such that for every ∅≠ S ⊆ [k+1], we have: N[S] = 0 if i ∈ S, and N[S] = N'[S] + N'[S ∪{i}] if i ∉ S. Then we add N to the set of records of t. It is straightforward to verify that this process yields exactly the set of records of t. Second, let t be a ρ_i →{i, j}-node for some i ≠ j ∈ [k+1] and let t' be its child. We may assume that i is non-empty at t' since t can be suppressed otherwise. For every record N' of t', we create a record N as follows. If the label j is empty in t', then for all ∅≠ S ⊆ [k+1] we set N[S] = N'[S'] where S' is obtained from S by swapping the roles of i and j, i.e., S' = S if i, j ∉ S; S' = S if i, j ∈ S; S' = (S ∖{i}) ∪{j} if i ∈ S and j ∉ S; and S' = (S ∖{j}) ∪{i} if i ∉ S and j ∈ S. Otherwise, we may assume that j is non-empty in t'. Then for every ∅≠ S ⊆ [k+1], we set: N[S] = 0 if i ∈ S and j ∉ S; N[S] = N'[S] + N'[S ∖{j}] if i ∈ S and j ∈ S; and N[S] = N'[S] if i ∉ S. Again, it is easy to verify these equalities. In the end, as in the original algorithm, we output the smallest number q for which there is a record N in T_r with ∑_∅≠ S ⊆ [k+1] N[S] = q. The process runs in time n^2^𝒪(k) for every node and since the number of nodes can be assumed to be polynomial, the whole algorithm has the same running time. Fomin et al. showed that under ETH the problem cannot be solved in time f(k) · n^2^o(k) for any computable function f. Since multi-clique-width lower-bounds clique-width, this also applies in our case. Let G be a graph given together with a multi-k-expression of G. Then the Chromatic Number problem can be solved in time f(k) · n^2^𝒪(k). Unless ETH fails, this problem cannot be solved in time f(k) · n^2^o(k) for any computable function f. §.§ q-Coloring In the q-Coloring problem, given a graph G = (V, E) we are asked about the existence of a proper q-coloring of G, that is, a mapping ϕ: V → [q] such that ϕ(u) ≠ϕ(v) for all uv ∈ E. Now we sketch the main idea behind the SETH-tight ((2^q - 2)^k) algorithm for the q-Coloring problem parameterized by clique-width by Lampis <cit.>. The naive algorithm traversing a clique-expression would store, for every label, the set of colors occurring on the vertices of that label, and would thus have a running time of ((2^q)^k). But Lampis observed that two states can be excluded. First, the empty set of colors occurs at some label if and only if this label is empty, so such information is trivial and there is no need to store it. 
Second, if some label i contains all q colors and later the vertices of this label obtain a common neighbor v, then such a coloring cannot be proper since the color of v also occurs on the label i. Therefore, the set of all colors can only appear on a label that would not get any new common neighbors (and in particular, this label does not participate in any join later). This led Lampis to the notion of a so-called live label. A label is live at some node t of the expression if it contains a vertex which will later get an incident edge (and hence, all vertices in the label set will get a common neighbor). In particular, a live label is non-empty. For multi-clique-width we will follow a similar idea, however we need to slightly adapt the notion of a live label. For motivation, we provide the following example. Let q = 3 and consider a multi-3-labeled graph on four vertices with label sets {1, 2}, {3}, {1}, and {1}. Suppose this graph is edgeless, and a partial solution colors these vertices so that the three vertices holding label 1 receive three pairwise distinct colors (see <ref>). And now an η_2, 3-operation occurs. Although all three colors appear on label 1 and a vertex of label 1 now gets a neighbor, this does not make the partial solution invalid. So although the label 1 is live as defined by Lampis, it can still hold all colors. The reason is that the edge creation happens due to some other label held by a vertex with label 1. This motivates the following definition. We say that a label i is active at a node t of a multi-k-expression ϕ if U^t_i is non-empty and there exist labels a ≠ b ∈ [k] and an η_a, b-node t' such that: * The node t' is an ancestor of t; * Let t_1, …, t_m be all inner relabel-nodes on the path from t to t' (in the order they occur on this path) and let s_1, …, s_m ∈ [k] and S_1, …, S_m ⊆ [k] be such that for every j ∈ [m], the node t_j is a ρ_s_j → S_j-node. Then it holds that a ∈σ_m ( σ_m-1 ( … (σ_1({i}) ) ) ) where for T ⊆ [k] we set σ_j(T) = T if s_j ∉ T, and σ_j(T) = (T ∖{s_j}) ∪ S_j if s_j ∈ T; * The set U^t'_b is non-empty. Informally speaking, the set σ_m ( σ_m-1 ( …σ_1({i}) ) ) contains the labels to which the label i has been relabeled on the way to t'. Then, for any label a from this set, all vertices with label i in t carry the label a in t', so the join-node t' creates a common neighbor for these vertices. By ℓ(t) we denote the set of all active labels at node t. With this definition, we are now ready to provide an algorithm that, given a graph and its multi-k-expression, solves the q-Coloring problem in time ((2^q - 2)^k) by keeping track of the sets of colors used by every active label. By <ref>, we may assume that we are given a multi-k-expression of H satisfying the properties of the lemma. We will follow the dynamic programming by Lampis, show how to handle relabel-nodes, and observe that the approach for introduce-, join-, and union-nodes can be adopted from his work without changes. By 𝒞 we denote the set 2^[q]∖{∅, [q]} of relevant sets of colors; then we have |𝒞| = 2^q - 2. The dynamic programming table A_t is indexed by 𝒞^ℓ(t), i.e., by the assignments of relevant color sets to active labels. And for f ∈𝒞^ℓ(t), the value A_t[f] is the number of proper q-colorings of G_t such that for every active label i, the coloring uses on U^t_i exactly the colors f(i). Now let t be a ρ_i →∅-node with i ∈ [k] and let t' be its child. Observe that the label i is then inactive in both t and t'. The remaining labels are not affected, so we simply have ℓ(t) = ℓ(t') and A_t = A_t'. 
Next let t be a ρ_i →{i, j}-node for i, j ∈ [k] and let t' be its child. We may assume that i ≠ j holds since otherwise t can be suppressed. Now we have to make a case distinction based on the activity of labels i and j. Case 1: Assume that i is inactive in t'. Then by the definition of activity, both labels i and j are inactive in t. Other labels are not affected, so we again have ℓ(t) = ℓ(t') and A_t = A_t'. Case 2: From now on, we assume that i is active in t'. Then by definition, at least one of the labels i and j is active in t. Case 2.a: Assume that both i and j are active in t. Then label j is either active (and then ℓ(t) = ℓ(t')) or empty in t' (then ℓ(t) = ℓ(t') ∪̇{j} by definition). We compute the entries of the table A_t via an auxiliary table B_t indexed by 𝒞^ℓ(t) as follows. First, all entries of B_t are initialized with zeros. After that we iterate over all footprints f ∈𝒞^ℓ(t'). Let f' be such that for all p ∈ℓ(t) we have: f'(p) = f(p) if p ≠ j; f'(p) = f(i) ∪ f(j) if p = j and j is active in t'; and f'(p) = f(i) if p = j and j is empty in t'. If j is active in t' and f(i) ∪ f(j) = [q], then we skip f: the property that j is an active label implies that all vertices of this label will get a common neighbor, so all q-colorings with such footprints will be invalidated. Otherwise, we increase the value B_t[f'] by the value A_t'[f]. This process requires a number of arithmetic operations linear in the number of entries of A_t'. As a result, the entries of B_t coincide with A_t: indeed, in this process we have simply recomputed the footprints of every q-coloring and eliminated the colorings that would be invalidated later in the expression anyway. Case 2.b: Assume that i is active in t and j is inactive in t. Then by definition j is inactive in t' as well. Then we simply have ℓ(t) = ℓ(t') and A_t = A_t' since the underlying graph has not been changed. Case 2.c: Assume that i is inactive in t and j is active in t. Then j is either active (and then ℓ(t) = ℓ(t') ∖{i}) or empty (then ℓ(t) = (ℓ(t') ∖{i}) ∪{j}) in t'. This case is similar to Case 2.a and is handled as follows. We compute the entries of the table A_t via an auxiliary table B_t indexed by 𝒞^ℓ(t) as follows. First, we initialize all values of B_t with zeros. After that we iterate over all footprints f ∈𝒞^ℓ(t'). Let f' ∈𝒞^ℓ(t) be such that for all p ∈ℓ(t) we have: f'(p) = f(p) if p ≠ j; f'(p) = f(i) ∪ f(j) if p = j and j is active in t'; and f'(p) = f(i) if p = j and j is empty in t'. If j is active in t' and f(i) ∪ f(j) = [q], then we skip f: the property that j is an active label implies that all vertices of this label will get a common neighbor, so all q-colorings with such footprints will be invalidated. Otherwise, we increase the value B_t[f'] by the value A_t'[f]. This process requires a number of arithmetic operations linear in the number of entries of A_t'. As a result, the table B_t contains the same entries as A_t and the argument is analogous to Case 2.a. This concludes the procedure for relabel-nodes. Now let t be an η_i, j-node for some i ≠ j ∈ [k] and let t' be its child. If i were inactive in t', then i would be empty in t' and the join-node t could be suppressed. So we may assume that i, and similarly j, is active in t'. Now we can proceed the same way as Lampis, so we only sketch it very briefly. Some of the labels i and j may become inactive in t, so let I = ℓ(t') ∖ℓ(t) ⊆{i, j}. Then we set A_t[f] = 0 if f(i) ∩ f(j) ≠∅, and A_t[f] = ∑_c ∈𝒞^I A_t'[f × c] if f(i) ∩ f(j) = ∅. 
The first case reflects that every q-coloring of G_t in which labels i and j share a color is not proper. In the second case, we keep the information about the coloring and store it with the correct footprint. Crucial here is that although we use a different notion of active labels (compared to live labels by Lampis), the equality remains the same. As we see next, the same holds for union-nodes so that their fast processing can be adopted from Lampis. Let t be a union-node and let t_1 and t_2 be its children. Observe that we have ℓ(t) = ℓ(t_1) ∪ℓ(t_2). Moreover, if some label i ∈ [k] is active in t but not in t_1, then i is empty in t_1 so the set of colors used on it in any coloring is empty. This is the property that ensures that the approach of Lampis is correctly applicable in our case. The approach relies on fast subset convolution and can be described as follows. He first computes the entries B_t_1[f] and B_t_2[f] of auxiliary tables such that for label i, the value f(i) provides the set of colors allowed to be used on the label i, i.e., an upper bound on the set of colors instead of the exact value. Then the analogous table B_t for the node t can be computed as pointwise multiplication of B_t_1 and B_t_2 and finally, the reverse procedure is used to compute A_t from B_t. We refer to the paper by Lampis for all details <cit.>. The whole process requires the number of arithmetic operations linear in the number of entries of A_t. Now we are able to handle all types of nodes of a multi-k-expression and compute the table A_r where r denotes the root of the expression. Observe that no label is active in r so this table contains a unique entry equal to the number of proper q-colorings of G_r = H and this approach even solves the counting version of the problem. Each of the considered tables has at most (2^q - 2)^k entries and as argued above, at each node we carry out a number of arithmetic operations linear in the size of the table. Each entry is bounded by q^n (the largest possible number of q-colorings of H) and the number of nodes in the expression is polynomial in the size of H and k so the total running time is bounded by ((2^q - 2)^k) as claimed. Lampis also showed that his algorithm is tight under SETH and since multi-clique-width lower-bounds clique-width, this also applies in our case. Let G be a graph given together with a multi-k-expression of G. Then q-Coloring can be solved in time ((2^q - 2)^k). Unless SETH fails, this problem cannot be solved in time ((2^q - 2 - ε)^k) for any ε > 0. §.§ Connected Vertex Cover In the Connected Vertex Cover problem, given a graph G = (V, E) we are asked about the cardinality of the smallest set S ⊆ V such that G[S] is connected and for every edge uv ∈ E we have u ∈ S or v ∈ S. Although the Connected Vertex Cover problem seems to be very different from q-Coloring at first sight, for most parameterizations known (e.g., <cit.>), the SETH-tight algorithms rely on the Cut&Count technique introduced by Cygan et al. <cit.>. Very briefly they show that to decide whether a graph admits a connected vertex cover of certain size, it suffices to count the number of pairs (L, R) where L ∪̇R is a vertex cover of the desired size and there are no edges between L and R. This very rough description hides some technical details like fixing a vertex in L, assuming that a vertex cover of minimum weight is unique via Isolation lemma etc. (see <cit.>). 
But this shows that on a high level, solving this problem reduces to counting the number of colorings of the graph with colors L, R, and N (which stands for not being in the vertex cover) where LR- and NN-edges are forbidden: the former condition forbids edges between L and R and the latter ensures that every edge of the input graph has at least one vertex in the vertex cover L ∪ R. Hegerfeld and Kratsch employ this observation to obtain a (6^k) algorithm for Connected Vertex Cover similar to the above algorithm for q-Coloring by Lampis. There are 8 subsets of {L, R, N}; however, the empty set of colors can only occur on an empty label, while if a label contains all colors, then joining this label with some other label would necessarily lead to an LR- or an NN-edge <cit.>. Thus, stated in our terms, a label containing all colors from {L, R, N} is necessarily inactive. Therefore, it suffices to keep track of only six possible combinations of colors that may occur on an active label. These observations together with a sophisticated convolution at union-nodes yield their (6^k) algorithm. For us, two properties are crucial. First, neither the definition of relevant colorings nor the procedure at any node type uses the fact that any vertex holds only one label. The second property is more technical and requires a closer look at the algorithm by Hegerfeld and Kratsch. For correctness of the algorithm, they assume that a k-expression of a graph is irredundant, namely no edge of the graph is created by multiple join-nodes. It is folklore that any k-expression can be transformed into an irredundant k-expression of the same graph in polynomial time. Unfortunately, we do not know whether an analogous statement holds for multi-k-expressions. Also, instead of active labels as we define them in the previous section, they use the live labels as defined by Lampis <cit.>. However, a closer look at their procedure for the union-node and its correctness reveals that it still works if an expression is not necessarily irredundant but we work with active labels instead of live labels. Namely, they only use irredundant k-expressions to ensure that whenever there is a join-node η_i,j, the vertices of label i will later get a new common neighbor. We observe that the expression does not need to be irredundant for this: even if some edges incident with the label i and created by the join-operation are already present in the graph, the vertices of label i will still share a neighbor after this join and therefore, they cannot use all colors. With this observation, we can adapt the algorithm by Hegerfeld and Kratsch. All node types apart from relabel-nodes can be handled in the same way, while the handling of relabel-nodes is analogous to the previous subsection. Hegerfeld and Kratsch also showed that their algorithm is tight under SETH and since multi-clique-width lower-bounds clique-width, this also applies in our case. Let G be a graph given together with a multi-k-expression of G. Then the Connected Vertex Cover problem can be solved in time (6^k). The algorithm is randomized; it does not return false positives and returns false negatives with probability at most 1/2. Unless SETH fails, this problem cannot be solved in time ((6 - ε)^k) for any ε > 0. §.§ Connected Dominating Set Another problem for which Hegerfeld and Kratsch provided an algorithm is Connected Dominating Set <cit.>. In the Connected Dominating Set problem, given a graph G = (V, E) we are asked about the cardinality of the smallest set S ⊆ V such that G[S] is connected and N_G[S] = V. 
As for some other parameterizations (e.g., <cit.>), the algorithm is based on a combination of Cut&Count and the inclusion-exclusion approach. However, the idea behind the reduction of the number of states is different from Connected Vertex Cover or q-Coloring: for this problem, there is usually (e.g., <cit.>) a state, called Allowed, allowing edges to any other state. So unlike the previous algorithms, if some label i contains all states and it is joined with a label j containing only Allowed vertices, no conflict occurs. Instead, they unify multiple combinations of states into the same state and then show that the precise state combination does not matter if one is interested in counting the solutions modulo 2. We rely on the main idea behind this algorithm; however, multiple minor changes need to be carried out, so we provide our whole algorithm for multi-clique-width here. We will also try to emphasize the parts that are different from the original algorithm by Hegerfeld and Kratsch and argue why they are needed. Now we define a consistent cut of a graph to state the Cut&Count result for Connected Dominating Set and use it as a black box later. Let G be a graph, let v^* be an arbitrary but fixed vertex of G, and let ω: V(G) → [2|V(G)|] be a weight function. We say that (L, R) is a consistent cut in G if the following properties hold: v^* ∈ L, the sets L and R are disjoint, and there are no edges between L and R in G. For c ∈ℕ_0 and w ∈ℕ_0 with |L ∪ R| = c and ω(L ∪ R) = w, we say that (L, R) has weight w and cardinality c. By 𝒞^c, w_G we denote the family of all consistent cuts of cardinality c and weight w in G. We also denote 𝒟^c, w = {(L, R) ∈𝒞^c, w_G | L ∪ R is a dominating set of G}. The Cut&Count result for Connected Dominating Set by Cygan et al. <cit.> can be stated as follows: Let G = (V, E) be a graph, v^* ∈ V a fixed vertex, ω: V →[2|V|] a weight function, and c ∈ [|V|]_0, w ∈[2|V|^2]_0 integers. If there exists an algorithm that, given the above input, computes the size of 𝒟^c, w modulo 2 in time (T) for some computable non-decreasing function T, then there exists a randomized algorithm that, given a graph G, solves the Connected Dominating Set problem in time (T). The algorithm cannot give false positives and may give false negatives with probability at most 1/2. So from now on we concentrate on the computation of |𝒟^c, w| mod 2 given a multi-k-expression ϕ of G. First, we may assume that ϕ satisfies <ref>. To simplify the algorithm, at the top of ϕ we insert k relabel-nodes: a node ρ_i →∅ for each i ∈ [k]. Clearly, this does not change the underlying graph, so with a slight abuse of notation, we still denote the arising expression by ϕ. Following Hegerfeld and Kratsch, we define the following families of partial solutions. We say that (A, B, C) is a subpartition of G if A ∪ B ∪ C ⊆ V(G) and A, B, C are pairwise disjoint. For a node t of ϕ and c, w ∈ℕ_0, by 𝒫^c, w_t we denote the family of all subpartitions (L, R, F) of G_t such that (L, R) ∈𝒞^c, w_G_t and there is no edge between L ∪ R and F. We call such subpartitions partial solutions of size c and weight w. Let us emphasize the key difference to the partial solutions by Hegerfeld and Kratsch: they distinguish between live and dead labels and additionally require that L ∪ R dominates all vertices of dead labels. We will take care of domination at the very end. Now we introduce the signatures of partial solutions as defined by Hegerfeld and Kratsch. Unlike their work, our signatures are over all labels instead of live labels. 
First, for a subpartition (L, R, F) of G_t and a label i ∈ [k], we define S^i_t(L, R, F) ⊆{𝐋, 𝐑, 𝐅} so that * 𝐋 ∈ S^i_t(L, R, F) iff L ∩ U^t_i ≠∅, * 𝐑 ∈ S^i_t(L, R, F) iff R ∩ U^t_i ≠∅, * 𝐅 ∈ S^i_t(L, R, F) iff F ∩ U^t_i ≠∅. As already mentioned before, Hegerfeld and Kratsch unify the subsets of {𝐋, 𝐑, 𝐅} that contain at least two elements into a single state, which we denote by 𝐌 here, so the set of used states is 𝒮 = {{𝐋}, {𝐑}, {𝐅}, 𝐌, ∅}. A signature is a mapping f: [k] →𝒮. We say that a subpartition (L, R, F) of G_t is compatible with f if the following holds for every i ∈ [k]: * If |S^i_t(L, R, F)| < 2, then f(i) = S^i_t(L, R, F). * If |S^i_t(L, R, F)| ≥ 2, then f(i) = 𝐌. Observe that there exists exactly one signature with which (L, R, F) is compatible. For a signature f, we define 𝒫^c, w_t(f) = {(L, R, F) ∈𝒫^c, w_t | (L, R, F) is compatible with f} and B^c, w_t(f) = |𝒫^c, w_t(f)| mod 2. So our goal for now is to traverse a multi-k-expression bottom-up and compute the values B^c, w_t(f). In the end, we will summarize how to obtain the size of 𝒟^c, w modulo 2 from them in order to apply <ref>. In the following, we assume that the values c and w are reasonable, namely 0 ≤ c ≤ |V(G)| and 0 ≤ w ≤ 2|V(G)|^2. For values outside these ranges, we implicitly treat any B^c, w_t(f) as zero. In the following, by ≡ we denote equality in ℤ_2. First, let t be a 1 ⟨ i ⟩-node for some i ∈ [k] introducing a vertex v. Then it holds that B^c, w_t(f) = [v ≠ v^* ∨ f(i) = {𝐋}] · [(c = w = 0 ∧ f(i) ∈{∅, {𝐅}}) ∨ (c = 1 ∧ w = ω(v) ∧ f(i) ∈{{𝐋}, {𝐑}}) ] · [∀ j ∈ [k] ∖{i}: f(j) = ∅]. Next, let t be an η_i, j-node for some i ≠ j ∈ [k] and let t' be its child. Here, we can adopt the approach of Hegerfeld and Kratsch without changes, namely B^c, w_t(f) = feas(f(i), f(j)) · B^c, w_t'(f), where feas: 𝒮×𝒮→{0, 1} is given by the following table:

feas  | ∅  {𝐋}  {𝐑}  {𝐅}  𝐌
∅     | 1   1    1    1   1
{𝐋}   | 1   1    0    0   0
{𝐑}   | 1   0    1    0   0
{𝐅}   | 1   0    0    1   0
𝐌     | 1   0    0    0   0

In simple words, feas invalidates partial solutions with LR-, LF-, or RF-edges between i and j. Now let t be a ρ_i →∅-node for some i ∈ [k] and let t' be its child. This operation does not change the set of partial solutions of the arising graph, only their signatures, since the label set U^t_i becomes empty. So it holds that B^c, w_t(f) ≡ [f(i) = ∅] ·∑_s ∈𝒮 B^c, w_t'(f[i → s]). Similarly, let t be a ρ_i →{i, j}-node for i ≠ j ∈ [k] and let t' be its child. Again, the set of partial solutions remains the same but the signatures change: the signature at label j is now the union of the old signatures at labels i and j. Therefore, we have B^c, w_t(f) ≡∑_s ∈𝒮: merge(s, f(i)) = f(j) B^c, w_t'(f[j → s]) where merge: 𝒮×𝒮→𝒮 is defined by Hegerfeld and Kratsch as

merge | ∅    {𝐋}  {𝐑}  {𝐅}  𝐌
∅     | ∅    {𝐋}  {𝐑}  {𝐅}  𝐌
{𝐋}   | {𝐋}  {𝐋}   𝐌    𝐌   𝐌
{𝐑}   | {𝐑}   𝐌   {𝐑}   𝐌   𝐌
{𝐅}   | {𝐅}   𝐌    𝐌   {𝐅}  𝐌
𝐌     | 𝐌     𝐌    𝐌    𝐌   𝐌

It is easy to see that for the previous node types, all values B^c, w_t(f) for reasonable c and w can be computed in time (5^k). Finally, let t be a union-node and let t_1 and t_2 be its children. Then, informally speaking, every partial solution of G_t corresponds to a pair of partial solutions of G_t_1 and G_t_2 obtained by forming their pointwise union, where the signature at some label is the union of the signatures at this label in both partial solutions. Formally, we have B^c, w_t(f) ≡∑_c_1 + c_2 = c, w_1 + w_2 = w ∑_f_1, f_2: [k] →𝒮 with merge(f_1, f_2) = f B^c_1, w_1_t_1(f_1) · B^c_2, w_2_t_2(f_2), where the merge of two functions is componentwise, i.e., merge(f_1, f_2)(i) = merge(f_1(i), f_2(i)) for all i ∈ [k]. This equality is analogous to Hegerfeld and Kratsch with the only difference that in our case, the signatures keep track of all labels and not only the live ones. Similarly to their work, we may observe that there is only a polynomial number of reasonable tuples (c_1, c_2, w_1, w_2). 
Then we may iterate over all of them in polynomial time. Now we may assume that such a tuple is fixed and we aim to compute ∑_f_1, f_2: [k] →𝒮 with merge(f_1, f_2) = f B^c_1, w_1_t_1(f_1) · B^c_2, w_2_t_2(f_2) mod 2. By Lemma 4.6 in <cit.>, this can be accomplished in time (5^k). These equalities provide a way to compute, for all signatures f, the values B^c, w_r(f) where r denotes the root of ϕ. Any node is processed in time (5^k) and since we may assume that the number of nodes in ϕ is polynomial in k and |V(G)|, the values can be computed in time (5^k). At the beginning, we already mentioned that the transformation from the numbers B^c, w_r(f) to the value |𝒟^c, w| mod 2 has to be carried out differently for multi-clique-width. For clique-width, Hegerfeld and Kratsch do it labelwise: namely, they ensure that the vertices of a label are dominated once the label is not live anymore. This ensures that every vertex is processed exactly once like this. There are two issues related to this in our case. First, for this they rely on the existence of irredundant clique-expressions and we do not know whether this is true for multi-expressions. Second, transforming a vertex every time one of its labels is not active anymore would potentially lead to transforming this vertex multiple times, resulting in uncontrolled behaviour. To overcome these issues, we will carry out such a transformation at the very end. Let us note that although the procedure is different from the original work of Hegerfeld and Kratsch, the idea behind it remains the same. Recall that at the top of ϕ, we have a ρ_i →∅-node for every i ∈ [k]. Thus, we have U^r_i = ∅ for all i ∈ [k] and every partial solution of G_r = G has the signature f_∅: [k] →𝒮 where f_∅(i) = ∅ for every i ∈ [k]. So we have |𝒫^c, w_r| ≡ B^c, w_r(f_∅). Now we claim that |𝒟^c, w| ≡ |𝒫^c, w_r| holds. For simplicity of notation, let 𝒟'^c, w = { (L, R, ∅) | (L, R) ∈𝒟^c, w}. Clearly, it holds that |𝒟'^c, w| = |𝒟^c, w|, so it suffices to show that |𝒟'^c, w| ≡ |𝒫^c, w_r|. First, observe that 𝒟'^c, w⊆𝒫^c, w_r holds: indeed, for every element (L, R, ∅) of 𝒟'^c, w, the pair (L, R) is a consistent cut of G_r = G and the cardinality resp. weight of L ∪ R is c resp. w. Next we show that the cardinality of 𝒫^c, w_r ∖𝒟'^c, w is even. For this, consider an arbitrary but fixed pair (L, R) such that there exists F with (L, R, F) ∈𝒫^c, w_r ∖𝒟'^c, w. First, we claim that L ∪ R is not a dominating set of G = G_r. If F = ∅, then by the definition of these sets, the only reason for (L, R, F) to belong to 𝒫^c, w_r but not to 𝒟'^c, w is that L ∪ R is not a dominating set of G; on the other hand, if there is a vertex v ∈ F, then there is no edge between L ∪ R and v ∈ F, so v is undominated by L ∪ R. Let U = V(G) ∖ N_G[L ∪ R]. By our claim, the set U is non-empty. Then the sets F satisfying (L, R, F) ∈𝒫^c, w_r ∖𝒟'^c, w are exactly the subsets of U since there is no edge between L ∪ R and F. Therefore there are exactly 2^|U| such sets F. Recall that U is non-empty, so 2^|U| is even. Altogether, for every fixed pair (L, R) there exist either no or an even number of sets F with (L, R, F) ∈𝒫^c, w_r ∖𝒟'^c, w. So the size of 𝒫^c, w_r ∖𝒟'^c, w is indeed even. Altogether we obtain that |𝒟^c, w| = |𝒟'^c, w| ≡ |𝒫^c, w_r|. Thus, the above algorithm computes the size of 𝒟^c, w modulo 2 in time (5^k) and by <ref>, the Connected Dominating Set problem can also be solved in time (5^k). Hegerfeld and Kratsch also showed that their algorithm is tight under SETH and since multi-clique-width lower-bounds clique-width, this also applies in our case. 
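To make the state algebra above concrete, here is a small Python sketch (our own illustration; the frozenset encoding and names are ours, not from Hegerfeld and Kratsch) of the unification of states and of the tables feas and merge.

```python
# Illustrative sketch (not from the paper): the five states, encoded as
# frozensets over {"L", "R", "F"}; any set with two or more elements is
# collapsed to the mixed state M.
M = frozenset({"L", "R", "F"})
STATES = [frozenset(), frozenset({"L"}), frozenset({"R"}), frozenset({"F"}), M]

def unify(s):
    return M if len(s) >= 2 else s

def merge(s1, s2):
    # unified union of the two states, as in the merge table
    return unify(s1 | s2)

def feas(s1, s2):
    # 0 iff joining label sets in states s1, s2 creates an LR-, LF-, or RF-edge
    for a, b in (("L", "R"), ("L", "F"), ("R", "F")):
        if (a in s1 and b in s2) or (b in s1 and a in s2):
            return 0
    return 1

# e.g., feas({L}, M) = 0 and merge({L}, {F}) = M:
print(feas(frozenset({"L"}), M), merge(frozenset({"L"}), frozenset({"F"})) == M)
```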
Let G be a graph given together with a multi-k-expression of G. Then the Connected Dominating Set problem can be solved in time (5^k). The algorithm is randomized; it does not return false positives and returns false negatives with probability at most 1/2. Unless SETH fails, this problem cannot be solved in time ((5 - ε)^k) for any ε > 0. § CONCLUSION In this work, we studied two generalizations of clique-width, namely fusion-width and multi-clique-width, both introduced by Fürer <cit.>. First, we showed that the fusion-width of a graph is an upper bound for its multi-clique-width. For the other direction, the best upper bound we are aware of is that the fusion-width is at most exponential in the multi-clique-width, i.e., fw(G) ≤ 2^mcw(G), and we leave open whether this is tight. By extending existing algorithms for clique-width, we have obtained tight algorithms parameterized by multi-clique-width for Dominating Set, Chromatic Number, q-Coloring, Connected Vertex Cover, and Connected Dominating Set. The running times are the same as for (S)ETH-optimal algorithms parameterized by clique-width. For Hamiltonian Cycle, MaxCut, and Edge Dominating Set, we were not able to achieve analogous results and these complexities remain open. Instead, we have introduced glue-expressions, which are equivalent to fuse-expressions, and then employed them for these three problems to obtain tight algorithms parameterized by fusion-width with the same running times as the ETH-optimal algorithms for clique-width. Finally, in all algorithms we assume that a multi-k-expression / fuse-k-expression is provided. However, the complexity of computing these parameters is unknown. To the best of our knowledge, the best approximation would proceed via clique-width, have an FPT running time, and a double-exponential approximation ratio.
http://arxiv.org/abs/2307.10197v1
20230711173545
Solvable Neural Network Model for Input-Output Associations: Optimal Recall at the Onset of Chaos
[ "Tomoki Kurikawa", "Kunihiko Kaneko" ]
q-bio.NC
[ "q-bio.NC", "nlin.CD" ]
[email protected] Department of Physics, Kansai Medical University, Shinmachi 2-5-1, Hirakata, Osaka, Japan Department of Complex and Intelligent systems, Future University Hakodate, 116-2 Kamedanakano-cho, Hakodate, Hokkaido, Japan 041-8655 The Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, Copenhagen, 2100-DK, Denmark Center for Complex Systems Biology, Universal Biology Institute, University of Tokyo, Komaba, Tokyo 153-8902, Japan In neural information processing, an input modulates neural dynamics to generate a desired output. To unravel the dynamics and underlying neural connectivity enabling such input-output association, we proposed an exactly soluble neural-network model with a connectivity matrix explicitly consisting of inputs and required outputs. An analytic form of the response upon the input is derived, whereas three distinctive types of responses including chaotic dynamics as bifurcation against input strength are obtained depending on the neural sensitivity and number of inputs. Optimal performance is achieved at the onset of chaos, and the relevance of the results to cognitive dynamics is discussed. Solvable Neural Network Model for Input-Output Associations: Optimal Recall at the Onset of Chaos Kunihiko Kaneko August 12, 2023 ================================================================================================= Neural systems exhibit rich dynamics generated by strong recurrent connections<cit.>. For performing cognitive tasks in neural systems, sensory inputs modulate the neural dynamics to generate specific output patterns resulting in suitable behaviors. In the association task between the input signals and output choices, for instance, the signal modifies ongoing (spontaneous) neural dynamics, leading to the emergence of an appropriate attractor that guides the correct choice<cit.>, as strongly contrasted with input-output transformation in feed-forward networks<cit.> Unveiling the mechanisms behind such modulation and the type of connectivity relevant to it is essential for understanding information processing in neural systems. One widespread and powerful approach to understanding the information processing involves recurrent neural networks trained with machine learning techniques<cit.>. However, these trained networks are finely tuned for specific tasks, which masks the connectivity relevant to cognitive functions. There is a need for a simple model to bridge the gap between neural connectivity and neural dynamics in shaping the input/output transformations. Another approach, the auto-associative memory model, offers network connectivity explicitly represented by memorized patterns, as pioneered in the Hopfield network <cit.>. In this approach, different fixed-point attractors correspond to distinct memorized patterns, to which neural states converge, depending on their initial states. Thus, neural dynamics themselves are not modulated by the input. The role of spontaneous dynamics without input and the connectivity underlying the change in the dynamics to produce the output remain to be elucidated. In the present Letter, we propose a neural network model with a novel connectivity matrix designed to generate any memorized patterns when the associated inputs are applied with a certain strength. This connectivity is explicitly represented based on a set of input and output patterns. The recall to the input is given by the location of a fixed point for any strength of the input, which is analytically obtained in our model. 
Besides this fixed point, a chaotic attractor also emerges depending on the input strength, the gain parameter, and the number of memorized patterns. We obtain the phase diagram of distinct recall behaviors against these parameters, which demonstrates that the highest recall performance is achieved at the onset of chaos. Finally, computational roles of chaotic internal dynamics are discussed in possible relation to experimental observations. We consider a neural network model composed of N neurons. The network is required to generate target patterns ξ^μ (μ=1,2,…,M) in response to input patterns η^μ, where M = α N, and η^μ and ξ^μ are N-dimensional column vectors. Each element of these vectors takes a binary value (± 1) that is randomly generated according to the probability distribution P(ξ_i^μ = ± 1) = P(η_i^μ = ± 1) = 1/2. The neural activity x_i evolves according to the following equation: ẋ_i = tanh(β (Σ_j J_ij x_j + γη_i^μ)) - x_i, where β and γ are the gain of the activation function and the input strength, respectively. To memorize input/output maps between η and ξ, we have designed the connectivity matrix J so that it is composed of both η and ξ, in contrast to the Hopfield network that incorporates only ξ. Further, to mitigate the effects of potential interference across memories that could impair recall performance<cit.>, the designed connectivity is given via the pseudo-inverse of the target-input matrix X as follows: J = X ( [ I, I; -I, -I ] ) X^+, with X = [ξ^1,ξ^2,…,ξ^M,η^1,η^2,…,η^M], where I is the M-dimensional identity matrix, X is an (N,2M)-matrix, and X^+≜ (X^TX)^-1X^T is the pseudo-inverse of X, with X^T the transpose of X. Due to the pseudo-inverse, Jξ^μ + γη^μ = ξ^μ +(γ-1)η^μ and, consequently, the target ξ^μ is a fixed point under the input η^μ with γ=1 for β→∞, based on the properties of tanh(β x). This property applies to all μ, indicating that all ξ^μ are fixed points under the corresponding inputs with γ=1. In other words, all associations are successfully memorized in this model. For the pseudo-inverse to be well defined, however, the 2M vectors must be linearly independent, so their number cannot exceed N. As a consequence, at best M=N/2 associations are allowed and the memory capacity is bounded by α = 0.5 at maximum. How does the network recall in response to the input away from γ = 1 and β→∞? We now derive an analytical form of a fixed point of the neural dynamics upon input for any value of γ with finite β. For this, we consider x^fp(γ)=a(γ) ξ + b(γ) η and derive a(γ) and b(γ) that satisfy the fixed-point condition for any γ as follows. Below, the superscript μ is omitted for clarity unless otherwise noted, since the result does not depend on μ. By using Jξ=Jη=ξ-η, we have Jx^fp = (a+b)(ξ-η), and, subsequently, by substituting x^fp into ẋ=0 in Eq. <ref>, aξ+bη=f((a+b)(ξ-η) + γη), where f(x)=tanh(β x). Considering the i-th elements such that ξ_i equals η_i, a+b=f(γ) should be satisfied and, similarly, by considering the i-th elements such that ξ_i equals -η_i, a-b=f(2(a+b)-γ) should be satisfied. Thus we derive a and b as a=(f(γ)+f(2f(γ)-γ))/2, b=(f(γ)-f(2f(γ)-γ))/2, where a and b are uniquely determined and depend solely on γ for a given activation function f(x), while they are independent of N and α. It is straightforward to check that Eq. <ref> is satisfied for any ξ and η. Although not proven analytically, we have confirmed numerically that x^fp is a unique fixed point for given μ and γ. 
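As a quick numerical sanity check of this solution, the following sketch (our own illustration, not code from the paper; the parameter values are arbitrary and not taken from the paper's figures) builds J from random binary patterns and verifies that x^fp = a ξ + b η satisfies the fixed-point condition.

```python
# Minimal numerical check of the fixed point x^fp = a*xi + b*eta (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N, M = 200, 40                      # network size and number of patterns (2M <= N)
beta, gamma = 4.0, 0.7

xi  = rng.choice([-1.0, 1.0], size=(N, M))    # target patterns xi^mu
eta = rng.choice([-1.0, 1.0], size=(N, M))    # input patterns eta^mu
X = np.hstack([xi, eta])                      # N x 2M target-input matrix
B = np.block([[np.eye(M), np.eye(M)], [-np.eye(M), -np.eye(M)]])
J = X @ B @ np.linalg.pinv(X)                 # J = X [[I, I], [-I, -I]] X^+

f = lambda u: np.tanh(beta * u)
a = 0.5 * (f(gamma) + f(2.0 * f(gamma) - gamma))
b = 0.5 * (f(gamma) - f(2.0 * f(gamma) - gamma))

mu = 0
x_fp = a * xi[:, mu] + b * eta[:, mu]
# fixed-point condition: tanh(beta (J x + gamma eta)) - x = 0
residual = f(J @ x_fp + gamma * eta[:, mu]) - x_fp
print(np.max(np.abs(residual)))               # close to machine precision
```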
This rigorous solution is applicable not only to tanh(·) but also to any arbitrary function, as long as ξ and η are binary vectors. As γ increases from zero, a(γ) increases and takes a peak for γ=1, while b(γ) increases more slowly as plotted in Fig.<ref>A. For γ less than 2, a(γ) is larger than b(γ) and, oppositely, beyond γ = 2, b(γ) is larger than a(γ)[When β→∞, a=1 and b=0 for 0 < γ <2, while a=0 and b=1 for 2 < γ, which are obtained from Eqs. <ref> and <ref> ]. The overlap of x^fp with ξ, termed m ≜Σ_i x_i^fpξ_i/N, is also plotted in Fig. <ref>B. For a given β, m increases up to γ=1 and, subsequently, decreases. As β increases, the curve of m is steeper so that x^fp nearly equals 1 even for the weak input. The overlap of x^fp with η slowly increases with γ, followed by a sharp rise at γ≈ 2 beyond which it approaches unity, i.e., the network just outputs the input as it is (Fig. <ref>B). Thus, in the following part, we consider the range of 0 ≤γ≤ 2. Although x^fp is a fixed point for any value of parameters, it is necessary to ascertain its stability and the existence of other attractors, to assess whether the recall of x^fp really works from any initial states. We numerically solved Eq. (1) (N=2048, unless otherwise noted), and found another chaotic attractor in addition to x^fp. By varying α and β, three types of recall behaviors are observed depending on the stability of these two attractors, which are characterized by the distinct bifurcation diagrams of m against γ (as shown in Fig. <ref>A(i)-(iii)): (i) Stable recall of x^fp for any strength of the input: x^fp is a unique attractor for any γ. (ii) Stable recall of x^fp only for a certain range of γ: x^fp is a unique attractor for γ∼ 1, whereas for smaller γ the chaotic attractor appears, which exhibits a smaller overlap with ξ compared with the overlap of x^fp[x^fp and the chaotic attractor coexist for some range of parameters and do not otherwise. Even when they coexist, the neural states converge into the chaotic attractor for almost all initial states, as shown in Figs. <ref>C, <ref>C and S1C. Thus, the type of recall is independent of their coexistence.]. For smaller γ values, the neural state fails to converge into x^fp, and instead, it converges into the chaotic attractor from most initial states, meaning that the network fails to recall the target. Still, for γ∼ 1, the neural state from any initial state converges to x^fp whose overlap with the target is close to unity, resulting in the recall of the target. (iii) No stable recall of x^fp for any γ: the chaotic attractor exists across all ranges of γ, even though x^fp coexists around γ=1. The chaotic attractor has a much larger basin of attraction than x^fp even for γ∼ 1 (Fig. <ref>C). Consequently, the recall of the target is impaired. To analyze these three behaviors, we first explored the stability of x^fp and of the chaotic attractor across a range of β with a constant α=0.38. We found that for a small value of β (β=0.8), the stable recall (i) is achieved. The neural states from any initial states for any γ converge rapidly into x^fp (as shown in Fig. <ref>A), indicating high robustness in the success recall. However, the degree of overlap with the target is notably below the unity. As β increases, x^fp approaches the target for all ranges of γ. Beyond the critical β, denoted by β_F, x^fp turns to be unstable for a certain range of γ, while the chaotic attractor emerges, corresponding to the recall type (ii) as shown in Fig. <ref>A(ii). 
The overlap of the chaotic attractor with the target is much lower than that of x^fp. Although, for γ=1, x^fp is the unique attractor, there exists long-term transient chaos before the neural state converges into x^fp (see Fig. <ref>B). With the further increase in β, the range of γ within which the chaotic attractor exists expands, eventually, covering all the range of 0 ≤γ≤ 2 at another critical value of β (termed β_I). Beyond β_I, the system exhibits the recall type (iii). Even for γ=1, the basin of the chaotic attractor covers the full state space, and most orbits from random initial conditions converge into it (Fig. <ref>C). Thus, the recall of the target almost fails. To comprehensively understand the recall behavior across β and γ, we draw the regions where the chaotic attractor is present in Fig. <ref>B. We also investigated the stability of x^fp, which is, however, not directly related to the type of recall and is shown in Fig. <ref>. In the area above the curve, the chaotic attractor is present. β_F is the minimum value of the curve in β≤ 1, whereas β_I is the maximum value of β on the curve at γ=1. These two critical values of β determine the phase boundary of three recall behaviors (i)-(iii). With an increase in β, x^fp approaches the target, and accordingly, the final states in all the recall trials overlap almost perfectly with the target below β=β_I at which the chaotic attractor emerges (Fig. <ref>C). As β increases beyond β_I, the basin of the x^fp attractor shrinks, while that of the chaotic attractor expands. Consequently, the overlap between the final state and the target averaged over randomly chosen initial states significantly decreases, as depicted in Fig. <ref>C. Thus, the recall performance reaches its peak (i.e., at the onset of chaos) across all ranges of γ. So far, we presented the results with the fixed number of memories α N (α=0.38). Noting that standard associative memory models such as the Hopfield network, recall fails beyond a critical number of embedded memories. We next analyze the change in the recall process with increasing α and demonstrate it exhibits similar behavior to the change with the increases in β: Three types of recall behavior emerge as α varies, as shown in Fig. <ref>A. For small α, x^fp is stable and a unique attractor for any γ (type (i) ). With the increase in α, the chaotic attractor emerges within a certain range of γ (type (ii) ), and this range expands (Fig.<ref>B). Finally, the range within which the chaotic attractor is present covers all the ranges of γ (type (iii) ). In contrast to the clear change in the recall process with increasing β, the value of x^fp remains unchanged during the increase in α. We now focus on the behavior for γ=1 and explore the number of memories recalled successfully. We found that at α = α_C (β) the chaotic attractors emerge for all embedded patterns μ (α_C(β) is obtained by solving β=β_I (α)). For α < α_C, the fixed points x^fp,μ=(a ξ^μ + b η^μ) for all μ are stable and all the embedded patterns are successfully recalled, whereas for α>α_C, almost all the recall trials fail for all patterns due to the emergence of the chaotic attractors whose basins of attraction are much larger than those of x^fp,μ. The number of successfully recalled memories increases linearly below α=α_C(β) and then drops to zero drastically (see Fig. <ref>C,) signifying that α_C(β) N is the memory capacity in this model (e.g., α_C (4) = 0.38). 
α_C (β) decreases towards a certain finite value α_C(∞) with the increase in β, as analyzed in detail in the following. We finally show the phase diagram of the recall process against α and β by identifying β_F(α) and β_I(α) (α_C (β) is the inverse function of β_I (α)) as shown in Fig. <ref>D. As α approaches zero, β_F diverges, meaning that if α is set to a sufficiently small value, x^fp is stable for all γ even for quite large β. In this limit, the overlap approaches a step function of γ: m=1 for 0 < γ < 2 and m=0 otherwise. Consequently, the network perfectly recalls the target for 0 < γ < 2. β_I increases drastically as α decreases from 0.5 and diverges at α_C(∞). For α below α_C (∞), the neural state converges to x^fp for γ=1 even for β→∞. The asymptotic analysis demonstrates that α_C(∞) ∼ 0.340 for N →∞ (see Fig. <ref>), indicating that the memory capacity is α = 0.340 when β is sufficiently large. In summary, we present an analytically solvable neural network model for I/O associations in which each input stabilizes the target pattern as a unique fixed point, up to the limit of the memory capacity. The network connectivity is built from both the target and input patterns through the pseudo-inverse matrix, which allows for rigorous recall of arbitrary (correlated) target patterns. This is in contrast to our previous model<cit.>, valid only for mutually orthogonalized patterns. By using this model, we derive the response to the input as an analytical expression of the fixed point for any input strength, whereas the response dynamics were previously explored in random networks (without embedded patterns)<cit.> and low-rank networks<cit.>. We also demonstrate the emergence of the additional chaotic attractor numerically. Through exploration of the stability of these attractors, we identified three distinct recall processes. Introducing the pseudo-inverse matrix (X^+ in Eq. <ref>) into the connectivity generally requires global information about the network, which may be difficult to implement biologically (but see <cit.> for the Hopfield network). In our previous study<cit.>, however, a Hebbian and anti-Hebbian learning rule that only requires local information can shape connectivity similar to the present one. Still, filling the gap between the learning-shaped connectivity and the current connectivity requires further study. Here, we uncovered three phases of recall, concerning the dominance of the chaotic attractor. Interestingly, the recall performance is maximized at the onset of chaos, where the spontaneous chaotic activity bifurcates into the fixed point that corresponds to the target output. In fact, such transitions of the activities with changes in the stimuli are observed in many cortical areas<cit.>. These are consistent with our findings of the optimal performance under spontaneous chaotic dynamics, whereas the roles of the chaotic dynamics in the response and learning need to be further elucidated. Indeed, the relevance of spontaneous chaotic (and high-dimensional) dynamics to computational neuroscience has been discussed, for instance, in reservoir computing<cit.>, memory<cit.>, mixed selectivity for efficient separation<cit.>, sampling<cit.>, neural avalanches<cit.>, and learning<cit.>. Our study has demonstrated a new role of chaotic dynamics in recall performance. 
Although Hopfield networks<cit.> and their variants<cit.> have made great contributions to associative memory, the modulation of the internal dynamics by external input, which is essential for performing cognitive functions, has not been included. Our model presents a novel prototype connectivity underlying such modulation, which will advance our understanding of neural processing. § ACKNOWLEDGMENTS T.K. and K.K. are supported by JSPS KAKENHI (No.20H00123, T.T and K.K) and the Novo Nordisk Foundation (0065542, K.K). Supplemental Figures: Solvable Neural Network Model for Input-Output Associations: Optimal Recall at the Onset of Chaos
http://arxiv.org/abs/2307.04920v1
20230710220024
Enhanced Food Availability can Deteriorate Fitness through Excessive Scrounging
[ "Robin Vacus", "Amos Korman" ]
cs.GT
[ "cs.GT", "q-bio.PE" ]
Enhanced Food Availability can Deteriorate Fitness through Excessive Scrounging Robin VacusCNRS, located at the Research Institute on the Foundations of Computer Science (IRIF), Paris, France (e-mail: [email protected]).  and Amos KormanCNRS, located at the French-Israeli Laboratory on Foundations of Computer Science, UMI FILOFOCS, CNRS, UP7, TAU, HUJI, WIS International Joint Research Unit, Tel-Aviv, Israel (e-mail: [email protected]). =============================================================================================================================================================================================================================================================================================================================================================================== In group foraging situations, the conventional expectation is that increased food availability would enhance consumption, especially when animals prioritize maximizing their food intake. This paper challenges this conventional wisdom by conducting an in-depth game-theoretic analysis of a basic producer-scrounger model, in which animals must choose between intensive food searching as producers or moderate searching while relying on group members as scroungers. Surprisingly, our study reveals that, under certain circumstances, increasing food availability can amplify the inclination to scrounge to such an extent that it paradoxically leads to a reduction in animals' food consumption compared to scenarios with limited food availability. We further illustrate a similar phenomenon in a model capturing free-riding dynamics among workers in a company. We demonstrate that, under certain reward mechanisms, enhancing workers' production capacities can inadvertently trigger a surge in free-riding behavior, leading to both diminished group productivity and reduced individual payoffs. Our findings underscore the significance of contextual factors when comprehending and predicting the impact of resource availability on individual and collective outcomes. §.§ Braess's paradox, a thought-provoking result in game theory, demonstrates that in certain transportation networks, adding a road to the network can paradoxically increase traffic latency at equilibrium <cit.>. In a similar vein, our study demonstrates how improvement in underlying conditions, which may initially seem beneficial, can actually lead to degraded performance. However, instead of focusing on network flows as in Braess's paradox, our findings relate to contexts of productive groups, highlighting the impact of free-riding behavior. Productive groups, such as workers in a company, collaborating researchers, or ensembles of foraging animals, consist of individuals who not only benefit from their own resource generation or findings, but also enjoy the added advantage of reaping the rewards of others' contributions <cit.>. For example, workers in a company may receive performance-based bonuses as a reward for their productivity, while also benefiting from the collective production of their peers, through stock shares or other profit-sharing mechanisms. Similarly, in the realm of joint research projects, the success of the endeavor contributes to the collective prestige of the researchers, yet those who make substantial contributions often receive heightened recognition and prestige. 
Likewise, in group foraging scenarios, animals that first discover food patches often have an opportunity to feed before other group members join in, granting them the ability to directly consume a portion of the food they found and secure a larger share <cit.>. Within such productive group contexts, the pervasive occurrence of free-riding becomes apparent <cit.>. Free-riding refers to individuals exploiting collective efforts or shared resources without contributing their fair share. In team projects, for instance, free-riders neglect their assigned tasks to avoid costs or risks while still benefiting from the project's overall success <cit.>. This phenomenon is also remarkably prevalent in animal foraging contexts, where individuals opportunistically engage in scrounging or kleptoparasitism, feeding off prey discovered or captured by others <cit.>. The framework of Producer-Scrounger (PS) games is a widely used mathematical framework for studying free-riding in foraging contexts <cit.>. In a PS game, players are faced with a choice between two strategies: producer and scrounger. The interpretation of these strategies varies according to the context, but generally, a producer invests efforts in order to produce or find more resources, whereas a scrounger invests less in producing or finding resources, and instead relies more on exploiting resources produced or found by others. Based on the rules of the particular PS game, specifying the production and rewarding mechanisms, each animal chooses a strategy and the system is assumed to converge into an equilibrium state, where each animal cannot improve its own calorie intake by changing its strategy <cit.>. This paper examines the impact of individual production capacity on the resulting payoffs in equilibrium configurations. The first PS game we consider aims to model a scenario consisting of a group of foraging animals, with each animal striving to maximize its own food intake. Intuitively, as long as the group size remains unchanged, one may expect that, even if it may trigger more opportunistic behavior <cit.>, increasing food abundance should ultimately improve consumption rather than diminish it. Likewise, within a productivity-based reward system in a company, one may expect that enhancing individual productivity levels would boost group productivity and subsequently increase workers' payoffs, despite a possible increase in free-riding behavior. However, our findings uncover a more nuanced reality, unveiling a remarkably pronounced detrimental effect of free-riding behavior and emphasizing that the existence of such a positive correlation between individual productivity and payoffs is strongly contingent on the specific characteristics of the setting. §.§ Results We investigate two types of PS games: a foraging game involving animals searching for food and a company game involving a group of workers in a company. Our main objective is to analyze the effects of changes in individual production capabilities on players' payoffs, evaluated at equilibrium configurations. To facilitate comparisons across different parameter settings, we ensure that the games we examine have unique equilibrium configurations (see SI, <Ref>). We say that a PS game exhibits a Reverse-Correlation phenomenon if an increase in individuals' production capacities leads to a decrease in the players' payoff, when evaluated at equilibrium configurations (see sec:methods). We begin with the Foraging game, which is a generalization of the classical PS game in <cit.>. 
The main difference with the classical model is that our model considers two types of food, instead of a single type as previously assumed. The Foraging game. To illustrate our model, consider a scenario involving a group of animals engaged in fruit picking from trees (see <Ref>). Each animal aims to maximize its fitness, which is determined by the amount of food it consumes. The trees in this scenario contain both low-hanging fruit, accessible to both producers and scroungers, and high-hanging fruit, which can only be reached by producers. When an animal picks fruit, it retains a portion for its own consumption (let's say 70%), while the remaining fruit falls to the ground. Scroungers, instead of picking high-hanging fruit, focus on scanning the ground for fallen fruit. Fallen fruit is distributed equally among all scroungers and the animal that originally obtained it. More precisely, consider n≥ 2 animals, each of which needs to choose to be either a producer or a scrounger. We assume that a producer finds an amount of food corresponding to F_P = 1+γ calories, where, adhering to the trees example above, 1 corresponds to the amount of high-hanging fruit and γ is a parameter that governs the animal's access to low-hanging fruit. In contrast, a scrounger directly finds only low-hanging fruit, corresponding to F_S = γ calories. After finding food (of any type) consisting of F calories, the animal (either producer or scrounger) consumes a fraction s ∈ [0,1] of what it found (called the “finder's share”) and shares the remaining (1-s)F calories equally with all scroungers. See <Ref> for a schematic illustration of the structure of the foraging game. The payoff of a player is defined as its calorie intake. Hence, for each 0≤ k≤ n, the payoffs of the two pure strategies in the presence of exactly k producers in the population are: π_P^(k) = s(1+γ) + (1-s)(1+γ)/(1+n-k), and π_S^(k) = γ + k(1-s)(1+γ)/(1+n-k), where the second equation follows since scrounger-to-scrounger interactions compensate each other, and hence can be ignored in the expression of the payoff. Note that the classical model <cit.> is retrieved by setting γ=0, which essentially implies that there is only one type of food. We study what happens to the payoffs of players at equilibrium configurations, denoted by π_⋆, as we let γ increase (see Methods). This increase aims to capture the case that the low-hanging fruit becomes more abundant in the environment. Note that for each fixed k, both π^(k)_P and π^(k)_S are increasing in γ. Hence, simply increasing γ, without changing the strategy profile, necessarily results in improved payoffs. However, allowing players to modify their strategies after such a change may potentially lead to enhanced scrounging at equilibrium, which can have a negative impact on the payoffs. Nevertheless, as mentioned earlier, one might expect that this negative effect would be outweighed by the overall improvement in fruit availability, resulting in an increase in consumption rather than a decrease. This intuition becomes apparent when comparing the scenarios with γ=0 and γ=1. As γ increases from 0 to 1, we can expect an increase in the proportion of scroungers due to the rising ratio F_S/F_P = γ/(1+γ). However, even if the system with γ=1 ends up consisting entirely of scroungers, the average food consumption of a player (which equals 1) would still be at least as large as that of any strategy profile in the γ=0 case. 
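These payoff expressions can be explored numerically; the following sketch (our own illustration, not the paper's SI code) scans only pure-strategy profiles, whereas the paper's analysis concerns equilibrium configurations more generally, and the parameter values (n, s) are arbitrary. It locates stable producer counts and reports the mean intake over a grid of γ values.

```python
# Rough illustration (ours): pure-strategy equilibria of the Foraging game.
import numpy as np

def payoffs(k, n, s, gamma):
    """Payoffs (producer, scrounger) when exactly k players produce."""
    shared = (1.0 - s) * (1.0 + gamma) / (1.0 + n - k)
    return s * (1.0 + gamma) + shared, gamma + k * shared

def equilibrium_mean_payoff(n=4, s=0.3, gamma=0.5):
    """Mean payoff at a pure-strategy equilibrium; None if only mixed equilibria exist."""
    best = None
    for k in range(n + 1):
        pi_P, pi_S = payoffs(k, n, s, gamma)
        # no producer gains by switching to scrounging ...
        no_P_deviates = (k == 0) or pi_P >= payoffs(k - 1, n, s, gamma)[1]
        # ... and no scrounger gains by switching to producing
        no_S_deviates = (k == n) or pi_S >= payoffs(k + 1, n, s, gamma)[0]
        if no_P_deviates and no_S_deviates:
            best = (k * pi_P + (n - k) * pi_S) / n
    return best

for gamma in np.linspace(0.0, 1.0, 11):
    print(f"gamma={gamma:.1f}  mean payoff at equilibrium: {equilibrium_mean_payoff(gamma=gamma)}")
```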
Nonetheless, as shown here, upon closer examination within the interval γ∈[0,1], a different pattern is revealed. We combine simulations (<Ref>) with mathematical game-theoretical analysis (<Ref> in SI) to reveal a Reverse-Correlation phenomenon in the Foraging game. Specifically, for the case of n=3 players, we prove in <Ref> that for any finder's share s<1/2, there exists an interval of values for γ over which the Reverse-Correlation phenomenon occurs. In the case of 2 players, the Reverse-Correlation phenomenon does not happen over an interval, and instead, we prove in <Ref> that there exists a critical value of γ at which π_⋆ decreases locally. In our simulations, which focus on n=4 players, a noticeable decline is observed in the payoffs at equilibrium as γ increases over a relatively large sub-interval of [0,1] (<Ref>). The Company game. We consider a PS game aiming to model a scenario with a group of n≥ 2 workers of equal capabilities who collaborate to produce a common product for a company. (Alternatively, by replacing the salary received by a worker with a notion of prestige, the game can also capture a scenario where a group of researchers collaborate in a research project.) Each worker is assigned a specific part of the project and can choose between two pure behavioral strategies. A producer pays an energetic cost of c>0 units and with probability p produces a product of quality γ (otherwise, with probability 1-p, it produces nothing). In contrast, a scrounger pays no energetic cost and with probability p produces a product of lower quality γ'=aγ for some given 0≤ a<1 (similarly, with probability 1-p, nothing is produced). Let I={1,2,…,n}, and let q_i denote the quality of the product made by worker i, for i∈ I, with q_i=0 if no product is made by this player. We define the total production as: Γ=∑_i∈ I q_i. We assume that the salary σ_i of player i is proportional to a weighted average of the qualities of the products made by the workers, with more weight given to q_i. In fact, by appropriately scaling the system, we may assume without loss of generality that the salary is equal to this weighted average. Formally, we set: σ_i = s q_i + (1-s)/(n-1) ∑_j∈ I∖ i q_j, for some s ∈ [1/n,1]. Note that s=1 implies that the salary each worker receives is identical to the quality of its own production, and s=1/n represents equally sharing the quality of the global product between the workers. Next, we aim to translate the salary of a player into its payoff using a utility function, denoted by ϕ(·). These quantities are expected to be positively correlated; however, the correlation may in fact be far from linear. Indeed, this is supported by the seminal work of Kahneman and Deaton <cit.>, which found that the emotional well-being of people increases relatively fast as their income rises from very low levels, but then levels off at a certain salary threshold. To capture such a correlation, we assume that ϕ is monotonically non-decreasing, concave, and bounded. In addition, the payoff of worker i is decreased by its energetic investment. Finally, π_i:= ϕ(σ_i)-c_i, where the energetic investment c_i=c>0 if i is a producer and c_i=0 if i is a scrounger. See <Ref> for a schematic illustration of the structure of the game. The question of whether or not the system incurs a Reverse-Correlation phenomenon turns out to depend on the model's parameters, and, in particular, on the function ϕ(x).
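Before turning to specific choices of ϕ, the following minimal sketch (not the authors' implementation; all numeric values are hypothetical) shows how a strategy profile is mapped to expected payoffs in the Company game, for an arbitrary non-decreasing, concave, and bounded utility; here it is instantiated with ϕ(x) = 1 - exp(-2x), one of the utilities considered below.

# Exact expected payoffs in the Company game, averaging over all success/failure outcomes.
import itertools
import math

def expected_payoffs(producers, gamma, a, p, c, s, phi):
    """producers: tuple of booleans, True = producer.  Returns the expected payoff of each worker."""
    n = len(producers)
    qualities = [gamma if prod else a * gamma for prod in producers]
    payoffs = [0.0] * n
    # Each worker succeeds independently with probability p; enumerate the 2^n outcomes.
    for outcome in itertools.product([0, 1], repeat=n):
        prob = math.prod(p if o else 1 - p for o in outcome)
        q = [qual * o for qual, o in zip(qualities, outcome)]
        for i in range(n):
            salary = s * q[i] + (1 - s) / (n - 1) * (sum(q) - q[i])
            payoffs[i] += prob * phi(salary)
    return [pay - (c if prod else 0.0) for pay, prod in zip(payoffs, producers)]

if __name__ == "__main__":
    phi = lambda x: 1 - math.exp(-2 * x)   # one of the utilities studied below
    print(expected_payoffs((True, True, False, False),
                           gamma=1.0, a=0.5, p=0.5, c=0.05, s=0.6, phi=phi))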
For example, when ϕ : x ↦ x (i.e., the case that the salary is converted entirely into payoff), there is no Reverse-Correlation phenomenon (see Section <ref> in SI). However, for some concave and bounded functions ϕ(x) the situation is different. We combine mathematical analysis with simulations considering the function (see inset in <Ref>): ϕ(x) = 1-exp(-2 x). Our mathematical analysis proves the presence of a Reverse-Correlation phenomenon for the case of two workers (Theorem <ref> in the SI). Interestingly, this result holds for every s < 1, demonstrating that the Reverse-Correlation phenomenon can occur even when the payoffs of individuals are substantially biased towards their own production compared to the production of others. Our simulations consider the case of n=4 workers and reveal (Figure <ref>) that for certain parameters, letting γ increase over a range of values results in a reduction in payoffs in equilibrium, thus indicating a Reverse-Correlation phenomenon. Moreover, as γ increases over a range of values we also observe a substantial reduction in total production at equilibria. While the general shape of the utility function ϕ(x)= 1-exp(-2 x) is justifiable, the function itself was chosen somewhat arbitrarily. To strengthen the generality of our results, we also provide in the SI (<Ref>) simulations supporting the Reverse-Correlation phenomenon under another type of non-decreasing, concave, and bounded utility function, specifically, ϕ(x)= min(1,x). A necessary condition. Finally, we identify a necessary condition for the emergence of a Reverse-Correlation phenomenon in arbitrary PS models. Specifically, we prove (SI, <Ref>) that a Reverse-Correlation phenomenon can occur only if the definition of the producer's payoff is sensitive to the fraction of scroungers in the population. An interesting consequence of this condition is that a seemingly minor change in the definition of the Foraging game can prevent the occurrence of the Reverse-Correlation phenomenon. Recall that in this game, when an animal finds food, it consumes a fraction s of it (the finder's share), and the remaining 1-s fraction falls to the ground and is then equally shared between the animal and all scroungers. If the game is changed such that when a producer finds food, it only consumes the finder's share and does not eat at all from the food that falls on the ground (i.e., only the scroungers eat from it), then the game stops satisfying the aforementioned necessary condition. Indeed, in this case, the payoff of a producer would always be 1+γ irrespective of the number of scroungers. Hence, the modified game does not exhibit a Reverse-Correlation phenomenon, regardless of the parameters involved. §.§ Discussion In foraging contexts, it is commonly anticipated that an increase in food abundance would result in higher consumption, which, in turn, would lead to population growth over time. In contrast, this paper introduces the intriguing possibility of a reversed scenario: that under certain producer/scrounger conditions, if animals have sufficient time to update their producer/scrounger strategy and reach a stable configuration before reproducing <cit.>, then an increase in food abundance can paradoxically result in reduced consumption, which, in turn, can lead to a decline in population size! 
Note that this idea can also be viewed from the opposite perspective, namely, that by reducing food abundance, the inclination to scrounge can decrease, resulting in improved food consumption, ultimately leading to an increase in population size. The Reverse-Correlation phenomenon corresponds to a decrease in payoffs as underlying conditions improve. The counter-intuitive aspect of it stems from the fact that players aim to maximize their payoffs, yet when conditions improve, they are driven to perform worse. Another measure of interest is the total production, defined as the sum of production over all players (<Ref>). Observe that in the Foraging game, since the animals eventually consume all food found by the group, the total production (i.e., the total food found) at equilibrium is proportional to the payoff π_⋆, and hence their dynamics are similar. This implies that whenever an increase in γ results in a decrease in payoff at equilibrium (indicating a Reverse-Correlation phenomenon), the same increase in γ also leads to a decrease in total production at equilibrium. In contrast, in the Company game, production is not fully represented in the payoffs, since some of it is “lost” when translating salaries into utilities. Additionally, the distinction between payoffs and production is further emphasized due to the energetic cost incurred by producers, which is reflected in their payoffs. Despite this distinction, as observed in <Ref>, the measure of total production also exhibits a decrease across a range of γ values. This phenomenon may carry particular importance for system designers, such as the company's principal, as it challenges a fundamental assumption underlying bottom-up approaches, namely, that as long as the system naturally progresses without external disruptions, improving individual performances should lead to enhanced group performances. We demonstrated the Reverse-Correlation phenomenon on two basic game-theoretical models. As evidenced by these games, the occurrence of this counter-intuitive phenomenon is highly contingent on the specific details of the game. For example, the Foraging game considers two types of food: low-hanging and high-hanging fruit (instead of just one type as considered in the classical game in <cit.>). Only producers have access to high-hanging fruit, while both producers and scroungers can access low-hanging fruit. Similarly to the classical model, when an animal finds food, it consumes a portion s of it and the remaining 1-s portion is equally shared between this animal and all scroungers. The Reverse-Correlation phenomenon emerges as the abundance of low-hanging fruit increases. However, as we showed, if one modifies the model so that the remaining 1-s portion is shared only among the scroungers, then the system no longer exhibits a Reverse-Correlation phenomenon. Hence, while at first glance this change may appear minor, it has a profound impact on the dynamics. In the Company game, a key aspect of the model concerns the choice of the utility function, which captures the relationship between salary and payoff. Inspired by the work of Kahneman and Deaton <cit.>, we focused on non-decreasing, concave, and bounded utility functions. Within this family of functions, we identified two that exhibit a Reverse-Correlation phenomenon. However, we note that not all utility functions in this family enable this phenomenon. In conclusion, this paper uncovers a counter-intuitive phenomenon that can arise in productive group contexts involving rational players.
It reveals that under certain conditions, increasing individual production efficiency can paradoxically lead to diminished payoffs and overall group production, due to a significant rise in free-riding behavior. These findings provide valuable insights into the complex dynamics at play, underscoring the intricate relationship between individual and group performances, as well as the detrimental impact of free-riding behavior. Moreover, our results highlight the nuanced consequences of contextual factors in understanding and predicting the impact of increased (or decreased) resource availability on both individual and collective outcomes. §.§ Methods We consider two types of PS models, for which we combine analytic investigations with computer simulations. In both models, we assume that both producers and scroungers are able to produce, but that producers are expected to produce more. In our models, the payoffs and total production are positively correlated with the number of producers. We consider a parameter γ that is positively correlated with the expected production capacities of both producers and scroungers. To check what happens as individual capabilities improve, we increase γ and observe how the payoff and total production measures change for configurations at equilibrium. We focus on the strong definition of equilibria, known as an Evolutionarily Stable Strategy (ESS), using the standard definition introduced by Maynard Smith and Price <cit.>. Specifically, given a PS game, let qp denote the expected payoff of a player if it chooses to be a producer with probability q, in the case that all n-1 remaining players are producers with probability p. We say that p_⋆∈ [0,1] is an ESS if and only if for every q ∈ [0,1] such that q ≠ p_⋆, (i) either p_⋆p_⋆ > qp_⋆, (ii) or p_⋆p_⋆ = qp_⋆ and p_⋆q > qq. To be able to compare instances with different parameters, we make sure that for every value of γ, the game we consider always has a unique ESS, termed p_⋆(γ). In such a case, we write π_⋆(γ) for the payoff at the ESS, and omit the parameter γ when clear from the context. In our rigorous analysis, presented in the SI, we prove the existence and uniqueness of the ESS for the corresponding scenarios. To determine the ESS in our simulations, we utilize simple procedures that take the values of p and q as inputs and calculate qp. Then, we search for the specific value of p that satisfies (i) 1p = 0p, (ii) for every q<p, 1q > 0q and (iii) for every q>p, 1q < 0q, which together are sufficient conditions for p to be the unique ESS (see SI, <Ref>). Both the code used in the simulations and the code employed to generate the figures were implemented in Python. For further details and access to the code, please refer to <cit.>. We say that the system incurs a Reverse-Correlation phenomenon if increasing γ over a certain interval yields a decreased payoff when evaluated at (the unique) ESS. In other words, this means that π_⋆(γ) is a decreasing function of γ over this interval. Acknowledgements. The authors would like to thank Yossi Yovel, Ofer Feinerman, Yonatan Zegman and Yannick Viossat for helpful discussions. Supplementary Information § UNIQUENESS OF ESS The following sufficient condition for the existence and uniqueness of an ESS is well-known. We state and prove it below for the sake of completeness. If p_⋆∈ [0,1] is such that (i) 1p_⋆ = 0p_⋆, (ii) for every q<p_⋆, 1q > 0q and (iii) for every q>p_⋆, 1q < 0q, then p_⋆ is a unique ESS.
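As an aside before the proof, this is exactly the condition exploited by the numerical ESS search described in Methods; a minimal illustration for the Foraging game is sketched below. It is not the released code of <cit.>, it assumes (as established for this game in the next section) that the sign of the producer–scrounger payoff gap changes at most once as the production probability increases, and all parameter values are hypothetical.

# Sketch of the ESS search from Methods, illustrated on the Foraging game.
from math import comb

def payoff_producer(p, n, s, gamma):
    """Expected payoff of a producer when each of the other n-1 players produces w.p. p."""
    return sum(comb(n - 1, k) * p**k * (1 - p)**(n - 1 - k)
               * (s * (1 + gamma) + (1 - s) * (1 + gamma) / (n - k))
               for k in range(n))

def payoff_scrounger(p, n, s, gamma):
    """Expected payoff of a scrounger when each of the other n-1 players produces w.p. p."""
    return sum(comb(n - 1, k) * p**k * (1 - p)**(n - 1 - k)
               * (gamma + k * (1 - s) * (1 + gamma) / (1 + n - k))
               for k in range(n))

def find_ess(n, s, gamma):
    gap = lambda p: payoff_producer(p, n, s, gamma) - payoff_scrounger(p, n, s, gamma)
    if gap(0.0) <= 0.0:   # scrounging (weakly) better even when nobody else produces
        return 0.0
    if gap(1.0) >= 0.0:   # producing still (weakly) better when everybody produces
        return 1.0
    lo, hi = 0.0, 1.0     # the gap changes sign exactly once: bisect for the crossing
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gap(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    for gamma in (0.2, 0.6, 1.0):
        print(gamma, round(find_ess(n=4, s=0.3, gamma=gamma), 4))

We now turn to the proof of the above sufficient condition.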
By assumption (i), we have for every q ∈ [0,1]: qp_⋆ = q 1p_⋆ + (1-q) 0p_⋆ = p_⋆ 1p_⋆ + (1-p_⋆) 0p_⋆ = p_⋆p_⋆. Thus, to show that p_⋆ is an ESS, we need to check the second condition in the definition. We start by considering the case that q < p_⋆. By assumption (ii), it implies that 1q > 0q, so p_⋆q = p_⋆ 1q + (1-p_⋆) 0q > q 1q + (1-q) 0q = qq. Similarly, in the case that q > p_⋆, assumption (iii) implies that 1q < 0q, so p_⋆q = p_⋆ 1q + (1-p_⋆) 0q > q 1q + (1-q) 0q = qq. By the second condition in the definition, this implies that p_⋆ is an ESS. Finally, we prove the unicity property. Let p ≠ p_⋆. If p<p_⋆, then 1p > 0p by assumption (ii), and p<1. Therefore, 1p > p 1p + (1-p) 0p = pp. If p>p_⋆, then 1p < 0p by assumption (iii), and p>0. Therefore, 0p > p 1p + (1-p) 0p = pp. In both cases, p is not an ESS, which concludes the proof of <Ref>. § ANALYSIS OF THE FORAGING GAME The goal of this section is to prove <Ref>. Note that the theorem considers n=2,3. For the case of 3 players, the theorem states that as long as the finder's share satisfies s<1/2, there exists an interval of values for γ over which the Reverse-Correlation phenomenon occurs. In contrast, in the case of 2 players, the Reverse-Correlation phenomenon does not happen over an interval, and instead, there exists a critical value of γ at which π_⋆ decreases locally. In fact, this happens even when the finder's share is close to 1. Consider the Foraging game with γ≥ 0 and s<1. * If n=3, then for every γ≥ 0, there is a unique ESS. Moreover, for every s < 1/2, there exist γ_min, γ_max > 0 such that the payoff π_⋆(γ) (and hence, also the total production) at ESS is strictly decreasing in γ on the interval [γ_min,γ_max]. * If n=2, then for every γ≠γ_s, where γ_s = 1+s/1-s, there is a unique ESS. Moreover, π_⋆(γ) is increasing on [0,γ_s) and on (γ_s,+∞]. However, for every ϵ∈ (0,1/2), π_⋆(γ_s-ϵ)>π_⋆(γ_s+ϵ). §.§ Proof of Theorem <ref> Towards proving the theorem, we first establish the following lemma, which quantifies the expected payoffs of the two pure strategies, conditioning on other agents choosing to be producers with probability p. For every 0≤ p<1, 1p = s F_ + (1-s)F_·1-p^n/n(1-p), and 0p = F_ + (1-s)F_· p ·n(1-p)+p^n-1/n(1-p)^2. These expressions can be extended by continuity at p=1, giving 11 = F_ and 01 = F_ + (n-1)(1-s)F_/2. Fix a player i. Consider the case that Player i is a producer, and that each player j≠ i is a producer with probability p. Let X_p be the random variable indicating the number of scroungers in the population. By <Ref>, 1p = s F_ + (1-s) F_·1/1+X_p. By definition, X_p ∼(n-1,1-p). The first part of the claim, concerning 1p, now follows using <Ref>, that implies that (1/(1+X_p)) = 1-p^n/n(1-p). Now, consider the case that Player i is a scrounger, and that each player j≠ i is a producer with probability p. Let Y_p be the random variable indicating the number of producers in the population. By <Ref>, 0p = F_ + (1-s) F_·Y_p/1+n-Y_p. By definition, Y_p ∼(n-1,p). The second part of the claim, concerning 0p, now follows using <Ref>, that implies that Y_p/1+n-Y_p = Y_p/2+(n-1)-Y_p = p ·n(1-p)+p^n-1/n(1-p)^2. This completes the proof of Lemma <ref>. In order to characterize the (unique) ESS, we first define the following quantities: A(γ) = n(F_ - s F_)/(1-s)F_ = n(γ - s(1+γ))/(1-s)(1+γ), γ_1 = 2/(n-1)(1-s)-1, γ_2 = n/(n-1)(1-s)-1. We have A(γ_1) = -n(n-3)/2 and A(γ_2) = 1. First, we rewrite A(γ) = n(γ - s(1+γ))/(1-s)(1+γ) = n ·γ(1-s) - s/(1-s)(1+γ) = n ·1- 1/(1-s)(1+γ). 
Plugging in the definition of γ_1 and γ_2, we obtain A(γ_1) = n ·1- 1/2/n-1 = -n(n-3)/2 and A(γ_2) = n ·1- 1/n/n-1 = 1, as stated. Next, for every γ, the following result identifies the unique ESS.   (a) For every n ≥ 2, for every γ∈ [0,γ_1) ∪ (γ_2, +∞), there is unique ESS, termed p_⋆(γ), that satisfies p_⋆(γ) = 1 on [0,γ_1) and p_⋆(γ) = 0 on (γ_2,+∞). (b) for every n ≥ 3, for every γ∈ [γ_1,γ_2], there is unique ESS, termed p_⋆(γ). Moreover, p_⋆ is continuously differentiable on [γ_1,γ_2], p_⋆(γ_1) = 1 and p_⋆(γ_2) = 0. Define the following function for 0≤ p<1. f(p) = 1/1-p1-p^n/1-p - np . We next identify lim_p→ 1f(p). Function f can be extended to a continuous function at p=1 by setting f(1) = -n(n-3)/2. Let x = 1-p. Using Taylor expansion at x=0, we have: f(x) = 1/x1-(1-x)^n/x - n(1-x) = 1/xnx-n(n-1)/2 x^2 + o(x^3)/x - n(1-x) = 1/x n 1-n-12 x + o(x^2) - n(1-x) = n/x - n-32 x + o(x^2) = -n(n-3)/2 + o(x). Therefore, lim_p → 1 f(p) = -n(n-3)/2, which concludes the proof of the observation. To compute the ESS, we need to compare 1p and 0p. For every p ∈ [0,1], 1p > 0p f(p) > A(γ) and 1p < 0p f(p) < A(γ). By <Ref>, for every p ∈ [0,1], 1p > 0p s F_ + (1-s)F_·1-p^n/n(1-p) > F_ + (1-s)F_· p ·n(1-p)+p^n-1/n(1-p)^2 1-p^n/1-p - p ·n(1-p)+p^n-1/(1-p)^2 > n(F_ - s F_)/(1-s)F_. By definition, the right hand side is equal to A(γ). Let us rewrite the left hand side: 1-p^n/1-p - p ·n(1-p)+p^n-1/(1-p)^2 = 1/1-p (1-p) 1-p^n/1-p - np + p 1-p^n/1-p = 1/1-p1-p^n/1-p - np = f(p), which concludes the proof of the first equivalence in <Ref>. The second equivalence is obtained similarly. f is non-increasing in p. Moreover, if n ≥ 3, then f is strictly decreasing in p. First, consider the case that n=2. Then, f(p) = 1/1-p1-p^2/1-p - 2p = 1/1-p (1+p) - 2p = 1, so f is non-increasing. Now, consider the case that n ≥ 3. Let us write f(p) = u(p)/v(p), with u(p) = 1-p^n/1-p - np, v(p) = 1-p. We have u'(p) = -n p^n-1(1-p) + (1-p^n)/(1-p)^2 - n, v'(p) = -1, so u'(p) · v(p) = -n p^n-1 + 1-p^n/1-p - n(1-p), u(p) · v'(p) = - 1-p^n/1-p + np. Therefore, u'(p) · v(p) - u(p) · v'(p) = -n p^n-1 + 2 1-p^n/1-p - n = 2 1-p^n/1-p - n(1+p^n-1). Finally, f'(p) = u'(p) · v(p) - u(p) · v'(p)/v(p)^2 = - n(1-p)(1+p^n-1) - 2(1-p^n)/(1-p)^3. Let us define the ratio: g_0(p) = n(1-p)(1+p^n-1)/2(1-p^n). Next, we show that g_0 is strictly greater than 1. To this aim, we study g_0 by differentiating it several times. Define: g_1(p) = (n-1)p^n-2(1-p^2)+p^2n-2-1, g_2(p) = 2p^n-np^2+n-2, g_3(p) = -2np(1-p^n-2). Since n ≥ 3, g_3(p) < 0. We have g_2'(p) = g_3(p) < 0, so g_2 is strictly decreasing, and hence g_2(p) > g_2(1) = 0. We have g_1'(p) = (n-1)p^n-3 g_2(p) > 0, so g_1 is strictly increasing, and g_1(p) < g_1(1) = 0. Eventually, we have: g_0'(p) = n/2·-1-p^n-1+(n-1)p^n-2(1-p)·(1-p^n)+(1-p)(1+p^n-1)· n p^n-1/(1-p^n)^2 = n/2· -1-p^n-1+(n-1)p^n-2(1-p)+p^n+p^2n-1-(n-1)p^2n-2(1-p)+n(1-p)p^n-1+n(1-p)p^2n-2/(1-p^n)^2 = n/2· p^n-2 -p+(n-1)(1-p)+p^2+np(1-p) +p^2n-2-1 /(1-p^n)^2 = n/2·g_1(p)/(1-p^n)^2 < 0, so g_0 is strictly decreasing, and g_0(p) > g_0(1) = 1. Therefore, n(1-p)(1+p^n-1) > 2(1-p^n). By <Ref>, this implies that f'(p) < 0, which concludes the proof of <Ref>. Function A is (strictly) increasing in γ. Using <ref>, we obtain dA(γ)/dγ = n/(1-s)(1 + γ)^2 > 0, from which <Ref> follows. See <Ref> for an overview of the following arguments. Proof of (a). Assume n≥ 2, and consider the case that γ < γ_1 (<Ref>). By Claims <ref>, <ref> and <ref>, and <Ref>, for every p ∈ [0,1], f(p) ≥ f(1) = -n(n-3)/2 = A(γ_1) > A(γ). 
By <Ref>, this implies that 1p > 0p. Thus, for every p<1 and every q ∈ [0,1], pq = p 1q + (1-p) 0q < 1q. On the one hand, <Ref> implies that for every p < 1, p cannot satisfy neither condition (i) nor (ii) in the definition of ESS. On the other hand, <Ref> implies that p_⋆ = 1 will always satisfy condition (i) in the definition of ESS. Finally, we conclude that on [0,γ_1), p_⋆(γ) = 1 is the only ESS. Next, consider the case that γ > γ_2 (<Ref>). By Claims <ref>, <ref> and <ref>, for every p ∈ [0,1], f(p) ≤ f(0) = 1 = A(γ_2) < A(γ). By <Ref>, this implies that 1p < 0p. Similarly, we conclude that on (γ_2,+∞], p_⋆(γ) = 0 is the only ESS. Proof of (b). Consider the case that n≥ 3 and γ_1 ≤γ≤γ_2 (<Ref>). By Claim <ref>, f : [0,1] ↦ [f(1),f(0)] is a bijection, and we can consider the inverse function f^-1 : [f(1),f(0)] ↦ [0,1]. Moreover, by Claims <ref>, <ref> and <ref>, and <Ref>, f(1) = A(γ_1) ≤ A(γ) ≤ A(γ_2) = f(0). Therefore, there is a unique p_⋆∈ [0,1] such that f(p_⋆) = A(γ). By <Ref>, we have f(p_⋆) = A(γ) 1p_⋆ = 0p_⋆, for every q < p_⋆, f(q) > f(p_⋆) 1p_⋆ > 0p_⋆, for every q > p_⋆, f(q) < f(p_⋆) 1p_⋆ < 0p_⋆. By <Ref>, this implies that p_⋆ is the unique ESS. As a function of γ on the interval [γ_1,γ_2], p_⋆ satisfies p_⋆(γ) = f^-1(A(γ)). Function f is continuously differentiable, and the derivative is non-zero by <Ref>, so f^-1 is continuously differentiable. Moreover, A is also continuously differentiable. Therefore, p_⋆ is continuously differentiable. Finally, p_⋆ verifies p_⋆(γ_1) = f^-1(A(γ_1)) = f^-1(f(1)) = 1, and p_⋆(γ_2) = f^-1(A(γ_2)) = f^-1(f(0)) = 0. If 3 ≤ n < min{ 1+1/s, 1+2/1-s}, then there exists 0 ≤γ_min < γ_max such that π_⋆(γ) is decreasing on the interval [γ_min,γ_max]. Since n ≥ 3, by definition, γ_1 < γ_2 and so [γ_1 , γ_2] is a non-empty interval. We have that n ≤ 1+2/1-s2/(n-1)(1-s)-1 ≥ 0 γ_1 ≥ 0. Moreover, n < 1+1/s2/(n-1)(1-s) > n/(n-1)(1-s)-1 1+γ_1 > γ_2. Next, note that by definition: π_⋆(γ) = p_⋆(γ) ·1p_⋆(γ)(γ) + (1-p_⋆(γ)) ·0p_⋆(γ)(γ), By assumption on n, we know that γ_1≥ 0 and 1+γ_1>γ_2. Since both 1p(γ) and 0p(γ) are continuously differentiable in p and in γ (from their expression in <Ref>), and since p_⋆(γ) is continuously differentiable in γ on [γ_1,γ_2] (by statement (b) in <Ref>), then π_⋆(γ) is continuously differentiable in γ on [γ_1,γ_2]. Moreover, it satisfies π_⋆(γ_1) = F_ = 1+γ_1 (since p_⋆(γ_1) = 1), and π_⋆(γ_2) = F_ =γ_2 < π_⋆(γ_1) (since p_⋆(γ_2) = 0). Therefore, we can find an interval [γ_min,γ_max] ⊆ [γ_1,γ_2] on which π_⋆(γ) is decreasing, which concludes the proof of <Ref>. When n=3, n < min{ 1+1/s, 1+2/1-s} 0 < s < 1/2, and the first item in <Ref> follows as a special case of <Ref>. When n=2, γ_1 = γ_2 = γ_s = 1+s/1-s. By statement (a) in <Ref>, for every γ < γ_s, there is a unique ESS satisfying p_⋆(γ) = 1 and so π_⋆(γ) = 1+γ. Similarly, for every γ > γ_s, there is a unique ESS satisfying p_⋆(γ) = 0 and so π_⋆(γ) = γ. Therefore, π_⋆ is increasing on [0,γ_s) and on (γ_s,+∞). Moreover, let ϵ∈ (0,1/2). Since s ≥ 0, we have γ_s ≥ 1, and π_⋆(γ_s-ϵ) = 1+γ_s-ϵ > γ_s+1/2 > γ_s+ϵ = π_⋆(γ_s+ϵ), which establishes the second item in <Ref>, and thus concludes the proof of theorem. §.§ Technical Claims Let X ∼(n,p). If 0 < p ≤ 1, then 1/1+X = 1-(1-p)^n+1/(n+1)p. Moreover, if p = 0, then 1/1+X = 1. The claim holds trivially for p=0. Consider the case that p>0. 1/1+X = ∑_k=0^n 1/1+k(X=k) = ∑_k=0^n 1/1+k·nk p^k (1-p)^n-k = 1/(n+1)p∑_k=0^n n+1k+1 p^k+1 (1-p)^(n+1)-(k+1) using nk = n+1k+1·k+1/n+1. 
By setting k' = k+1, we can rewrite the sum ∑_k=0^n n+1k+1 p^k+1 (1-p)^(n+1)-(k+1) = ∑_k'=0^n+1n+1k' p^k' (1-p)^(n+1)-k' - (1-p)^n+1 = 1-(1-p)^n+1, which concludes the proof of <Ref>. Let X ∼(n,p). If 0 ≤ p < 1, then X/2+n-X = p ·(n+1)(1-p)+p^n+1-1/(n+1)(1-p)^2. Moreover, if p = 1, then X/2+n-X = n/2. The claim holds trivially for p=1. Consider the case that p<1. Let q = 1-p > 0. We have X/2+n-X = ∑_k=0^n  k/n-k+2(X=k) = ∑_k=1^n  k/n-k+2·nk p^k (1-p)^n-k = ∑_k=0^n-1 n-k/k+2·nk q^k (1-q)^n-k k ↦ n-k,  q =1-p = ∑_k=1^n  n-k+1/k+1·nk-1 q^k-1 (1-q)^n-k+1 k ↦ k-1 = ∑_k=1^n  k/k+1·nk q^k-1 (1-q)^n-k+1 using nk-1 = nk·k/n-k+1 = ∑_k=1^n  k/n+1·n+1k+1 q^k-1 (1-q)^n-k+1 using nk = n+1k+1·k+1/n+1 = ∑_k=2^n+1 k-1/n+1·n+1k q^k-2 (1-q)^n-k+2 k ↦ k-1 = 1-q/q^2 (n+1)·∑_k=1^n+1 (k-1) n+1k q^k (1-q)^(n+1)-k. By expectation of the binomial distribution, ∑_k=1^n+1 k n+1k q^k (1-q)^(n+1)-k = (n+1)q. Moreover, by the binomial theorem, ∑_k=1^n+1n+1k q^k (1-q)^(n+1)-k = 1-(1-q)^n+1. Putting every equation together, we obtain X/2+n-X = (1-q) ·(n+1)q+(1-q)^n+1-1/(n+1)q^2, which concludes the proof of <Ref>. § ANALYSIS OF THE COMPANY GAME §.§ Preliminaries Following classical notations from game theory, we define, for n=2 players: (Reward) R(γ,s,c,p,a)   = 11 (Sucker) S(γ,s,c,p,a)   = 10 (Temptation) T(γ,s,c,p,a)   = 01 (Punishment) P(γ,s,c,p,a)   = 00 For simplicity, we do not mention (γ,s,c,p,a) when there is no risk of confusion. The following result is well-known in game theory folklore. However, we provide a proof here for the sake of completeness. If n=2 and T>R>S>P, then there is a unique ESS, that satisfies π_⋆ = ST-RP/S+T-R-P. We have, by definition 1p = p R + (1-p) S and 0p = p T + (1-p) P. Note that 1p = 0p p = S-P/S+T-R-P. Define p_⋆ = S-P/S+T-R-P = 1/1+T-R/S-P with T-R > 0, S - P > 0, so p_⋆∈ [0,1]. Let π_⋆ = 1p_⋆ = 0p_⋆. By assumption in the theorem, d/dp1p = R-S < T-P = d/dp0p, and hence, by Eq. (<ref>), for every q < p_⋆, 1q > 0q, and for every q > p_⋆, 1q < 0q. By <Ref>, this, together with Eq. (<ref>), implies that p_⋆ is a unique ESS. To conclude the proof of <Ref>, we just check that π_⋆ satisfies <Ref>. §.§ There is no Reverse-Correlation phenomenon for ϕ : x ↦ x Consider the case that there are n=2 players and that ϕ : x ↦ x. Let c≥ 0, s∈[1/2,1], p,a∈ [0,1], and set γ_0 = c/(p s (1-a)). Then for every γ≠γ_0, there is a unique ESS. Moreover, the payoff π_⋆ and the total production Γ_⋆ corresponding to the ESS are both strictly increasing functions of γ. Recall that in the Company game, π_1 = ϕ(s q_1 + (1-s) q_2) - c_1. Consider the case that ϕ : x ↦ x. For every s∈ [1/2,1], <Ref> gives (π_1) = s (q_1) + (1-s) (q_2) - c_1. Therefore, R(γ,s,c,p,a) = s ·(p γ) + (1-s) ·(p γ) - c = p γ - c, S(γ,s,c,p,a) = s ·(p γ) + (1-s) ·(p a γ) - c = p γ (s+a-s a) - c, T(γ,s,c,p,a) = s ·(p a γ) + (1-s) ·(p γ) = p γ (1-s+s a), P(γ,s,c,p,a) = s ·(p a γ) + (1-s) ·(p a γ) = p a γ. Recall that γ_0 = c/(p s (1-a)). * If γ < γ_0, then T>R and P>S, in which scrounger is a dominant strategy. Therefore, there is a unique ESS, and we have: π_⋆=Γ_⋆=P = paγ. In particular, these values are increasing in γ. * If γ > γ_0, then R>T and S>P, and hence producer is a dominant strategy. Therefore, there is a unique ESS, and π_⋆=R = p γ - c, Γ_⋆=pγ. In particular, both these values are increasing in γ. * If γ = γ_0, then R=T and P=S, which implies that no player can unilaterally change its payoff. Indeed, for every p,q ∈ [0,1], pq = pqR + (1-p)qT + p(1-q)S + (1-p)(1-q)P = qR + (1-q)S, so for every p,p',q ∈ [0,1], pq = p'q. 
In this degenerate case, neither condition (i) nor (ii) in the definition of ESS can be satisfied, so there is no ESS. To conclude the proof of <Ref>, we only need to show that π_⋆ and Γ_⋆ do not decrease at the discontinuity point γ = γ_0. Since γ_0 ≥ c/(p(1-a)), we have lim_ϵ→ 0^+Γ_⋆(γ_0-ϵ) = lim_ϵ→ 0^+π_⋆(γ_0-ϵ) = apγ_0 ≤ pγ_0-c = lim_ϵ→ 0^+π_⋆(γ_0+ϵ) ≤lim_ϵ→ 0+Γ_⋆(γ_0+ϵ). Thus, overall, the payoffs of players at equilibrium, π_⋆, and the total production, Γ_⋆, are both increasing in γ. §.§ The Reverse-Correlation phenomenon in the Company game In this section, we demonstrate the Reverse-Correlation phenomenon in the Company game for two utility functions. Specifically, we first prove that the Reverse-Correlation phenomenon can occur when assuming the utility function ϕ : x ↦ 1-exp(-2 x). Then, in <Ref> we provide simulations that demonstrate the Reverse-Correlation phenomenon assuming the utility function ϕ : x ↦min(1,x). Consider the Company game with n=2, ϕ : x ↦ 1-exp(-2 x), p=1/2, a=1/2. For every s < 1, there exist c_0>0, and γ_min , γ_max > 1, for which there is a unique ESS such that π_⋆ is decreasing in γ on the interval [γ_min,γ_max]. Let γ_i = γ Player i is a producer, γ/2 otherwise. By definition, if player i succeeds in producing a product (which happens with probability p=1/2) then the quality of its product is γ_i. Hence, <Ref> gives (π_1) = 1/4ϕ(s γ_1 + (1-s) γ_2) + ϕ(s γ_1) + ϕ((1-s) γ_2) - c_1. Plugging in ϕ(x) = 1-exp(-2x), we obtain R(γ,s,c) = 1/4 3 - e^-2γ - e^-2s γ - e^-2(1-s)γ - c, S(γ,s,c) = 1/4 3 - e^-(1+s)γ - e^-2s γ - e^-(1-s)γ - c, T(γ,s,c) = 1/4 3 - e^-(2-s)γ - e^-s γ - e^-2(1-s)γ, P(γ,s,c) = 1/4 3 - e^-γ - e^-s γ - e^-(1-s)γ. Note that R, S, T and P are all increasing functions of γ. Consequently, if the strategies of Players 1 and 2 remain unchanged, then (π_1) is also increasing in γ. However, we will show that at equilibrium, the tendency of the player to be a scrounger increases in γ to such an extent that ultimately reduces (π_1). The next step towards proving <Ref> is to show that for some specific values of s and c, the Company game is in fact a game of chicken. For every s < 1, there exists a value c_0 = c_0(s) and an interval [γ_min,γ_max] such that for every γ∈ [γ_min,γ_max], T(γ,s,c_0) > R(γ,s,c_0) > S(γ,s,c_0) > P(γ,s,c_0). In particular, by <Ref>, this implies that for every γ∈ [γ_min,γ_max], there is a unique ESS satisfying π_⋆(γ,s,c_0) = ST-RP/S+T-R-P. Before proving <Ref>, we need two preliminary technical results. The next claim implies, in particular, that if c=0 then a producer is a dominant strategy. For all γ,s such that s < 1, R(γ,s,0) > S(γ,s,0),T(γ,s,0), and S(γ,s,0),T(γ,s,0) > P(γ,s,0). By pairwise comparison of the terms in <Ref>. The next claim implies that T-P > R-S, or in other words, that scroungers lose more than producers when the other player switches from producer to scrounger. For all γ,s,c such that s < 1, S(γ,s,c)-P(γ,s,c) > R(γ,s,c)-T(γ,s,c). We have 4(R-S) = e^-(1+s)γ - e^-2γ + e^-(1-s)γ - e^-2(1-s)γ, and 4(T-P) = e^-γ - e^-(2-s)γ + e^-(1-s)γ - e^-2(1-s)γ. Factoring by e^-γ, this gives (T-P) - (R-S) = e^-γ/4 1 + e^-γ - e^-sγ - e^-(1-s)γ. Factoring again by e^-γ, and using the convexity of the function e^x+e^γ-x, we obtain (T-P) - (R-S) = e^-2γ/4 e^γ + 1 - e^s γ + e^(1-s) γ > 0, which concludes the proof of <Ref>. Let us fix γ_0 > max(1,ln(1+√(2)) / s). By <Ref>, we can take c_0 = c_0(s) such that 0 < R(γ_0,s,0)-T(γ_0,s,0) < c_0 < S(γ_0,s,0)-P(γ_0,s,0). 
As a consequence, R(γ_0,s,c_0) = R(γ_0,s,0) - c_0 < T(γ_0,s,0)=T(γ_0,s,c_0), and S(γ_0,s,c_0) = S(γ_0,s,0) - c_0 > P(γ_0,s,0)=P(γ_0,s,c_0). Finally, by <Ref>, we have that T(γ_0,s,c_0) > R(γ_0,s,c_0) > S(γ_0,s,c_0) > P(γ_0,s,c_0). By continuity, there exist γ_min,γ_max such that max(1,ln(1+√(2)) / s) < γ_min < γ_0 < γ_max and for every γ∈ [γ_min,γ_max], <Ref> holds, which concludes the proof of <Ref>. For every γ∈ [γ_min,γ_max], π_⋆(γ,s,c_0) = 1 - c_0 e^s γ·e^s γ+1/e^s γ-1 = 1 - c_0 e^s γs γ/2. We start from the expression of π_⋆(γ,s,c_0) given by <Ref>. First, we compute ST-RP using <Ref>. For that purpose, we expand ST and RP separately, and then simplify. We have (each line corresponds to one term of S multiplied by all the terms of T): 16 · ST = 3 - e^-(1+s)γ - e^-2s γ - e^-(1-s)γ - 4c_0·3 - e^-(2-s)γ - e^-s γ - e^-2(1-s)γ = 9 - 3e^-(2-s)γ - 3e^-sγ - 3e^-2(1-s)γ - 3e^-(1+s)γ + e^-3γ + e^-(1+2s)γ + e^-(3-s)γ - 3e^-2sγ + e^-(2+s)γ + e^-3sγ + e^-2γ - 3e^-(1-s)γ + e^-(3-2s)γ + e^-γ + e^-3(1-s)γ -12c_0 + 4c_0e^-(2-s)γ + 4c_0e^-sγ + 4c_0 e^-2(1-s)γ. Similarly, 16 · RP = 3 - e^-2γ - e^-2s γ - e^-2(1-s)γ - 4c_0· 3 - e^-γ - e^-s γ - e^-(1-s)γ = 9 - 3e^-γ - 3e^-sγ - 3e^-(1-s)γ - 3e^-2γ + e^-3γ + e^-(2+s)γ + e^-(3-s)γ - 3e^-2sγ + e^-(1+2s)γ + e^-3sγ + e^-(1+s)γ - 3e^-2(1-s)γ + e^-(3-2s)γ + e^-(2-s)γ + e^-3(1-s)γ -12c_0 + 4c_0e^-γ + 4c_0e^-sγ + 4c_0 e^-(1-s)γ. When computing the difference, many terms disappear, leaving us with: 16 · (ST-RP) = 4 e^-γ + e^-2γ - e^-(1+s)γ - e^-(2-s)γ - c_0 e^-γ + e^-(1-s)γ - e^-(2-s)γ - e^-2(1-s)γ. Factoring the right hand side by e^-γ, we obtain ST-RP = e^-γ/4 1 + e^-γ - e^-sγ - e^-(1-s)γ - c_0 1 + e^sγ - e^-(1-s)γ - e^-(1-2s)γ. Using <Ref>, we obtain ST-RP/S+T-R-P = 1-c_0 ·1 + e^sγ - e^-(1-s)γ - e^-(1-2s)γ/1 + e^-γ - e^-sγ - e^-(1-s)γ. Factoring the numerator of the fraction by e^sγ and rearranging, we get 1-c_0 e^sγ·1 - e^-(1-s)γ + e^-sγ - e^-γ/1 - e^-(1-s)γ - e^-sγ - e^-γ. Dividing both the numerator and denominator by e^-sγ - e^-γ, and using the fact that 1-e^-(1-s)γ/e^-sγ - e^-γ = e^s γ·e^-sγ - e^-γ/e^-sγ - e^-γ = e^s γ, we finally get ST-RP/S+T-R-P = 1-c_0 e^sγ·e^s γ+1/e^s γ-1, which concludes the proof of <Ref>. For every γ∈ [γ_min,γ_max], ∂/∂γπ_⋆(γ,s,c_0) = c_0 s e^s γ/2· 1 - sinh(s γ)/sinhs γ/2^2 . We start from the expression of <Ref>, and derive using the fact that d/dx(x) = -1/sinh(x)^2. More precisely, by <Ref>, ∂/∂γπ_⋆(γ,s,c_0) = ∂/∂γ1 - c_0 e^s γs γ/2 = -c_0 s e^s γs γ/2 + c_0 s e^s γ/2 sinhs γ/2^2 = c_0 s e^s γ/2 sinhs γ/2^2·1 - 2 s γ/2sinhs γ/2^2. Then, we observe that for every x ∈, 2 (x) sinh(x)^2 = 2 e^x+e^-x/e^x-e^-xe^x-e^-x/2^2 = (e^x+e^-x)(e^x-e^-x)/2 = e^2x-e^-2x/2 = sinh(2x). Plugging this in the last equation concludes the proof of <Ref>. By definition, γ > γ_min > ln(1+√(2))/s = sinh^-1(1)/s, so sinh(s γ) > 1. By <Ref>, this implies that ∂/∂γπ_⋆(γ,s,c_0) < 0 on the interval [γ_min,γ_max], which concludes the proof of <Ref>. § A NECESSARY CONDITION FOR THE REVERSE-CORRELATION PHENOMENON We assume that the payoffs are positively correlated with the number of producers in the group, that is, for every q ∈ [0,1] and every γ≥ 0, p ↦qp(γ) is non-decreasing in p. In addition, we assume that the payoffs are positively correlated with the parameter γ, that is, for every q ∈ [0,1] and every p ∈ [0,1], γ↦qp(γ) is non-decreasing in γ. Under these assumptions, we identify the following necessary condition for the emergence of a Reverse-Correlation phenomenon. 
For any PS model in which the payoff of producers does not depend on the strategies of other players, there is no Reverse-Correlation phenomenon. More precisely, if there are two values γ_1,γ_2 such that γ_1 < γ_2 and two ESS denoted p_⋆(γ_1) and p_⋆(γ_2), then the corresponding payoffs satisfy π_⋆(γ_1) ≤π_⋆(γ_2). Fix a PS model. By assumption in <Ref>, For every γ≥ 0, p ↦1p(γ) does not depend on p. In what follows, we will simply write 1p(γ) = π_(γ). By definition of ESS, and by <Ref>, we have for every i ∈{1,2}: equation1 p_⋆(γ_i) = 0 π_⋆(γ_i) = 00(γ_i) ≥π_(γ_i), .a p_⋆(γ_i) = 1 π_⋆(γ_i) = π_(γ_i) ≥01(γ_i), .b p_⋆(γ_i) ∉{0,1} π_⋆(γ_i) = π_(γ_i) = 0p_⋆(γ_i)(γ_i), .c where <Ref> holds because 00(γ_i) ≥10(γ_i)=π_(γ_i). As a consequence of <Ref>, we have p_⋆(γ_i) ≠ 0 π_⋆(γ_i) = π_(γ_i) ≥0p_⋆(γ_i)(γ_i). Now, let us show that π_⋆(γ_1) ≤π_⋆(γ_2). * If p_⋆(γ_1) = p_⋆(γ_2) = 0, then π_⋆(γ_1) (<ref>)=00(γ_1) (<ref>)≤00(γ_2) (<ref>)=π_⋆(γ_2). * If p_⋆(γ_1) ≠ 0 and p_⋆(γ_2) ≠ 0, then π_⋆(γ_1) (<ref>)=π_(γ_1) (<ref>)≤π_(γ_2) (<ref>)=π_⋆(γ_2). * If p_⋆(γ_1) ≠ 0 and p_⋆(γ_2) = 0, then π_⋆(γ_1) (<ref>)=π_(γ_1) (<ref>)≤π_(γ_2) (<ref>)≤π_⋆(γ_2). * If p_⋆(γ_1) = 0 and p_⋆(γ_2) ≠ 0, then π_⋆(γ_1) (<ref>)=00(γ_1) (<ref>)≤0p_⋆(γ_2)(γ_1) (<ref>)≤0p_⋆(γ_2)(γ_2) (<ref>)≤π_⋆(γ_2). This concludes the proof of <Ref>.
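Returning to the Company game of the previous section, a quick numerical cross-check (not part of the paper's code base) of the closed form derived there for ϕ(x) = 1 - exp(-2x) and p = a = 1/2 is sketched below: it evaluates (ST-RP)/(S+T-R-P) directly from the expressions of R, S, T, P and compares it with 1 - c_0 e^{sγ} coth(sγ/2), which decreases in γ once sinh(sγ) > 1. The parameter values are hypothetical, and the game-of-chicken ordering T > R > S > P is checked at runtime rather than proved here.

# Numeric cross-check of the Company-game closed form (phi(x) = 1 - exp(-2x), p = a = 1/2).
import math

def RSTP(g, s, c):
    """Expected pure-strategy payoffs R, S, T, P as derived above (g stands for gamma)."""
    R = (3 - math.exp(-2 * g) - math.exp(-2 * s * g) - math.exp(-2 * (1 - s) * g)) / 4 - c
    S = (3 - math.exp(-(1 + s) * g) - math.exp(-2 * s * g) - math.exp(-(1 - s) * g)) / 4 - c
    T = (3 - math.exp(-(2 - s) * g) - math.exp(-s * g) - math.exp(-2 * (1 - s) * g)) / 4
    P = (3 - math.exp(-g) - math.exp(-s * g) - math.exp(-(1 - s) * g)) / 4
    return R, S, T, P

def pi_star_closed_form(g, s, c):
    """1 - c * exp(s*g) * coth(s*g/2)."""
    return 1.0 - c * math.exp(s * g) / math.tanh(s * g / 2)

if __name__ == "__main__":
    s, c0 = 0.6, 0.055                      # hypothetical parameter choice
    for g in (2.4, 2.5, 2.6):               # here sinh(s*g) > 1, i.e. s*g > ln(1 + sqrt(2))
        R, S, T, P = RSTP(g, s, c0)
        assert T > R > S > P                # game-of-chicken regime of the theorem
        print(g, round((S * T - R * P) / (S + T - R - P), 6),
              round(pi_star_closed_form(g, s, c0), 6))
    # the two printed columns agree and decrease with gamma, consistent with the theorem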
http://arxiv.org/abs/2307.03904v1
20230708052256
Long-range interacting Stark many-body probes with Super-Heisenberg precision
[ "Rozhin Yousefjani", "Xingjian He", "Abolfazl Bayat" ]
quant-ph
[ "quant-ph", "cond-mat.quant-gas", "cond-mat.str-el" ]
Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu 610051, China In contrast to interferometry-based quantum sensing, where interparticle interaction is detrimental, quantum many-body probes exploit such interactions to achieve quantum-enhanced sensitivity. In most of the studied quantum many-body probes, the interaction is considered to be short-ranged. Here, we investigate the impact of long-range interaction at various filling factors on the performance of Stark quantum probes for measuring a small gradient field. These probes harness the ground state Stark localization phase transition, which happens at an infinitesimal gradient field as the system size increases. Our results show that while super-Heisenberg precision is always achievable in all ranges of interaction, the long-range interacting Stark probe reveals two distinct behaviors. First, by algebraically increasing the range of interaction, the localization power is enhanced and thus the sensitivity of the probe decreases. Second, as the interaction range becomes close to that of a fully connected graph, its effective localization power disappears and thus the sensitivity of the probe starts to improve again. The super-Heisenberg precision is achievable throughout the extended phase until the transition point and remains valid even when the state preparation time is incorporated in the resource analysis. As the probe enters the localized phase, the sensitivity decreases and its performance becomes size-independent, following a universal behavior. In addition, our analysis shows that lower filling factors lead to better precision for measuring weak gradient fields. Long-range interacting Stark many-body probes with Super-Heisenberg precision Abolfazl Bayat August 12, 2023 ============================================================================= § INTRODUCTION Quantum sensors can achieve unprecedented precision in measuring time <cit.>, electric <cit.>, magnetic <cit.>, and gravitational fields <cit.>, far beyond the capability of their classical counterparts. They can be manufactured at atomic scales and have found applications in a wide range of fields, from cosmology <cit.> to biology <cit.>. The precision of estimating an unknown parameter h, encoded in a quantum density matrix ρ(h), is fundamentally bounded by the Cramér-Rao inequality as Δ h ≥ 1/√(Mℱ), where Δ h is the standard deviation that quantifies the accuracy of the estimation, M is the number of repetitions, and ℱ is a positive quantity called the Fisher information. The scaling of the Fisher information with respect to sensing resources, such as the probe size L, is a figure of merit that can be used for comparing the precision of different sensors. Typically, the Fisher information scales algebraically with the size of the resource, namely ℱ∝L^β. In the absence of quantum features, classical sensing at best results in β=1, known as the standard limit. Quantum sensors, however, can achieve super-linear scaling with β>1 through exploiting quantum features such as entanglement <cit.>.
Originally, the enhancement in precision was demonstrated for a special form of entangled states, known as GHZ states <cit.>, which results in β=2, also known as the Heisenberg limit <cit.>. Although there are several experimental demonstrations of GHZ-based quantum sensors <cit.>, their scalability is challenging due to the sensitivity of such delicate quantum states to decoherence. In addition, the interaction between particles in these probes is detrimental to their precision <cit.>. Strongly correlated many-body systems are a valuable resource for realizing quantum technology tasks, such as sensing. These quantum probes, which harness the interaction between particles, are naturally scalable and expected to be more robust against decoherence. In particular, various forms of phase transitions in such systems have been used for achieving quantum-enhanced sensitivity, including first-order <cit.>, second-order <cit.>, Floquet <cit.>, dissipative <cit.>, time crystal <cit.>, topological <cit.>, many-body <cit.> and Stark localization <cit.> phase transitions. Other types of many-body probes profit from diverse measurement methods, including adaptive <cit.>, continuous <cit.>, and sequential <cit.> measurements. Since most of the sensing proposals in many-body probes have been dedicated to short-range interactions, a key open problem is whether long-range interactions can provide further benefits for sensing tasks. Long-range interactions naturally arise in certain quantum devices, such as ion traps <cit.> and Rydberg atoms <cit.>. The nature of these interactions makes their systematic study challenging, and except for a few models, such as the Lipkin-Meshkov-Glick (LMG) model <cit.> and the long-range Kitaev chain <cit.>, the effect of long-range interaction on sensing precision remains almost untouched. Gradient field sensing is of major importance in various fields, including biological imaging <cit.> and gravimetry <cit.>. In the former, the ultra-precise sensing of a weak gradient magnetic field increases imaging resolution, enabling the visualization of smaller tumors for early cancer detection. In the latter, precise gravity measurement is essential for the detection of gravitational waves <cit.>, investigating the equivalence principle <cit.>, obtaining the fine-structure constant <cit.>, and measuring Newton's gravitational constant <cit.>. Recently, we have shown that Stark probes can be exploited for measuring weak gradient fields with super-Heisenberg precision <cit.>, in which the scaling exponent β can be as large as β≅6. This sensor relies on the Stark localization transition, which can occur even in the presence of an infinitesimal gradient field in single- and multi-particle quantum systems. The effect of a longer range of interaction on this sensor has not yet been explored. Addressing this issue is essential since the physical platforms for the experimental realization of Stark localization, including ion traps <cit.> and Rydberg atoms <cit.>, are naturally governed by long-range interactions. In this paper, we systematically study the effects of long-range interaction on the sensing capability of Stark probes. We show that the strong super-Heisenberg scaling of the Stark probes persists even in the presence of long-range interaction and is achievable throughout the extended phase of the system until the transition point. Our results show that different ranges of interaction leave distinct imprints on the scaling of the Fisher information.
Making the interaction more long-ranged enhances the localization and, hence, decreases the value of the Fisher information and β. The localization effect disappears as the system gets closer to a fully connected graph and thus the sensitivity is enhanced again. The achievable super-Heisenberg scaling remains valid even when the state preparation time is taken into account in the resource analysis. Moreover, we provide a comprehensive investigation of the critical properties of long-range Stark probes and establish a concrete relationship between the critical exponents of the system through an extensive finite-size scaling analysis. Finally, we analyze the effect of the filling factor (i.e., the number of excitations per site) on the sensing power of our Stark probes. While super-Heisenberg scaling is achievable for all studied filling factors, lower filling factors provide better precision. This paper is organized as follows. We start by presenting the tools for assessing a quantum probe in section <ref>. After introducing our long-range Stark many-body probe in section <ref>, we present the numerical results of sensing with the probe in the half-filling sector in section <ref>. In the subsections of section <ref>, the scaling behavior of the probe, its critical properties, and the resource analysis are studied. Section <ref> contains the analysis of the filling factor and the paper is summarized in section <ref>. § ULTIMATE PRECISION LIMIT In this section, we briefly review the implications of the Cramér-Rao inequality for quantum sensing problems. In order to estimate an unknown parameter h encoded in a probe, described by the density matrix ρ(h), one has to perform a measurement, which is described by a set of projectors {Π_i}. Each measurement outcome appears with the probability p_i(h)=Tr[Π_iρ(h)]. For this classical probability distribution one can show that the Fisher information can be obtained from ℱ_C(h)=∑_i (1/p_i(h))(∂ p_i(h)/∂ h)^2, which is known as the Classical Fisher Information (CFI). In order to get rid of the measurement dependence, one can maximize the CFI with respect to all possible measurements to obtain the Quantum Fisher Information (QFI), namely ℱ_Q(h)=max_{Π_i}ℱ_C(h) <cit.>. By definition, the QFI is an upper bound for the CFI and is thus called the ultimate precision limit, for which the Cramér-Rao inequality is updated as Δ h≥1/√(M ℱ_C(h))≥1/√(M ℱ_Q(h)). While the maximization with respect to measurements in the definition of the QFI seems notoriously challenging, it has been shown that alternative approaches provide computationally friendly methods for calculating the QFI. In particular, it turns out that the QFI is related to a quantity called the fidelity susceptibility χ(h) as ℱ_Q=4χ(h). The fidelity susceptibility is defined as χ(h) = 2( 1 - √(Tr[ρ(h)^1/2ρ(h+δ h)ρ(h)^1/2]) )/δ h^2, with δ h being an infinitesimal variation in h. It has been shown that for systems that go through a second-order quantum phase transition, the fidelity susceptibility and, hence, the QFI show non-analytic behavior in the vicinity of the critical point <cit.>. This reflects the tremendous sensitivity of the system with respect to the control parameter h which drives the system through the phase transition. In this paper, we rely on Eq. (<ref>) for investigating the sensing power of a Stark many-body probe with long-range interaction. § STARK MANY-BODY PROBE We consider a one-dimensional spin-1/2 chain of L sites that is affected by a gradient field h.
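As a simple illustration of the fidelity-susceptibility relation above (and not the MPS machinery used for the results below), the QFI of a pure ground state can be estimated by exact diagonalization from two nearby ground states, since for pure states the fidelity reduces to the overlap |⟨ψ(h)|ψ(h+δh)⟩|. The two-level Hamiltonian in the sketch is a hypothetical placeholder; for the probe studied here it would be replaced by the Stark Hamiltonian defined in the next equation.

# Minimal sketch: QFI from the fidelity susceptibility, F_Q = 4 * chi, for a pure ground state.
import numpy as np

def ground_state(H):
    """Lowest-energy eigenvector of a Hermitian matrix."""
    _, vecs = np.linalg.eigh(H)
    return vecs[:, 0]

def qfi(h, hamiltonian, dh=1e-4):
    """Finite-difference estimate of F_Q(h) = 4 * 2 * (1 - |<psi(h)|psi(h+dh)>|) / dh^2."""
    psi0 = ground_state(hamiltonian(h))
    psi1 = ground_state(hamiltonian(h + dh))
    chi = 2.0 * (1.0 - abs(np.vdot(psi0, psi1))) / dh**2
    return 4.0 * chi

if __name__ == "__main__":
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    toy = lambda h: sx + h * sz           # hypothetical placeholder, not the Stark probe
    print(qfi(0.1, toy))                  # analytic value for this toy model: 1/(1+h^2)^2

We now specify the probe Hamiltonian.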
While spin tunneling is restricted to nearest-neighbor sites, the interaction between particles is taken to be long-range, decaying algebraically with an exponent η>0. The Hamiltonian reads H(h) = J∑_i=1^L-1(σ_i^xσ_i+1^x+σ_i^yσ_i+1^y)+ ∑_i<j (1/|i-j|^η) σ_i^zσ_j^z + h∑_i=1^L i σ_i^z, where J is the exchange coupling, σ_i^(x,y,z) are Pauli operators acting on site i, and h is the amplitude of the applied gradient field, which has to be estimated. By varying the power-law exponent η, one can smoothly interpolate between a fully connected graph (η=0) and a standard nearest-neighbor one-dimensional chain (η→∞). Inherently, many interactions are long-range. Coulomb and dipole-dipole interactions are notable examples of such interactions, which can be modeled in certain quantum simulators, e.g., ion traps <cit.> and Rydberg atoms <cit.>. The Hamiltonian Eq. (<ref>) conserves the number of excitations in the z direction, namely [H,S_z]=0, where S_z=(1/2)∑_iσ_i^z. This implies that the Hamiltonian is block-diagonal with respect to the number of excitations N. Hence, each block can be described by a filling factor of n=N/L. Here, we focus on the sensing power of our probe assuming that the filling factor n is fixed and the probe is prepared in the lowest energy eigenstate of the relevant sector. Note that the true ground state of the Hamiltonian lies in the sector with n=0 (i.e., N=0 excitations). Nonetheless, throughout the paper, for the sake of convenience, we call the lowest eigenstate of the Hamiltonian for any given filling factor n the ground state, which should not be mistaken for the true ground state of the Hamiltonian at filling factor n=0. Regardless of the range of interaction, by increasing the strength of the field h, the probe undergoes a quantum phase transition from an extended phase to a many-body localized one <cit.>. It is known that the many-body localization (MBL) transition occurs across the entire spectrum, in contrast to conventional quantum phase transitions, which occur only in the ground state <cit.>. Detecting and characterizing the MBL transition across the whole spectrum usually relies on exact diagonalization, which severely restricts the numerical simulations to small systems <cit.>. For analyzing the sensing power of a probe, one requires large-system-size behavior, which is not accessible through exact diagonalization. Therefore, we exploit Matrix Product State (MPS) simulation <cit.> to capture the behavior of the QFI in large system sizes. While this allows us to extract a precise scaling analysis, it comes at the price that we are limited to the ground state in each filling sector and cannot analyze the sensing power of excited states. § SENSING AT THE HALF-FILLING SECTOR (n=1/2) We first focus on the half-filling sector of the Hamiltonian in which we have N=L/2 excitations. In Fig. <ref>(a), we plot ℱ_Q as a function of the Stark field h/J for a probe of size L=30 with various choices of η. Several interesting features can be observed. First, by increasing h/J the QFI shows a dramatic change in its behavior from being almost constant in the extended phase to a decreasing function in the localized regime. During this transition, the QFI peaks at some h_max(η), which asymptotically converges to the transition point h_c in the thermodynamic limit <cit.>. Second, various η's leave distinct imprints on the QFI. By moving from a fully connected probe (η=0) to a nearest-neighbor one (η→∞), the peaks of the QFI first decrease and then show a revival behavior.
This is because as η decreases (i.e., the interaction becomes more long-range) each spin configuration induces a different Zeeman energy splitting at any given site. This effect acts like a random disorder potential, which helps the system to localize and thus reduces the QFI. The observed behavior continues until the system becomes close to a fully connected graph (for η∼ 0.1), in which all spin configurations induce almost the same energy splitting and thus the localization effect from off-resonant energy separations gradually disappears. Third, strong long-range interaction indeed enhances the sensitivity of the probe by providing the highest value of ℱ_Q in both the extended phase (i.e., h<h_max) and at the transition point (i.e., h=h_max). To explore the behavior of the QFI in the thermodynamic limit, namely for L→∞, one can study the QFI for various system sizes. In Figs. <ref>(b)-(d), we plot the ground state QFI as a function of the Stark field h/J for various system sizes L and selected η=0,1 and 5, respectively. Regardless of the range of the interaction, by enlarging the probe size, the peak of the QFI increases and h_max gradually approaches zero, signaling the divergence of ℱ_Q in the thermodynamic limit for a vanishing transition point h_c→0. While the finite-size effect can be seen in the extended phase, in the localized regime one deals with a size-independent algebraic decay of the QFI which can be perfectly described by ℱ_Q∝|h-h_max|^-α(η) (dashed lines). From Figs. <ref>(b)-(d), one can see that the exponent α takes the values α(η=0)=4.00, α(η=1)=4.94 and α(η=5)=3.97, respectively. §.§ Super-Heisenberg sensitivity To characterize the scaling of the QFI with the probe size, in Figs. <ref>(a) and (b), we plot ℱ_Q versus L for some values of η both at the transition point, i.e., h=h_max, and in the extended phase, i.e., h/J=10^-4, respectively. In both panels, the markers represent the QFI obtained by numerical simulation and the lines are the best-fitting functions of the form ℱ_Q(h,η)∝L^β(h,η). The obtained exponent β(h,η) is plotted as a function of η in Figs. <ref>(c) and (d), for h=h_max and h/J=10^-4, respectively. Some interesting observations can be highlighted. First, regardless of the interaction range η, one can obtain super-Heisenberg sensitivity for our probe (i.e., β>2) both at the transition point and in the extended regime. Second, as discussed before, by decreasing η (i.e., making the interaction more long-range) the effective Zeeman energy splitting enhances the localization and thus reduces the QFI as well as the exponent β. As η further decreases, the probe becomes effectively fully connected, implying that all spin configurations induce equal energy splittings that do not contribute to the localization anymore. Therefore, β changes its behavior and starts rising as η decreases towards zero. §.§ Finite-size scaling analysis The observed trend of the QFI in Figs. <ref>(b)-(d) (shown with dashed lines) strongly implies the algebraic divergence of the QFI in the thermodynamic limit as ℱ_Q∝|h-h_max|^-α. For the sake of brevity, we drop the dependence of the parameters on η and h. This behavior, which is characteristic of all second-order phase transitions in the thermodynamic limit, is accompanied by the emergence of a diverging length scale as ξ∼|h-h_c|^-ν, with ν known as the critical exponent. To extract the parameters α and ν in finite-size systems one needs to perform a finite-size scaling analysis.
In this technical method, the QFI is rescaled as ℱ_Q=L^α/ν g(L^1/ν(h-h_c)), where g(·) is an arbitrary function. Plotting the rescaled QFI, namely L^-α/νℱ_Q, versus L^1/ν(h-h_c) collapses all the curves of different probe sizes, and the best data collapse is obtained for an accurate selection of the critical properties, i.e., (h_c, α, ν). Figs. <ref>(a) and (b) illustrate the best-achieved data collapse for probes of size L=20,⋯,30 for selected η=0 and η=1, respectively. The critical properties for both panels, obtained using the PYTHON package PYFSSA <cit.>, are (h_c, α, ν) =(1.04× 10^-5, 4.00, 1.01) and (h_c, α, ν) =(0.70× 10^-5, 4.94, 1.39). For the sake of completeness, in Table <ref> we report the exponents α and ν for different values of η. Since in finite-size systems the peaks of the QFI at h_max are cut off by the system size, one has ℱ_Q∝L^β. The two expected behaviors of the QFI, namely ℱ_Q∝|h-h_c|^-α in the thermodynamic limit and ℱ_Q(h_max)∝L^β for finite systems at the transition point, suggest a unified ansatz for the QFI as ℱ_Q∝ 1/(L^-β + A|h-h_max|^α), where A is a constant. One can indeed retrieve the two behaviors from the above ansatz by either choosing L→∞ or h=h_max. Note that the two ansatzes of Eqs. (<ref>) and (<ref>) describe the same quantity and thus have to match with each other. A simple factorization of L^-β from the denominator of Eq. (<ref>) shows that the two ansatzes are the same provided that the exponents satisfy β = α/ν. The validity of the above equation for all the considered η's is evidenced by the data presented in Table <ref>, in which α/ν, obtained from the finite-size scaling analysis of Eq. (<ref>), matches closely with β, obtained from the scaling analysis in Fig. <ref>(a). §.§ Resource analysis Up to now, we showed that quantum criticality can indeed offer significant advantages for quantum sensing. Nevertheless, this advantage is usually hindered by the time required to prepare the ground state close to the critical points. Initializing a probe in its ground state via, for instance, adiabatic evolution <cit.> demands a time that scales with the probe size as t∝L^z <cit.>, in which the exponent z is known as the dynamical exponent and determines the rate of the energy gap closing, namely Δ E∝L^-z, for a system approaching its criticality. Taking the initialization time into consideration offers the normalized QFI, i.e., ℱ_Q/t, as a new figure of merit <cit.>. Since ℱ_Q(h_max)∝ L^β, one can easily show that the normalized QFI scales as ℱ_Q/t∝ L^β-z. In order to estimate the dynamical exponent z, one has to numerically compute the energy gap Δ E versus the system size L. In Fig. <ref>(a), we plot the energy gap Δ E obtained through exact diagonalization as a function of L for a fully connected probe (η=0) in the extended phase (i.e., 0.0001⩽h⩽0.1), at the transition point (i.e., h=h_max) and in the localized phase (i.e., h/J=1). An algebraic decay of the energy gap as a function of L is observed in the extended phase, with z=0.91, at the transition point, with z=1.04, and in the localized phase, with z=0. In Fig. <ref>(b), we plot the dynamical exponent z as a function of η for a probe in the extended phase (h/J=10^-4) and at the transition point (h=h_max). As the results show, the exponent z qualitatively behaves similarly to the exponent β as the interaction range η varies. It is worth emphasizing that even when time is included in the resource analysis, the exponent β-z remains larger than 2 in all interaction ranges.
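In practice, β and z are obtained from power-law fits of the numerically computed ℱ_Q(h_max, L) and Δ E(L); a minimal sketch of such a fit, on synthetic stand-in numbers rather than the actual data of the figures, might look as follows, with a fitted β - z > 2 signaling super-Heisenberg scaling even when the preparation time is counted as a resource.

# Sketch: extract beta from F_Q(h_max) ~ L^beta and z from Delta E ~ L^(-z) via log-log fits.
import numpy as np

def power_law_exponent(L, y):
    """Least-squares slope of log y versus log L."""
    slope, _ = np.polyfit(np.log(L), np.log(y), 1)
    return slope

if __name__ == "__main__":
    L = np.array([20, 22, 24, 26, 28, 30], dtype=float)
    fq = 0.05 * L**4.1          # synthetic stand-in for F_Q(h_max, L)
    gap = 3.0 * L**-1.0         # synthetic stand-in for the energy gap
    beta = power_law_exponent(L, fq)
    z = -power_law_exponent(L, gap)
    print(beta, z, beta - z)    # here ~4.1, ~1.0, ~3.1, i.e. beta - z > 2 (super-Heisenberg)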
This super-Heisenberg scaling can indeed provide a significant advantage for weak-field sensing. § FILLING FACTOR ANALYSIS Having described the many-body Stark probe in the half-filling sector of the Hilbert space, we now focus on the effect of the filling factor n on the performance of our sensor. In Figs. <ref>(a) and (b) we plot the QFI at the transition point h=h_max as a function of η for filling factors n=1/4 and n=1/8, respectively. Clearly, analogous to the scenario of n=1/2 (see Fig. <ref>(a)), as η decreases (the interaction becomes more long-range) the QFI goes down and then revives as the effective localization effect disappears. Interestingly, for larger filling factors (e.g., n=1/2 and, to some extent, n=1/4), a fully connected probe with η=0 outperforms the other choices of η. As the filling factor decreases, the best performance belongs to the nearest-neighbor probe with η→∞. In addition, our results show that decreasing n can remarkably boost the achievable QFI. This can be observed in Fig. <ref>(c), which shows ℱ_Q(h_max) for a probe of size L=32 prepared in the sectors n=1/2, 1/4 and 1/8. These results are in line with our previous work, in which the highest enhancement was obtained for a Stark probe with a single excitation <cit.>. To characterize the impact of the filling factor on the scaling of the QFI with respect to L, similar to the n=1/2 scenario, we fit the obtained QFI for different probe sizes L with the function ℱ_Q∝L^β(h,η). The resulting best-fit β's are reported as a function of η in Figs. <ref>(a) and (b) for n=1/4 and n=1/8, respectively. In each panel, we report the obtained β at the transition point (h=h_max) as well as in the extended phase (h/J=10^-4). As Figs. <ref>(a) and (b) show, the exponent β behaves qualitatively similarly to the half-filling case as the interaction becomes more long-range. Importantly, for all interaction ranges the exponent β shows super-Heisenberg scaling, and the best performance is always obtained for a nearest-neighbor probe. By decreasing the filling factor n, the performance of the probe in the extended phase gets closer to the one at the transition point. This is in full agreement with our previous results obtained for the Stark probe with a single particle <cit.>, in which both cases yield the same β for the nearest-neighbor probe. § CONCLUSION The Stark localization transition in many-body systems, induced by applying a gradient field to the lattice, has been harnessed to create an ultra-precise sensor for measuring weak gradient fields. In this paper, we addressed the effect of long-range interactions on the capability of these probes. Our study showed that strong super-Heisenberg precision of the Stark probe can be obtained for all interaction ranges throughout the extended phase up to the transition point. However, as the interaction becomes more long-range, two different behaviors can be observed. Initially, making the system more long-range decreases the sensing power, quantified by the QFI and its exponent β. Then, around η∼ 0.1, where the system effectively becomes a fully connected graph, the sensitivity is enhanced again, as seen in the rise of both the QFI and β. These different trends can be explained through long-range interaction induced localization. In long-range interacting systems, keeping the filling factor fixed, every given spin configuration induces a different Zeeman energy splitting at each site.
This energy splitting behaves like an effective random disorder that enhances localization and decreases the sensing power. When the interaction becomes almost fully connected, the energy splitting of all spin configurations becomes equal and effective localization disappears, which boosts the sensitivity of the probe. Interestingly, even by incorporating state preparation time in our resource analysis, the super-Heisenberg scaling still remains valid. In the localized phase, the system becomes size-independent and QFI follows a universal function. Several critical exponents governing the localization transition as well as their relationship have been extracted through extensive finite-size scaling analysis. Finally, we have shown that the sensitivity decreases by increasing the filling factor. § ACKNOWLEDGMENT A.B. acknowledges support from the National Key R&D Program of China (Grant No. 2018YFA0306703), the National Science Foundation of China (Grants No. 12050410253, No. 92065115, and No. 12274059), and the Ministry of Science and Technology of China (Grant No. QNJ2021167001L). R.Y. thanks the National Science Foundation of China for the International Young Scientists Fund (Grant No. 12250410242). 118 natexlab#1#1bibnamefont#1#1bibfnamefont#1#1citenamefont#1#1url<#>1urlprefixURL [Cacciapuoti and Salomon(2009)]cacciapuoti2009space authorL. Cacciapuoti and authorC. Salomon, journalEur. Phys. J.: Spec. Top. volume172, pages57 (year2009). [Ludlow et al.(2015)Ludlow, Boyd, Ye, Peik, and Schmidt]ludlow2015optical authorA. D. Ludlow, authorM. M. Boyd, authorJ. Ye, authorE. Peik, and authorP. O. Schmidt, journalRev. Mod. Phys. volume87, pages637 (year2015). [Dolde et al.(2011)Dolde, Fedder, Doherty, Nöbauer, Rempp, Balasubramanian, Wolf, Reinhard, Hollenberg, Jelezko et al.]dolde2011electric authorF. Dolde, authorH. Fedder, authorM. W. Doherty, authorT. Nöbauer, authorF. Rempp, authorG. Balasubramanian, authorT. Wolf, authorF. Reinhard, authorL. C. Hollenberg, authorF. Jelezko, et al., journalNat. Phys. volume7, pages459 (year2011). [Facon et al.(2016)Facon, Dietsche, Grosso, Haroche, Raimond, Brune, and Gleyzes]facon2016sensitive authorA. Facon, authorE.-K. Dietsche, authorD. Grosso, authorS. Haroche, authorJ.-M. Raimond, authorM. Brune, and authorS. Gleyzes, journalNature volume535, pages262 (year2016). [Budker and Romalis(2007)]budker2007optical authorD. Budker and authorM. Romalis, journalNat. Phys. volume3, pages227 (year2007). [Taylor et al.(2008)Taylor, Cappellaro, Childress, Jiang, Budker, Hemmer, Yacoby, Walsworth, and Lukin]taylor2008high authorJ. M. Taylor, authorP. Cappellaro, authorL. Childress, authorL. Jiang, authorD. Budker, authorP. Hemmer, authorA. Yacoby, authorR. Walsworth, and authorM. Lukin, journalNat. Phys. volume4, pages810 (year2008). [Tanaka et al.(2015)Tanaka, Knott, Matsuzaki, Dooley, Yamaguchi, Munro, and Saito]tanaka2015proposed authorT. Tanaka, authorP. Knott, authorY. Matsuzaki, authorS. Dooley, authorH. Yamaguchi, authorW. J. Munro, and authorS. Saito, journalPhys. Rev. Lett. volume115, pages170801 (year2015). [Tino et al.(2019)Tino, Bassi, Bianco, Bongs, Bouyer, Cacciapuoti, Capozziello, Chen, Chiofalo, Derevianko et al.]tino2019sage authorG. M. Tino, authorA. Bassi, authorG. Bianco, authorK. Bongs, authorP. Bouyer, authorL. Cacciapuoti, authorS. Capozziello, authorX. Chen, authorM. L. Chiofalo, authorA. Derevianko, et al., journalEur. Phys. J. D volume73, pages1 (year2019). 
[Aasi et al.(2013)Aasi, Abadie, Abbott, Abbott, Abbott, Abernathy, Adams, Adams, Addesso, Adhikari et al.]aasi2013enhanced authorJ. Aasi, authorJ. Abadie, authorB. Abbott, authorR. Abbott, authorT. Abbott, authorM. Abernathy, authorC. Adams, authorT. Adams, authorP. Addesso, authorR. Adhikari, et al., journalNat. Photon. volume7, pages613 (year2013). [Dailey et al.(2021)Dailey, Bradley, Jackson Kimball, Sulai, Pustelny, Wickenbrock, and Derevianko]Cosmology1 authorC. Dailey, authorC. Bradley, authorD. F. Jackson Kimball, authorI. A. Sulai, authorS. Pustelny, authorA. Wickenbrock, and authorA. Derevianko, journalNat. Astron. volume5, pages150 (year2021). [Tsai et al.(2023)Tsai, Eby, and Safronova]Cosmology2 authorY.-D. Tsai, authorJ. Eby, and authorM. S. Safronova, journalNat. Astron. volume7, pages113 (year2023). [Xiong et al.(2021)Xiong, Wu, Leng, Li, Duan, Kong, Huang, Li, Gao, Rong et al.]xiong2021searching authorF. Xiong, authorT. Wu, authorY. Leng, authorR. Li, authorC.-K. Duan, authorX. Kong, authorP. Huang, authorZ. Li, authorY. Gao, authorX. Rong, et al., journalPhys. Rev. Research volume3, pages013205 (year2021). [Aslam et al.(2023)Aslam, Zhou, Urbach, Turner, Walsworth, Lukin, and Park]Biology1 authorN. Aslam, authorH. Zhou, authorE. K. Urbach, authorM. J. Turner, authorR. L. Walsworth, authorM. D. Lukin, and authorH. Park, journalNat. Rev. Phys. volume5, pages157 (year2023). [Schirhagl et al.(2014)Schirhagl, Chang, Loretz, and Degen]Biology2 authorR. Schirhagl, authorK. Chang, authorM. Loretz, and authorC. L. Degen, journalAnnu. Rev. Phys. Chem. volume65, pages83 (year2014). [Shi et al.(2018)Shi, Kong, Zhao, Zhang, Chen, Chen, Zhang, Wang, Ye, Wang et al.]shi2018single authorF. Shi, authorF. Kong, authorP. Zhao, authorX. Zhang, authorM. Chen, authorS. Chen, authorQ. Zhang, authorM. Wang, authorX. Ye, authorZ. Wang, et al., journalNat. Methods volume15, pages697 (year2018). [Paris(2009)]paris2009quantum authorM. G. Paris, journalInt. J. Quantum Inf. volume7, pages125 (year2009). [Degen et al.(2017)Degen, Reinhard, and Cappellaro]degen2017quantum authorC. L. Degen, authorF. Reinhard, and authorP. Cappellaro, journalRev. Mod. Phys. volume89, pages035002 (year2017). [Greenberger et al.(1989)Greenberger, Horne, and Zeilinger]greenberger1989going authorD. M. Greenberger, authorM. A. Horne, and authorA. Zeilinger, in booktitleBell’s theorem, quantum theory and conceptions of the universe (publisherSpringer, year1989), pp. pages69–72. [Giovannetti et al.(2004)Giovannetti, Lloyd, and Maccone]giovannetti2004quantum authorV. Giovannetti, authorS. Lloyd, and authorL. Maccone, journalScience volume306, pages1330 (year2004). [Leibfried et al.(2004)Leibfried, Barrett, Schaetz, Britton, Chiaverini, Itano, Jost, Langer, and Wineland]leibfried2004toward authorD. Leibfried, authorM. D. Barrett, authorT. Schaetz, authorJ. Britton, authorJ. Chiaverini, authorW. M. Itano, authorJ. D. Jost, authorC. Langer, and authorD. J. Wineland, journalScience volume304, pages1476 (year2004). [Boixo et al.(2007)Boixo, Flammia, Caves, and Geremia]boixo2007generalized authorS. Boixo, authorS. T. Flammia, authorC. M. Caves, and authorJ. M. Geremia, journalPhys. Rev. Lett. volume98, pages090401 (year2007). [Giovannetti et al.(2006)Giovannetti, Lloyd, and Maccone]giovannetti2006quantum authorV. Giovannetti, authorS. Lloyd, and authorL. Maccone, journalPhys. Rev. Lett. volume96, pages010401 (year2006). [Banaszek et al.(2009)Banaszek, Demkowicz-Dobrzański, and Walmsley]banaszek2009quantum authorK. Banaszek, authorR. 
Demkowicz-Dobrzański, and authorI. A. Walmsley, journalNat. Photonics volume3, pages673 (year2009). [Giovannetti et al.(2011)Giovannetti, Lloyd, and Maccone]giovannetti2011advances authorV. Giovannetti, authorS. Lloyd, and authorL. Maccone, journalNat. photonics volume5, pages222 (year2011). [Fröwis and Dür(2011)]frowis2011stable authorF. Fröwis and authorW. Dür, journalPhys. Rev. Lett. volume106, pages110402 (year2011). [Wang et al.(2018)Wang, Wang, Zhan, Bian, Li, Sanders, and Xue]wang2018entanglement authorK. Wang, authorX. Wang, authorX. Zhan, authorZ. Bian, authorJ. Li, authorB. C. Sanders, and authorP. Xue, journalPhys. Rev. A volume97, pages042112 (year2018). [Kwon et al.(2019)Kwon, Tan, Volkoff, and Jeong]kwon2019nonclassicality authorH. Kwon, authorK. C. Tan, authorT. Volkoff, and authorH. Jeong, journalPhys. Rev. Lett. volume122, pages040503 (year2019). [Demkowicz-Dobrzański et al.(2012)Demkowicz-Dobrzański, Kołodyński, and Guţă]demkowicz2012elusive authorR. Demkowicz-Dobrzański, authorJ. Kołodyński, and authorM. Guţă, journalNat. Commun. volume3, pages1063 (year2012). [Albarelli et al.(2018)Albarelli, Rossi, Tamascelli, and Genoni]albarelli2018restoring authorF. Albarelli, authorM. A. Rossi, authorD. Tamascelli, and authorM. G. Genoni, journalQuantum volume2, pages110 (year2018). [Nagata et al.(2007)Nagata, Okamoto, O'Brien, Sasaki, and Takeuchi]GHZexp1 authorT. Nagata, authorR. Okamoto, authorJ. L. O'Brien, authorK. Sasaki, and authorS. Takeuchi, journalScience volume316, pages726 (year2007). [Benjamin K. Malia(2022)]GHZexp2 authorJ. M.-R. . M. A. K. Benjamin K. Malia, Yunfan Wu, journalNature volume612, pages661–665 (year2022). [Marciniak et al.(2022)Marciniak, Feldker, Pogorelov, Kaubruegger, Vasilyev, van Bijnen, Schindler, Zoller, Blatt, and Monz]GHZexp3 authorC. D. Marciniak, authorT. Feldker, authorI. Pogorelov, authorR. Kaubruegger, authorD. V. Vasilyev, authorR. van Bijnen, authorP. Schindler, authorP. Zoller, authorR. Blatt, and authorT. Monz, journalNature volume603, pages604 (year2022). [De Pasquale et al.(2013)De Pasquale, Rossini, Facchi, and Giovannetti]de2013quantum authorA. De Pasquale, authorD. Rossini, authorP. Facchi, and authorV. Giovannetti, journalPhys. Rev. A volume88, pages052117 (year2013). [Pang and Brun(2014)]PhysRevA.90.022117 authorS. Pang and authorT. A. Brun, journalPhys. Rev. A volume90, pages022117 (year2014). [Skotiniotis et al.(2015)Skotiniotis, Sekatski, and Dür]skotiniotis2015quantum authorM. Skotiniotis, authorP. Sekatski, and authorW. Dür, journalNew J. Phys. volume17, pages073032 (year2015). [Raghunandan et al.(2018)Raghunandan, Wrachtrup, and Weimer]raghunandan2018high authorM. Raghunandan, authorJ. Wrachtrup, and authorH. Weimer, journalPhys. Rev. Lett. volume120, pages150501 (year2018). [Heugel et al.(2019)Heugel, Biondi, Zilberberg, and Chitra]heugel2019quantum authorT. L. Heugel, authorM. Biondi, authorO. Zilberberg, and authorR. Chitra, journalPhys. Rev. Lett. volume123, pages173601 (year2019). [Yang and Jacob(2019)]yang2019engineering authorL.-P. Yang and authorZ. Jacob, journalJ. Appl. Phys. volume126 (year2019). [Ding et al.(2022)Ding, Liu, Shi, Guo, Mølmer, and Adams]ding2022enhanced authorD.-S. Ding, authorZ.-K. Liu, authorB.-S. Shi, authorG.-C. Guo, authorK. Mølmer, and authorC. S. Adams, journalNat. Phys. volume18, pages1447 (year2022). [Zanardi and Paunković(2006)]zanardi2006ground authorP. Zanardi and authorN. Paunković, journalPhys. Rev. E volume74, pages031123 (year2006). 
[Zanardi et al.(2007)Zanardi, Quan, Wang, and Sun]zanardi2007mixed authorP. Zanardi, authorH. Quan, authorX. Wang, and authorC. Sun, journalPhys. Rev. A volume75, pages032109 (year2007). [Gu et al.(2008)Gu, Kwok, Ning, Lin et al.]gu2008fidelity authorS.-J. Gu, authorH.-M. Kwok, authorW.-Q. Ning, authorH.-Q. Lin, et al., journalPhys. Rev. B volume77, pages245109 (year2008). [Zanardi et al.(2008)Zanardi, Paris, and Venuti]zanardi2008quantum authorP. Zanardi, authorM. G. Paris, and authorL. C. Venuti, journalPhys. Rev. A volume78, pages042105 (year2008). [Invernizzi et al.(2008)Invernizzi, Korbman, Venuti, and Paris]invernizzi2008optimal authorC. Invernizzi, authorM. Korbman, authorL. C. Venuti, and authorM. G. Paris, journalPhys. Rev. A volume78, pages042106 (year2008). [Gu(2010)]gu2010fidelity authorS.-J. Gu, journalInt. J. Mod. Phys. B volume24, pages4371 (year2010). [Gammelmark and Mølmer(2011)]gammelmark2011phase authorS. Gammelmark and authorK. Mølmer, journalNew J. Phys. volume13, pages053035 (year2011). [Rams et al.(2018)Rams, Sierant, Dutta, Horodecki, and Zakrzewski]rams2018limits authorM. M. Rams, authorP. Sierant, authorO. Dutta, authorP. Horodecki, and authorJ. Zakrzewski, journalPhys. Rev. X volume8, pages021022 (year2018). [Wei(2019)]wei2019fidelity authorB.-B. Wei, journalPhys. Rev. A volume99, pages042117 (year2019). [Chu et al.(2021)Chu, Zhang, Yu, and Cai]chu2021dynamic authorY. Chu, authorS. Zhang, authorB. Yu, and authorJ. Cai, journalPhys. Rev. Lett. volume126, pages010502 (year2021). [Liu et al.(2021)Liu, Chen, Jiang, Yang, Wu, Li, Yuan, Peng, and Du]liu2021experimental authorR. Liu, authorY. Chen, authorM. Jiang, authorX. Yang, authorZ. Wu, authorY. Li, authorH. Yuan, authorX. Peng, and authorJ. Du, journalnpj Quantum Inf. volume7, pages170 (year2021). [Montenegro et al.(2021)Montenegro, Mishra, and Bayat]montenegro2021global authorV. Montenegro, authorU. Mishra, and authorA. Bayat, journalPhys. Rev. Lett. volume126, pages200501 (year2021). [Mirkhalaf et al.(2021)Mirkhalaf, Orenes, Mitchell, and Witkowska]mirkhalaf2021criticality authorS. S. Mirkhalaf, authorD. B. Orenes, authorM. W. Mitchell, and authorE. Witkowska, journalPhys. Rev. A volume103, pages023317 (year2021). [Di Candia et al.(2023)Di Candia, Minganti, Petrovnin, Paraoanu, and Felicetti]di2023critical authorR. Di Candia, authorF. Minganti, authorK. Petrovnin, authorG. Paraoanu, and authorS. Felicetti, journalnpj Quantum Inf. volume9, pages23 (year2023). [Mishra and Bayat(2021)]mishra2021driving authorU. Mishra and authorA. Bayat, journalPhys. Rev. Lett. volume127, pages080504 (year2021). [Mishra and Bayat(2022)]mishra2022integrable authorU. Mishra and authorA. Bayat, journalSci. Rep. volume12, pages14760 (year2022). [Baumann et al.(2010)Baumann, Guerlin, Brennecke, and Esslinger]baumann2010dicke authorK. Baumann, authorC. Guerlin, authorF. Brennecke, and authorT. Esslinger, journalNature volume464, pages1301 (year2010). [Baden et al.(2014)Baden, Arnold, Grimsmo, Parkins, and Barrett]baden2014realization authorM. P. Baden, authorK. J. Arnold, authorA. L. Grimsmo, authorS. Parkins, and authorM. D. Barrett, journalPhys. Rev. Lett. volume113, pages020408 (year2014). [Klinder et al.(2015)Klinder, Keßler, Wolke, Mathey, and Hemmerich]klinder2015dynamical authorJ. Klinder, authorH. Keßler, authorM. Wolke, authorL. Mathey, and authorA. Hemmerich, journalProc. Natl. Acad. Sci. U.S.A. volume112, pages3290 (year2015). 
[Rodriguez et al.(2017)Rodriguez, Casteels, Storme, Zambon, Sagnes, Le Gratiet, Galopin, Lemaître, Amo, Ciuti et al.]rodriguez2017probing authorS. Rodriguez, authorW. Casteels, authorF. Storme, authorN. C. Zambon, authorI. Sagnes, authorL. Le Gratiet, authorE. Galopin, authorA. Lemaître, authorA. Amo, authorC. Ciuti, et al., journalPhys. Rev. Lett. volume118, pages247402 (year2017). [Fitzpatrick et al.(2017)Fitzpatrick, Sundaresan, Li, Koch, and Houck]fitzpatrick2017observation authorM. Fitzpatrick, authorN. M. Sundaresan, authorA. C. Li, authorJ. Koch, and authorA. A. Houck, journalPhys. Rev. X volume7, pages011016 (year2017). [Fink et al.(2017)Fink, Dombi, Vukics, Wallraff, and Domokos]fink2017observation authorJ. M. Fink, authorA. Dombi, authorA. Vukics, authorA. Wallraff, and authorP. Domokos, journalPhys. Rev. X volume7, pages011012 (year2017). [Ilias et al.(2022)Ilias, Yang, Huelga, and Plenio]ilias2022criticality authorT. Ilias, authorD. Yang, authorS. F. Huelga, and authorM. B. Plenio, journalPRX Quantum volume3, pages010354 (year2022). [Montenegro et al.(2023)Montenegro, Genoni, Bayat, and Paris]montenegro2023quantum authorV. Montenegro, authorM. Genoni, authorA. Bayat, and authorM. Paris, journalarXiv:2301.02103 (year2023). [Iemini et al.(2023)Iemini, Fazio, and Sanpera]iemini2023floquet authorF. Iemini, authorR. Fazio, and authorA. Sanpera, journalarXiv:2306.03927 (year2023). [Budich and Bergholtz(2020)]budich2020non authorJ. C. Budich and authorE. J. Bergholtz, journalPhys. Rev. Lett. volume125, pages180403 (year2020). [Sarkar et al.(2022)Sarkar, Mukhopadhyay, Alase, and Bayat]sarkar2022free authorS. Sarkar, authorC. Mukhopadhyay, authorA. Alase, and authorA. Bayat, journalPhys. Rev. Lett. volume129, pages090503 (year2022). [Koch and Budich(2022)]koch2022quantum authorF. Koch and authorJ. C. Budich, journalPhys. Rev. Research volume4, pages013113 (year2022). [Yu et al.(2022)Yu, Li, Chu, Mera, Ünal, Yang, Liu, Goldman, and Cai]yu2022experimental authorM. Yu, authorX. Li, authorY. Chu, authorB. Mera, authorF. N. Ünal, authorP. Yang, authorY. Liu, authorN. Goldman, and authorJ. Cai, journalarXiv:2206.00546 (year2022). [Sahoo et al.(2023)Sahoo, Mishra, and Rakshit]sahoo2023localization authorA. Sahoo, authorU. Mishra, and authorD. Rakshit, journalarXiv:2305.02315 (year2023). [He et al.(2023)He, Yousefjani, and Bayat]he2023stark authorX. He, authorR. Yousefjani, and authorA. Bayat, journalPhys. Rev. Lett. volume131, pages010801 (year2023). [Wiseman(1995)]wiseman1995adaptive authorH. M. Wiseman, journalPhys. Rev. Lett. volume75, pages4587 (year1995). [Armen et al.(2002)Armen, Au, Stockton, Doherty, and Mabuchi]armen2002adaptive authorM. A. Armen, authorJ. K. Au, authorJ. K. Stockton, authorA. C. Doherty, and authorH. Mabuchi, journalPhys. Rev. Lett. volume89, pages133602 (year2002). [Fujiwara(2006)]fujiwara2006strong authorA. Fujiwara, journalJ. Phys. A Math. Gen. volume39, pages12489 (year2006). [Higgins et al.(2007)Higgins, Berry, Bartlett, Wiseman, and Pryde]higgins2007entanglement authorB. L. Higgins, authorD. W. Berry, authorS. D. Bartlett, authorH. M. Wiseman, and authorG. J. Pryde, journalNature volume450, pages393 (year2007). [Berry et al.(2009)Berry, Higgins, Bartlett, Mitchell, Pryde, and Wiseman]berry2009perform authorD. W. Berry, authorB. L. Higgins, authorS. D. Bartlett, authorM. W. Mitchell, authorG. J. Pryde, and authorH. M. Wiseman, journalPhys. Rev. A volume80, pages052114 (year2009). [Said et al.(2011)Said, Berry, and Twamley]said2011nanoscale authorR. 
Said, authorD. Berry, and authorJ. Twamley, journalPhys. Rev. B volume83, pages125410 (year2011). [Okamoto et al.(2012)Okamoto, Iefuji, Oyama, Yamagata, Imai, Fujiwara, and Takeuchi]okamoto2012experimental authorR. Okamoto, authorM. Iefuji, authorS. Oyama, authorK. Yamagata, authorH. Imai, authorA. Fujiwara, and authorS. Takeuchi, journalPhys. Rev. Lett. volume109, pages130404 (year2012). [Bonato et al.(2016)Bonato, Blok, Dinani, Berry, Markham, Twitchen, and Hanson]bonato2016optimized authorC. Bonato, authorM. S. Blok, authorH. T. Dinani, authorD. W. Berry, authorM. L. Markham, authorD. J. Twitchen, and authorR. Hanson, journalNat. Nanotechnol. volume11, pages247 (year2016). [Okamoto et al.(2017)Okamoto, Oyama, Yamagata, Fujiwara, and Takeuchi]okamoto2017experimental authorR. Okamoto, authorS. Oyama, authorK. Yamagata, authorA. Fujiwara, and authorS. Takeuchi, journalPhys. Rev. A volume96, pages022124 (year2017). [Fernández-Lorenzo and Porras(2017)]fernandez2017quantum authorS. Fernández-Lorenzo and authorD. Porras, journalPhys. Rev. A volume96, pages013817 (year2017). [Albarelli et al.(2017)Albarelli, Rossi, Paris, and Genoni]albarelli2017ultimate authorF. Albarelli, authorM. A. Rossi, authorM. G. Paris, and authorM. G. Genoni, journalNew J. Phys. volume19, pages123011 (year2017). [Gammelmark and Mølmer(2014)]gammelmark2014fisher authorS. Gammelmark and authorK. Mølmer, journalPhys. Rev. Lett. volume112, pages170401 (year2014). [Rossi et al.(2020)Rossi, Albarelli, Tamascelli, and Genoni]rossi2020noisy authorM. A. Rossi, authorF. Albarelli, authorD. Tamascelli, and authorM. G. Genoni, journalPhys. Rev. Lett. volume125, pages200505 (year2020). [Yang et al.(2022a)Yang, Huelga, and Plenio]yang2022efficient authorD. Yang, authorS. F. Huelga, and authorM. B. Plenio, journalarXiv:2209.08777 (year2022a). [Burgarth et al.(2015)Burgarth, Giovannetti, Kato, and Yuasa]burgarth2015quantum authorD. Burgarth, authorV. Giovannetti, authorA. N. Kato, and authorK. Yuasa, journalNew J. Phys. volume17, pages113055 (year2015). [Montenegro et al.(2022)Montenegro, Jones, Bose, and Bayat]montenegro2022sequential authorV. Montenegro, authorG. S. Jones, authorS. Bose, and authorA. Bayat, journalPhys. Rev. Lett. volume129, pages120503 (year2022). [Morong et al.(2021)Morong, Liu, Becker, Collins, Feng, Kyprianidis, Pagano, You, Gorshkov, and Monroe]morong2021observation authorW. Morong, authorF. Liu, authorP. Becker, authorK. Collins, authorL. Feng, authorA. Kyprianidis, authorG. Pagano, authorT. You, authorA. Gorshkov, and authorC. Monroe, journalNature volume599, pages393 (year2021). [Smith et al.(2016)Smith, Lee, Richerme, Neyenhuis, Hess, Hauke, Heyl, Huse, and Monroe]smith2016many authorJ. Smith, authorA. Lee, authorP. Richerme, authorB. Neyenhuis, authorP. W. Hess, authorP. Hauke, authorM. Heyl, authorD. A. Huse, and authorC. Monroe, journalNat. Phys. volume12, pages907 (year2016). [Rajabi et al.(2019)Rajabi, Motlakunta, Shih, Kotibhaskar, Quraishi, Ajoy, and Islam]rajabi2019dynamical authorF. Rajabi, authorS. Motlakunta, authorC.-Y. Shih, authorN. Kotibhaskar, authorQ. Quraishi, authorA. Ajoy, and authorR. Islam, journalnpj Quantum Inf. volume5, pages32 (year2019). [Choi et al.(2016)Choi, Hild, Zeiher, Schauß, Rubio-Abadal, Yefsah, Khemani, Huse, Bloch, and Gross]choi2016exploring authorJ.-y. Choi, authorS. Hild, authorJ. Zeiher, authorP. Schauß, authorA. Rubio-Abadal, authorT. Yefsah, authorV. Khemani, authorD. A. Huse, authorI. Bloch, and authorC. Gross, journalScience volume352, pages1547 (year2016). 
[Rispoli et al.(2019)Rispoli, Lukin, Schittko, Kim, Tai, Léonard, and Greiner]rispoli2019quantum authorM. Rispoli, authorA. Lukin, authorR. Schittko, authorS. Kim, authorM. E. Tai, authorJ. Léonard, and authorM. Greiner, journalNature volume573, pages385 (year2019). [Garbe et al.(2022)Garbe, Abah, Felicetti, and Puebla]garbe2022critical authorL. Garbe, authorO. Abah, authorS. Felicetti, and authorR. Puebla, journalQuantum Sci. Technol. volume7, pages035010 (year2022). [Yang et al.(2022b)Yang, Pang, del Campo, and Jordan]Kitaev authorJ. Yang, authorS. Pang, authorA. del Campo, and authorA. N. Jordan, journalPhys. Rev. Research volume4, pages013133 (year2022b). [Waddington et al.(2020)Waddington, Boele, Maschmeyer, Kuncic, and Rosen]waddington2020high authorD. E. Waddington, authorT. Boele, authorR. Maschmeyer, authorZ. Kuncic, and authorM. S. Rosen, journalSci. Adv. volume6, pageseabb0998 (year2020). [Koonjoo et al.(2021)Koonjoo, Zhu, Bagnall, Bhutto, and Rosen]koonjoo2021boosting authorN. Koonjoo, authorB. Zhu, authorG. C. Bagnall, authorD. Bhutto, and authorM. Rosen, journalSci. Rep. volume11, pages8248 (year2021). [Snadden et al.(1998)Snadden, McGuirk, Bouyer, Haritos, and Kasevich]snadden1998measurement authorM. Snadden, authorJ. McGuirk, authorP. Bouyer, authorK. Haritos, and authorM. Kasevich, journalPhys. Rev. Lett. volume81, pages971 (year1998). [Griggs et al.(2017)Griggs, Moody, Norton, Paik, and Venkateswara]griggs2017sensitive authorC. Griggs, authorM. Moody, authorR. Norton, authorH. Paik, and authorK. Venkateswara, journalPhys. Rev. Appl. volume8, pages064024 (year2017). [Stray et al.(2022)Stray, Lamb, Kaushik, Vovrosh, Rodgers, Winch, Hayati, Boddice, Stabrawa, Niggebaum et al.]stray2022quantum authorB. Stray, authorA. Lamb, authorA. Kaushik, authorJ. Vovrosh, authorA. Rodgers, authorJ. Winch, authorF. Hayati, authorD. Boddice, authorA. Stabrawa, authorA. Niggebaum, et al., journalNature volume602, pages590 (year2022). [Phillips et al.(2022)Phillips, Wright, Riou, Maddox, Maskell, and Ralph]phillips2022position authorA. M. Phillips, authorM. J. Wright, authorI. Riou, authorS. Maddox, authorS. Maskell, and authorJ. F. Ralph, journalAVS Quantum Sci. volume4 (year2022). [Goda et al.(2008)Goda, Miyakawa, Mikhailov, Saraf, Adhikari, McKenzie, Ward, Vass, Weinstein, and Mavalvala]GravitionalWave1 authorK. Goda, authorO. Miyakawa, authorE. E. Mikhailov, authorS. Saraf, authorR. Adhikari, authorK. McKenzie, authorR. Ward, authorS. Vass, authorA. J. Weinstein, and authorN. Mavalvala, journalNature Physics volume4, pages472 (year2008). [Dimopoulos et al.(2009)Dimopoulos, Graham, Hogan, Kasevich, and Rajendran]GravitionalWave2 authorS. Dimopoulos, authorP. W. Graham, authorJ. M. Hogan, authorM. A. Kasevich, and authorS. Rajendran, journalPhys. Lett. B volume678, pages37 (year2009). [Asenbaum et al.(2020)Asenbaum, Overstreet, Kim, Curti, and Kasevich]asenbaum2020atom authorP. Asenbaum, authorC. Overstreet, authorM. Kim, authorJ. Curti, and authorM. A. Kasevich, journalPhys. Rev. Lett. volume125, pages191101 (year2020). [Parker et al.(2018)Parker, Yu, Zhong, Estey, and Müller]parker2018measurement authorR. H. Parker, authorC. Yu, authorW. Zhong, authorB. Estey, and authorH. Müller, journalScience volume360, pages191 (year2018). [Rosi et al.(2014)Rosi, Sorrentino, Cacciapuoti, Prevedelli, and Tino]rosi2014precision authorG. Rosi, authorF. Sorrentino, authorL. Cacciapuoti, authorM. Prevedelli, and authorG. Tino, journalNature volume510, pages518 (year2014). 
[Meyer(2021)]meyer2021fisher authorJ. J. Meyer, journalQuantum volume5, pages539 (year2021). [Campos Venuti and Zanardi(2007)]FidelitySuscep1 authorL. Campos Venuti and authorP. Zanardi, journalPhys. Rev. Lett. volume99, pages095701 (year2007). [Schwandt et al.(2009)Schwandt, Alet, and Capponi]FidelitySuscep2 authorD. Schwandt, authorF. Alet, and authorS. Capponi, journalPhys. Rev. Lett. volume103, pages170501 (year2009). [Albuquerque et al.(2010)Albuquerque, Alet, Sire, and Capponi]FidelitySuscep3 authorA. F. Albuquerque, authorF. Alet, authorC. Sire, and authorS. Capponi, journalPhys. Rev. B volume81, pages064418 (year2010). [You et al.(2007)You, Li, and Gu]FidelitySuscep4 authorW.-L. You, authorY.-W. Li, and authorS.-J. Gu, journalPhys. Rev. E volume76, pages022101 (year2007). [Kolovsky(2008)]kolovsky2008interplay authorA. R. Kolovsky, journalPhys. Rev. Lett. volume101, pages190602 (year2008). [van Nieuwenburg et al.(2019)van Nieuwenburg, Baum, and Refael]van2019bloch authorE. van Nieuwenburg, authorY. Baum, and authorG. Refael, journalProc. Natl. Acad. Sci. U.S.A. volume116, pages9269 (year2019). [Schulz et al.(2019)Schulz, Hooley, Moessner, and Pollmann]schulz2019stark authorM. Schulz, authorC. Hooley, authorR. Moessner, and authorF. Pollmann, journalPhys. Rev. Lett. volume122, pages040606 (year2019). [Yao and Zakrzewski(2020)]yao2020many authorR. Yao and authorJ. Zakrzewski, journalPhys. Rev. B volume102, pages104203 (year2020). [Chanda et al.(2020)Chanda, Yao, and Zakrzewski]chanda2020coexistence authorT. Chanda, authorR. Yao, and authorJ. Zakrzewski, journalPhys. Rev. Research volume2, pages032039 (year2020). [Luitz et al.(2015)Luitz, Laflorencie, and Alet]luitz2015many authorD. J. Luitz, authorN. Laflorencie, and authorF. Alet, journalPhys. Rev. B volume91, pages081103 (year2015). [Hauschild and Pollmann(2018)]tenpy authorJ. Hauschild and authorF. Pollmann, journalSciPost Phys. Lect. Notes p. pages5 (year2018). [Melchert(2009)]melchert2009autoscalepy authorO. Melchert, journalarXiv:0910.5403 (year2009). [Sorge(2015)]andreas_sorge_2015_35293 authorA. Sorge, titlepyfssa 0.7.6 (year2015), <10.5281/zenodo.35293>.
http://arxiv.org/abs/2307.04099v1
20230709052131
GNP Attack: Transferable Adversarial Examples via Gradient Norm Penalty
[ "Tao Wu", "Tie Luo", "Donald C. Wunsch" ]
cs.LG
[ "cs.LG", "cs.CR", "cs.CV" ]
Adversarial examples (AE) with good transferability enable practical black-box attacks on diverse target models, where insider knowledge about the target models is not required. Previous methods often generate AE with no or very limited transferability; that is, they easily overfit to the particular architecture and feature representation of the source, white-box model, and the generated AE barely work for target, black-box models. In this paper, we propose a novel approach to enhance AE transferability using Gradient Norm Penalty (GNP). It drives the loss function optimization procedure to converge to a flat region of local optima in the loss landscape. By attacking 11 state-of-the-art (SOTA) deep learning models and 6 advanced defense methods, we empirically show that GNP is very effective in generating AE with high transferability. We also demonstrate that it is very flexible in that it can be easily integrated with other gradient-based methods for stronger transfer-based attacks. Adversarial machine learning, Transferability, Deep neural networks, Input gradient regularization § INTRODUCTION Deep Neural Networks (DNNs) are the workhorse of a broad variety of computer vision tasks but are vulnerable to adversarial examples (AE), which are data samples (typically images) that are perturbed by human-imperceptible noise yet result in odd misclassifications. This lack of adversarial robustness curtails, and often even prevents, deep learning models from being deployed in security- or safety-critical domains such as healthcare, neuroscience, finance, and self-driving cars, to name a few. Adversarial examples are commonly studied under two settings: white-box and black-box attacks. In the white-box setting, adversaries have full knowledge of victim models, including model structures, parameters and weights, and the loss functions used to train the models. Therefore, they can directly obtain the gradients of the victim models and seek adversarial examples by misleading the loss function toward incorrect predictions. White-box attacks are important for evaluating and developing robust models and serve as the backend method for many black-box attacks, but they are limited in use because they require knowledge of the internal details of the target models. In the black-box setting, adversaries do not need specific knowledge about victim models other than their external properties (type of input and output). Two types of approaches, query-based and transfer-based, are commonly studied for black-box attacks. The query-based approach attempts to estimate the gradients of a victim model by querying it with a large number of input samples and inspecting the outputs. Due to the large number of queries, it can be easily detected and defended against. The transfer-based approach uses surrogate models to generate transferable AE which can attack a range of models instead of a single victim model. Hence it is a more attractive approach to black-box attacks. This paper takes the second approach and focuses on designing a new and effective method to improve the transferability of AE. Several directions for boosting adversarial transferability have appeared. Dong et al. <cit.> proposed momentum based methods. Attention-guided transfer attack (ATA) <cit.> uses attention maps to identify common features for attacking.
Diverse Input Method (DIM) <cit.> calculates the average gradients of augmented images. <cit.> generates transferable AE using an ensemble of multiple models. Despite the efforts of previous works, there still exists a large gap of attack success rate between the transfer-based setting and the ideal white-box setting. In this paper, we propose a novel method to boost adversarial transferability from an optimization perspective. Inspired by the concept of “flat minima” in the optimization theory <cit.> which improves the generalization of DNNs, we seek to generate AE that lie in flat regions where the input gradient norm is small, so as to “generalize” to other victim models that AE are not generated on. In a nutshell, this work makes the following contributions: * We propose a transfer-based black-box attack from a new perspective that seeks AE in a flat region of loss landscape by penalizing the input gradient norm. * We show that our method, input gradient norm penalty (GNP), can significantly boost the adversarial transferability for a wide range of deep networks. * We demonstrate that GNP can be easily integrated with existing transfer-based attacks to produce even better performance, indicating a highly desirable flexibility. § METHOD Given a classification model f(x): x ∈𝒳→ y ∈𝒴 that outputs a label y as the prediction for an input x, we aim to craft an adversarial example x^* which is visually indistinguishable from x but will be misclassified by the classifier, i.e., f(x^*) ≠ y. The generation of AE can be formulated as the following optimization problem: max _x^*ℓ(x^*, y), s.t. x^*-x_p ≤ϵ, where the loss function ℓ(·, ·) is often the cross-entropy loss, and the ł_p-norm measures the discrepancy between x and x^*. In this work, we use p=∞ which is commonly adopted in the literature. Optimizing Eq. (<ref>) needs to calculate the gradient of the loss function, but this is not feasible in the black-box setting. Therefore, we aim to create transferable AE on a source model yet can attack many other target models. We develop a new method to boost adversarial transferability from a perspective inspired by “flat optima” in optimization theory. See Fig. <ref>. If an AE is located at a sharp local maximum, it will be sensitive to the difference of decision boundaries between the source model and target models. In contrast, if it is located at a flat maximum region, it is much more likely to result in a similar high loss on other models (which is desired). Thus, we seek to generate AE in flat regions. To this end, we introduce a gradient norm penalty (GNP) term into the loss function, which penalizes the gradient norm of the loss function with respect to input. The reason is that flat regions are characterized by small gradient norms, hence penalizing the gradient norm will encourage the optimizer to find an AE that lies in a flat region. We thus enhance the adversarial transferability since a minor shift of decision boundary will not significantly change the loss value (prior work has shown that different networks often share similar decision boundaries). §.§ Baseline Attacks GNP is a very flexible method in that it can be easily incorporated into any existing gradient based method to boost its strength. We consider the following existing, gradient based attacks to demonstrate the effect of GNP. Later in sec:experiments, we will also show how GNP works effectively on state-of-the-art transfer-based attacks as well. Fast Gradient Sign Method (FGSM). 
FGSM <cit.> is the first gradient-based attack, which crafts an AE x^adv by attempting to maximize the loss function ℓ(x^adv, y; θ) with a one-step update: x^adv = x + ϵ·sign(∇_x ℓ(x, y; θ)), where ∇_x ℓ(x, y; θ) is the gradient of the loss function with respect to x, and sign(·) denotes the sign function. Iterative Fast Gradient Sign Method (I-FGSM). I-FGSM extends FGSM to an iterative version: x_t+1^adv = x_t^adv + α·sign(∇_x_t^advℓ(x_t^adv, y; θ)), x_0^adv = x, where α=ϵ / T is a small step size and T is the number of iterations. Momentum Iterative Fast Gradient Sign Method (MI-FGSM). MI-FGSM <cit.> integrates a momentum term into I-FGSM and improves transferability by a large margin: g_t+1 = μ· g_t + ∇_x_t^advℓ(x_t^adv, y; θ)/‖∇_x_t^advℓ(x_t^adv, y; θ)‖_1, x_t+1^adv = x_t^adv + α·sign(g_t+1), where g_0 = 0 and μ is a decay factor. §.§ GNP Attack As explained above, we aim to guide the loss function optimization process to move into a flat local optimal region. To this end, we introduce GNP to penalize a large gradient norm, as L(x, y) = ℓ(x, y) - λ‖∇_x ℓ(x, y)‖_2, where ℓ(·) is the original loss function of the source model, and the regularization term is our GNP, which encourages a small gradient norm when finding local maxima. For gradient based attacks (e.g., FGSM, I-FGSM, MI-FGSM, etc.), we need to calculate the gradient of the new loss (<ref>). To simplify notation, we omit y in the loss function since we are calculating the gradient with respect to x. Using the chain rule, we have ∇_x L(x) = ∇_xℓ(x) - λ∇_x^2 ℓ(x) ∇_xℓ(x)/‖∇_xℓ(x)‖. This equation involves the calculation of the Hessian matrix H = ∇_x^2 ℓ(x). This is often infeasible because of the curse of dimensionality (such a Hessian matrix in DNNs tends to be too large due to the often large input dimension). Therefore, we take the first-order Taylor expansion together with the finite difference method (FDM) to approximate the following gradient: ∇_x ℓ(x+rΔx) ≈∇_xℓ(x) + H rΔx, where Δx = ∇_x ℓ(x)/‖∇_xℓ(x)‖, and r is the step length to control the neighborhood size. Thus we obtain the regularization term of (<ref>) as: H∇_xℓ(x)/‖∇_xℓ(x)‖≈ [∇_xℓ(x + r ∇_xℓ(x)/‖∇_xℓ(x)‖) - ∇_xℓ(x)]/r. Inserting (<ref>) back into (<ref>), we obtain the gradient of the regularized loss function as: ∇_x L(x) = (1+β) ∇_xℓ(x) - β∇_xℓ(x + r ∇_xℓ(x)/‖∇_xℓ(x)‖), where β=λ/r is the regularization coefficient. We summarize the algorithm of how GNP is integrated into I-FGSM in Algorithm <ref> (a minimal code sketch is also given below), but I-FGSM can be replaced by any gradient based attack. § EXPERIMENTS §.§ Experiment Setup Dataset and models. We randomly sample 5,000 test images that can be correctly classified by all the models from the ImageNet <cit.> validation set. We consider 11 SOTA DNN-based image classifiers: ResNet50 <cit.>, VGG-19 <cit.>, ResNet-152 <cit.>, Inc v3 <cit.>, DenseNet <cit.>, MobileNet v2 <cit.>, SENet <cit.>, ResNeXt <cit.>, WRN <cit.>, PNASNet <cit.>, and MNASNet <cit.>. Following the work in <cit.>, we choose ResNet50 as the source model and the remaining 10 models as target models. Implementation Details. In the experiments, the pixel values of all images are scaled to [0, 1]. The adversarial perturbation is restricted by 3 scales ϵ=4/255, 8/255, 16/255. The step length is set as r=0.01 and the regularization coefficient as β=0.8. We run 100 iterations for all attacks and evaluate model misclassification as the attack success rate. §.§ Experimental Results §.§.§ Integration with baseline attacks We first evaluate the performance of GNP by integrating it with baseline attacks including I-FGSM and MI-FGSM.
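The update rule above can be made concrete with a short implementation. The following is a minimal PyTorch-style sketch of GNP integrated into I-FGSM, written under stated assumptions (a generic image classifier `model` mapping inputs in [0, 1] to logits, cross-entropy loss, and an ℓ∞ projection); it is an illustration of the idea, not the authors' reference implementation, and the hyper-parameter defaults merely echo the values quoted in the setup.

```python
import torch
import torch.nn.functional as F

def gnp_ifgsm(model, x, y, eps=8/255, steps=100, r=0.01, beta=0.8):
    """I-FGSM with the Gradient Norm Penalty (GNP) correction (sketch)."""
    alpha = eps / steps                      # step size α = ε / T
    x_adv = x.clone().detach()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        # ∇_x ℓ(x) at the current iterate
        loss = F.cross_entropy(model(x_adv), y)
        g1 = torch.autograd.grad(loss, x_adv)[0]

        # Unit-norm direction Δx = ∇ℓ / ||∇ℓ|| (per sample)
        norm = g1.flatten(1).norm(p=2, dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = g1 / norm

        # ∇_x ℓ evaluated at the shifted point x + r·Δx
        x_shift = (x_adv + r * delta).detach().requires_grad_(True)
        loss_shift = F.cross_entropy(model(x_shift), y)
        g2 = torch.autograd.grad(loss_shift, x_shift)[0]

        # GNP gradient: (1 + β)∇ℓ(x) − β∇ℓ(x + rΔx)
        g = (1 + beta) * g1 - beta * g2

        # Signed ascent step, then project back to the ε-ball and [0, 1]
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)

    return x_adv.detach()
```

Swapping the plain gradient for the momentum accumulator of MI-FGSM, or adding the input transformations of DIM/TIM, would give the combined attacks evaluated next.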
The results are shown in Table 1. We use a pre-trained ResNet50 as the source model and evaluate the attack success rate (ASR) of the generated AE on a variety of target models under different scales of perturbation ϵ. GNP achieves significant and consistent improvement in all the cases. For instance, taking the average ASR over all the 10 target models under perturbation ϵ = 8/255, GNP outperforms I-FGSM and MI-FGSM by 26.51% and 13.67%, respectively. In addition, the improvement in attack success rate on a single target model can be as large as 33.06%. §.§.§ Integration with existing transfer-based attacks Here we also evaluate the effectiveness of GNP when incorporated into other transfer-based attacks such as DIM <cit.> and TIM <cit.>. The results are given in Table 2 and show that DIM+GNP and TIM+GNP are clear winners over DIM and TIM alone, respectively. Specifically, DIM+GNP achieves an average success rate of 91.95% under ϵ = 16/255 for the 10 target models, and TIM+GNP outperforms TIM by a large margin of 16.28% under ϵ = 8/255. We note that we only present the integration of GNP with two typical methods here, but our method also applies to other, more powerful gradient-based attack methods. §.§.§ Attacking “secured” models For a more thorough evaluation, we also investigate how GNP performs when attacking DNN models that have been adversarially trained (and hence are much harder to attack). We choose three such advanced defense methods to attack, namely JPEG <cit.>, R&P <cit.> and NRP <cit.>. In addition, we choose another three ensemble adversarially trained (AT) models, which are even harder to attack than regular AT models: Inc-v3_ens3, Inc-v3_ens4 and IncRes-v2_ens1 <cit.>. We craft AE on the ResNet50 surrogate model with ϵ=16/255, and use DIM+TIM as the “backbone” to apply GNP. The results are presented in Table 3, where we can see that GNP again boosts ASR significantly against the six “secured” models, achieving consistent performance improvements of 11.46–14.37%. §.§ Ablation Study We conduct an ablation study on the hyper-parameters of the proposed GNP attack, i.e., the step length r and the regularization coefficient β. Since r represents the radius of the neighborhood that should be flat around the current AE, a larger r is preferred; on the other hand, setting it too large will increase the approximation error of the Taylor expansion and thus mislead the AE update direction. The coefficient β balances the goals of fooling the surrogate model and finding flat optima. The ablation figure reports the results, where ASR is averaged over 10 target models (excluding the source ResNet50) attacked by I-FGSM + GNP with ϵ=8/255. We observe that adding the GNP regularization term clearly improves performance (as compared to β=0) and the performance gain is rather consistent for β in a wide range of 0.6–1.6. The step length r does not affect the performance gain much either, and r=0.01 seems to be the most stable. Thus, the ablation study reveals that GNP is not hyper-parameter sensitive and works well in a variety of conditions. § CONCLUSION In this paper, we have proposed a new method for improving the transferability of AE from an optimization perspective, by seeking AE located at flat optima. We achieve this by introducing an input gradient norm penalty (GNP) which guides the AE search toward flat regions of the loss function. This GNP method is very flexible, as it can be used with any gradient-based AE generation method.
We conduct a comprehensive experimental study and demonstrate that our method can boost the transferability of AE significantly. This paper focuses on untargeted attacks, but GNP can be easily applied to targeted attacks as well by making a small change to the loss function. We plan to conduct a thorough investigation in future work.
http://arxiv.org/abs/2307.10200v1
20230709023156
Disentangling Societal Inequality from Model Biases: Gender Inequality in Divorce Court Proceedings
[ "Sujan Dutta", "Parth Srivastava", "Vaishnavi Solunke", "Swaprava Nath", "Ashiqur R. KhudaBukhsh" ]
cs.CY
[ "cs.CY", "cs.AI", "cs.CL", "cs.LG" ]
Divorce is the legal dissolution of a marriage by a court. Since this is usually an unpleasant outcome of a marital union, each party may have reasons for calling it quits, and these reasons are generally documented in detail in the court proceedings. Via a substantial corpus of 17,306 court proceedings, this paper investigates gender inequality through the lens of divorce court proceedings. While emerging data sources (e.g., public court records) on sensitive societal issues hold promise in aiding social science research, biases present in cutting-edge natural language processing (NLP) methods may interfere with or affect such studies. We thus require a thorough analysis of potential gaps and limitations present in extant NLP resources. In this paper, on the methodological side, we demonstrate that existing NLP resources required several non-trivial modifications to quantify societal inequalities. On the substantive side, we find that while a large number of court cases perhaps suggest changing norms in India where women are increasingly challenging patriarchy, AI-powered analyses of these court proceedings indicate striking gender inequality, with women often subjected to domestic violence. § INTRODUCTION The 2011 decennial census in India gave its citizens the following choices to select their marital status – never married, separated, divorced, widowed, married. Based on the census data, a study reported some startling facts <cit.>: 1.36 million people in India are divorced, which accounts for 0.24% of the married population and 0.11% of the total population. More women were separated or divorced than men, and the number of separations was almost three times as high as the number of divorces. Divorce, a historically taboo topic in India <cit.>, seldom features in mainstream Indian discourse <cit.>. Recent indications of changing social acceptance of divorcees notwithstanding <cit.>, divorce in India still carries a considerable social stigma <cit.>. How do we quantify gender inequality in Indian divorce? Surveys about divorce often have limited participation and a small sample size <cit.>, perhaps due to the social stigma attached. A vulnerable community – Indian women under conjugal distress – has had limited visibility to social scientists. Via a substantial corpus of 17,306 divorce court proceedings, this paper conducts the first-ever computational analysis of gender inequality in Indian divorce based on public court records. Even though written in English, legal texts are often domain-specific <cit.>. The considerable variation of legal jargon across countries and courts makes domain-specific analysis important. In that vein, Indian legal NLP is an emerging field <cit.>. Most NLP research on legal texts thus far has focused on building robust tools to analyze legal text. Recent research, however, on in-group bias <cit.> and sexual harassment <cit.>, and <Ref> and <Ref> suggest that automated methods to glean social insights from large-scale legal texts merit investigation. Barring a few recent lines of work <cit.>, there is surprisingly little literature on large-scale linguistic analysis of gender bias in India, let alone on legal text zeroing in on divorce.
While emerging data sources (e.g., public court records available on the web) offer opportunities for social scientists to study important and sensitive social issues that previously had limited survey data, applying cutting-edge NLP methods to newer domains requires careful evaluation of the critical question: How much of the (perceived) gender inequality as quantified by the methods truly reflects the corpus, and how much of it is due to the inherent biases of the employed NLP methods? In this paper, we show that the subtleties present in legal text pose unique challenges. Unless we consider them and make non-trivial changes to existing methods, we may end up drawing inaccurate social conclusions. We further show that sophisticated NLP methods built on top of large language models (LLMs) need scrutiny when applied to social inference tasks involving genders. We, in fact, conduct a much broader bias audit of these systems. Our audit reveals that well-known LLMs often exhibit gender bias even on simple subject-verb-object sentence completion tasks. Through a corpus-specific text entailment analysis, we demonstrate that downstream applications such as natural language inference (NLI) systems also exhibit sensitivity to gender. We finally present a novel inconsistency sampling method to mitigate this bias and present our social findings. To summarize, our contributions are the following: Social: We create a substantial corpus of 17,306 divorce court proceedings and conduct the first-ever analysis of gender inequality through the lens of divorce proceedings. While a large number of court cases perhaps suggest changing norms in India where women are increasingly challenging patriarchy <cit.>, our analyses reveal widespread domestic violence, dowry demands, and torture of the bride. Methodological: We address extant gaps and limitations in multiple NLP frameworks. We propose non-trivial modifications to the WEAT framework <cit.> to make it suitable for legal text. We demonstrate a novel application of text entailment <cit.> in quantifying gender inequality. We investigate several potential sources of model bias in NLP resources that can interfere with quantifying gender inequality. We present a novel inconsistency sampling method exploiting counterfactuals to mitigate this bias. § DATASET §.§ Collection We scrape all the publicly available court proceedings containing the word “divorce” between January 1, 2012 and December 31, 2021 from <https://indiankanoon.org/> (hereafter IndianKanoon), an Indian law search engine launched in 2008 and the largest free online repository of the court proceedings of different courts and tribunals of India. Prior computational law research <cit.> and gender-focused social science studies <cit.> have used IndianKanoon as a source of data. We download 86,911 case proceedings containing the word “divorce” from IndianKanoon using its advanced search feature. Filtering based on the keyword is a high-recall approach to obtain relevant cases, with precedent in computational social science research <cit.>. However, the presence of the keyword may not always indicate a divorce court proceeding; for instance, the keyword can be used to describe the marital status of any of the litigants. It can also be used in an altogether different context (e.g., divorced from reality). We use the following heuristic to further refine our dataset. We also look for other divorce-related words and phrases, and check if such occurrences repeat for a minimum threshold (set to 5); a minimal sketch of this heuristic is given below.
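The sketch below illustrates this keyword-counting heuristic. The specific trigger patterns and the helper function are illustrative assumptions (the paper's full word and phrase lists are not reproduced here); only the idea of requiring at least five divorce-related matches per document is taken from the text.

```python
import re

# Illustrative divorce-related triggers; not the paper's exact lists.
PATTERNS = [
    r"\bdivorce[ds]?\b",
    r"\bdissolution of marriage\b",
    r"\bdecree of divorce\b",
]
MIN_HITS = 5  # minimum number of matches required (threshold from the paper)

def looks_like_divorce_case(text: str) -> bool:
    """Return True if the proceeding has enough divorce-related mentions."""
    hits = sum(len(re.findall(p, text, flags=re.IGNORECASE)) for p in PATTERNS)
    return hits >= MIN_HITS

# Usage: keep only proceedings that pass the filter.
# kept = [doc for doc in proceedings if looks_like_divorce_case(doc)]
```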
On a random sample of 100 cases after we apply this cleaning method, a manual inspection reveals that 94 are divorce cases. Hence, our keyword-based filtering is reasonably precise. This pruning step retains 25,635 cases. §.§ Data Pre-processing To quantify gender inequality in court proceedings, we must disambiguate the legal parties – the plaintiff and the defendant – and accurately identify which of the husband and the wife plays which role. Indian legal documents use a wide range of legal terms to denote the plaintiff (e.g., appellant, applicant, complainant, petitioner) and the defendant (e.g., respondent, nonapplicant, opponent). We observe that different courts have different formats (sometimes, multiple formats) to summarize the proceedings. The documents also specify which party in the marriage represents which role in several different ways (e.g., respondent/wife, respondent-wife, respondent aggrieved wife). We write a regular-expression-based pipeline and consolidate such information to identify the gender of the plaintiff and the defendant across all the states. The names and salutations of the plaintiff and defendant also provide gender information. Subcultural naming conventions played a key role in assigning gender to the litigants in some of the cases. For instance, Kaur, meaning princess, is a Punjabi last name only for females <cit.>. Or Ben, meaning sister, is solely used in many female names in Gujarat <cit.>. Dependence information of the litigants also provides gender information.[We did not find a single mention of the corresponding male-referencing form in our dataset.] Of the 25,635 cases, we could unambiguously assign gender to 17,306 cases. For each case, we replace each mention of the litigants with husband or wife accordingly. For example, a proceeding snippet “The plaintiff/wife has filed for a divorce. The plaintiff was married to the defendant for three years.”, will be modified to “The wife has filed for a divorce. The wife was married to the husband for three years.” This data set, 𝒟_divorce, consists of 30,615,754 (30 million) tokens. § BRIEF OVERVIEW OF INDIAN LEGAL SYSTEM The Indian judicial system is largely based on the English Common Law system (where the law is developed by judges through their decisions, orders, and judgments). The nation has 28 states and 8 union territories (UT), and a total of 25 high courts (some high courts have jurisdiction over more than one state or UT). The federal structure has a supreme court coupled with the high courts that roughly handle the cases in a state or UT. The legal cases of divorce are usually handled by the family or district courts. However, some unresolved cases or sometimes fresh cases are also heard by the high courts. Since the court proceedings are public records and are made freely available in digital form by IndianKanoon, we found this dataset to be quite appropriate for a large-scale study on gender equality in court proceedings. § DOWRY IN DIVORCE PROCEEDINGS The dowry system involves a transaction of financial assets between the bride's family and the bridegroom's family, with the latter being the recipient of the financial assets. While legally prohibited in India since 1961 <cit.>, this practice has continued well after its legal prohibition and has a strong link to social crises such as female feticide <cit.>, domestic abuse and violence <cit.>, and dowry deaths <cit.>. In order to protect the bride from marital cruelty and domestic violence, the Indian Penal Code introduced Section 498-A in 1983 <cit.>.
Figure <ref> reflects the relative proportions of divorce cases mentioning dowry and Section 498-A. For each state, we report the fraction of divorce cases that contain at least one mention of these two tokens. A higher intensity color indicates a larger proportion of such cases. We observe that overall, 24.38% of all cases and 21.86% of all cases mention dowry and Section 498-A, respectively. Jacob and Chattopadhyay <cit.> reported that divorce in India does not follow any one-size-fits-all pattern across different states; there exists sufficient interstate variation even for the rate of divorce. We notice a considerable variation in mentions of dowry and Section 498-A across different states, indicating variance in reported cases of dowry or domestic violence. Among the states and the union territories, the top three entries in terms of dowry mentions are Telangana, Delhi, and Bihar, while the top three entries in terms of Section 498-A mentions are Bihar, Telangana, and Andhra Pradesh. Bihar and Telangana have social science literature documenting dowry and domestic violence <cit.>. Apart from the overlap in the top three entries, the statewise dowry and 498-A mentions are moderately correlated (correlation coefficient: 0.67). We next conduct a qualitative analysis of (alleged) dowry demands [This analysis follows the statements made by the plaintiffs]. On a random sample of 100 court proceedings where the (alleged) dowry demand is explicitly recorded, we observe that the estimated demanded amount is ₹393,100 ± ₹544,876. We observe demanded amounts as low as ₹5,000 and as high as ₹3,000,000, which explains the staggeringly high variance in our estimate. This also indicates the broad economic spectrum present in India and how far and wide the system of dowry (allegedly) persists. We further observe that cash is not always the sole demanded financial asset. Gold is the second-most commonly demanded asset. Out of the 100 cases, 34 cases report gold demands (71.2 ± 84.6 gm). When we adjust the valuation of the demanded gold by replacing it with the historical average gold price in India across 2012 and 2021 [Obtained from <https://www.bankbazaar.com/gold-rate/gold-rate-trend-in-india.html>], the estimated (alleged) demanded dowry is ₹474,798 ± ₹567,219. § METHODS OVERVIEW We use two NLP methods to quantify gender inequality: (1) the Word Embedding Association Test (WEAT); and (2) a text entailment framework. A brief description follows. §.§ Word Embedding Based Methods The first metric is the Word Embedding Association Test (WEAT), introduced by <cit.>. To calculate the metric, the words are embedded and vectors a and b are obtained for words a and b, respectively. The cosine similarity of these vectors is denoted by cos(a,b). The metric considers two sets of target words, 𝒳 and 𝒴, and two sets of attribute words, Å and ℬ. Then, the WEAT score is defined as WEAT(𝒳, 𝒴, Å, ℬ) = (mean_x∈𝒳 σ(x, Å, ℬ) - mean_y∈𝒴 σ(y, Å, ℬ)) / std-dev_w∈𝒳∪𝒴 σ(w, Å, ℬ), where σ(w, Å, ℬ) = mean_a∈Å cos(w,a) - mean_b∈ℬ cos(w,b). Intuitively, σ(w, Å, ℬ) measures the association of w with the attribute sets, and the WEAT score measures the differential association of the two sets of target words with the attribute sets (a small code sketch of this computation is given below). A positive score implies that the target words in 𝒳 are more associated with the attribute words in Å than with those in ℬ, and the words in 𝒴 are more associated with ℬ than with Å. §.§ Text Entailment Based Methods Quantifying gender inequality relying on the distributed representation of words presents a diffused, bird's-eye view of the larger trends. Also, these methods are known to be data-hungry <cit.>.
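Returning briefly to the WEAT metric defined above, the sketch below shows one way the score could be computed from word vectors. The embedding lookup and the word sets are hypothetical placeholders; only the score itself follows the definition given in the text, and with an empty second target set the corresponding mean is simply dropped.

```python
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def sigma(w, A, B, emb):
    """σ(w, A, B): mean cosine similarity to A minus mean similarity to B."""
    return (np.mean([cos(emb[w], emb[a]) for a in A])
            - np.mean([cos(emb[w], emb[b]) for b in B]))

def weat(X, Y, A, B, emb):
    """WEAT score for target sets X, Y and attribute sets A, B (sketch)."""
    s_x = [sigma(x, A, B, emb) for x in X]
    s_y = [sigma(y, A, B, emb) for y in Y]          # Y may be empty
    s_all = s_x + s_y
    denom = np.std(s_all) if len(s_all) > 1 else 1.0
    return (np.mean(s_x) - (np.mean(s_y) if Y else 0.0)) / denom

# `emb` would map each word to its vector (e.g., from a trained FastText
# model); X could hold the unpleasant verbs and A, B the husband-/wife-side
# tokens. All names here are placeholders, not the paper's actual sets.
```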
Data availability often becomes a limiting factor to conducting contrastive studies at different spatio-temporal granularities. In what follows, we present a novel application of text entailment, a natural language inference (NLI) task <cit.> that bypasses the data size requirement and equips us with a finer lens through which we can compare and contrast gender inequality with respect to individual verbs. An NLI system takes a premise 𝒫 and a hypothesis ℋ as input and outputs entailment, contradiction, or semantic irrelevance. For instance, the hypothesis some men are playing a sport is entailed by the premise a soccer game with multiple males playing <cit.>. As one can see, textual entailment is more relaxed than pure logical entailment, and it can be viewed as follows: a human reading 𝒫 would most likely infer that ℋ is true. This framework has gained traction in several recent social inference tasks that include estimating media stance on policing <cit.>, aggregating social media opinion on election fairness <cit.>, and detecting COVID-19 misinformation <cit.>. Formally, let NLI(𝒫,ℋ) take a premise 𝒫 and a hypothesis ℋ as inputs and output o ∈{entailment, contradiction, neutral}. Following <cit.>, we define the entailment ratio (denoted by ent(𝒟, ℋ)) for a given corpus 𝒟 and a hypothesis ℋ as the fraction of the individual sentences present in 𝒟 that entail ℋ: ent(𝒟, ℋ) = ∑_𝒫∈𝒟 I(NLI(𝒫, ℋ) = entailment) / |𝒟|, where I is the indicator function. A larger value of ent(𝒟, ℋ) indicates greater support for ℋ in the corpus. Suppose we are interested in learning how often the husband and the wife are accused of torture (physical or emotional) in our corpus. We analyze this research question in the following way. We first construct a sub-corpus 𝒟_torture from the divorce court proceedings consisting of sentences that (1) mention or at least once; and (2) mention as a verb at least once. We next construct two hypotheses – ℋ_woman,torture and ℋ_man,torture – using a man and a woman as victims and perpetrators interchangeably. ℋ_woman,torture is A woman tortures a man and ℋ_man,torture is A man tortures a woman. We next compute the entailment gap defined as gap(𝒟_torture, torture) = ent(𝒟_torture, ℋ_man,torture) - ent(𝒟_torture, ℋ_woman,torture). Effectively, this means we compute the fraction of sentences that entail A woman tortures a man in 𝒟_torture and subtract it from the fraction of sentences that entail A man tortures a woman in 𝒟_torture. An overall positive number indicates that the male has been described as the torturer more often than the female in court proceedings. A negative value would indicate the opposite. A similar analysis can be extended to other verbs such as , , or . § DESIGN CONSIDERATIONS Adapting the and entailment frameworks to quantify gender inequality in our domain requires careful consideration of several aspects described in what follows. §.§ Verbs for Target Sets Traditionally, the score is used to quantify gender or racial stereotypes. The majority of the elements present in those attribute sets are nouns and adjectives (e.g., criminals, terrorists, doctors, police) <cit.> and seldom verbs <cit.>. We are interested in understanding the action space of the two parties fighting a divorce case; we want to know if the court described that one party tortured or abused the other. Hence, verbs are a natural choice for our target set. We inspect the list of high-frequency verbs in the corpus and narrow it down to the following ten verbs: 𝒳_unpleasant = {, , , , , , , , , }.
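Before turning to the design choices below, the following is a minimal sketch of the entailment-ratio and entailment-gap computation defined in the previous subsection. The NLI checkpoint and helper names are our own choices for illustration; the exact NLI system used in this work may differ.

```python
# Minimal sketch of ent(D, H) and the entailment gap defined above.
# "roberta-large-mnli" is an illustrative NLI checkpoint, not necessarily
# the one used for the reported results.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def entails(premise: str, hypothesis: str) -> bool:
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    label = model.config.id2label[int(logits.argmax())]
    return label.upper() == "ENTAILMENT"

def entailment_ratio(corpus, hypothesis):
    # fraction of sentences in the corpus that entail the hypothesis
    return sum(entails(p, hypothesis) for p in corpus) / len(corpus)

def entailment_gap(corpus, hyp_man_perpetrator, hyp_woman_perpetrator):
    return (entailment_ratio(corpus, hyp_man_perpetrator)
            - entailment_ratio(corpus, hyp_woman_perpetrator))

# e.g. entailment_gap(torture_sentences,
#                     "A man tortures a woman.", "A woman tortures a man.")
```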
A small subset of the verbs in 𝒳_unpleasant is already present in the list of unpleasant stimuli presented in <cit.>. We further compute the average valence score of these words as per the lexicon presented in <cit.>. We find the average valence score of 𝒳_unpleasant is 2.7, comparable to the average valence score (2.16) of the unpleasant stimuli presented in <cit.>. Divorce being a bitterly fought family situation, we observe a sparse presence of pleasant verbs such as , , or in our corpus. Since infrequent words in the corpus do not have reliable embeddings <cit.>, in contrast with traditional applications of the score, we choose the target set 𝒴 to be an empty set. §.§ The Torturer and the Tortured The attribute sets Å and ℬ as defined in the score represent the identifiers used for the plaintiff and defendant in our data (e.g., Å consisting of , , , and ℬ consisting of , , etc.). However, notice that the score is agnostic about whether the identifier is the contributor or the receptor of the target words. For example, torture does not happen in isolation; it requires a torturer and one who is tortured. Unlike nouns, verbs are typically associated with two entities – the subject and the object. To disambiguate between “the husband tortured the wife” and “the wife tortured the husband”, a word embedding needs to understand this nuance. Otherwise, the embedding is likely to place both the plaintiff and defendant identifiers equidistant from the verb. To disambiguate these two situations, we run the corpus through the POS tagger <cit.> to find out the subject and object of the sentences and whether the statements are in active or passive voice. Based on this, we classify the subjects and objects as `male perpetrator', `female perpetrator', `male victim', or `female victim', in the sentences that contain the target verbs. We replace these four cases with four unique words (denoted by , , , and , respectively) so that those words do not occur anywhere else in any of the documents. We call this new dataset _replaced. § WORD EMBEDDING BASED ANALYSIS We are interested in two research questions: RQ 1: How does gender inequality manifest in divorce court proceedings with respect to unpleasant verbs in 𝒳? RQ 2: Is our careful disambiguation of the torturer and the tortured necessary at all? In order to answer these two questions, we run two sets of experiments with identical training configurations. First, we run experiments on _replaced using the target and attribute sets as defined in the previous section. We train the word embedding model 10 times and calculate the scores for each of the following two cases: when both genders are (a) perpetrators, i.e., when Å={}, ℬ={}, and (b) victims, i.e., when Å={}, ℬ={}. We use the default parameters for training our FastText <cit.> Skip-gram embedding with the dimension set to 100 for all word-embeddings in this paper. Second, we run a baseline experiment with the original text data, without replacing the subjects and objects with the four unique words (_divorce), and use the attribute sets Å={} and ℬ={}. The number of runs and the embedding method are the same in both experiments. The results are shown in <Ref>. As already described, a negative score indicates that ℬ is more associated with the target set as compared to Å. Hence, if we look from the perspective of the victim, we find that women are more associated with the unpleasant verbs than men. In contrast, when viewed from the perpetrator's perspective, a positive score implies that men are more associated with the unpleasant verbs.
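For concreteness, the following is a minimal sketch of how the score defined in the Methods section can be computed from the trained embedding vectors. The vector lookup is shown as a plain dictionary and the token names in the usage comment are placeholders; this is an illustration under our own assumptions, not the exact implementation used for the reported results.

```python
# Minimal sketch of the association score over a {word: vector} dictionary.
# With the second target set empty (as in our setting), the differential term
# reduces to the mean association of the unpleasant verbs alone.
import numpy as np

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def sigma(w, A, B, emb):
    # association of word w with attribute set A minus attribute set B
    return (np.mean([cos(emb[w], emb[a]) for a in A])
            - np.mean([cos(emb[w], emb[b]) for b in B]))

def weat_score(X, Y, A, B, emb):
    assoc_x = [sigma(x, A, B, emb) for x in X]
    assoc_y = [sigma(y, A, B, emb) for y in Y]      # empty list in our setting
    diff = np.mean(assoc_x) - (np.mean(assoc_y) if Y else 0.0)
    return diff / np.std(assoc_x + assoc_y)

# e.g. weat_score(unpleasant_verbs, [], ["male_perpetrator_token"],
#                 ["female_perpetrator_token"], embeddings)
```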
Taken together, our results indicate that in our corpus, women are more often the victims while men are more often the perpetrators. Our baseline experiments that do not make any distinction between the perpetrator and the victim give a score close to zero, suggesting near-perfect gender equality. This inaccurate result, while highly surprising from a social science perspective, is not unexpected given how the original framework functions. The two entities (husband and wife) are present around the unpleasant verbs with nearly equal frequency. If the method does not make any distinction between the roles of victim and perpetrator, it will give inaccurate results. We thus use the score carefully to elicit the correct gender bias when applying it to legal texts for our social science research question. § SOCIETAL INEQUALITY AND MODEL BIAS Our word embeddings are computed from scratch, while our next set of experiments relies on downstream applications built on top of large language models. Large language models (LLMs) are known to have a wide range of biases due to their training data <cit.>, and extant literature has examined gender bias in the form of occupational stereotypes present in NLI systems <cit.>. We thus need to disentangle societal inequalities that are potentially reflected in our corpus from model biases that are potentially present in the NLP applications. Essentially, for a premise/hypothesis pair ⟨𝒫,ℋ⟩, the NLI system estimates the probability P(ℋ |𝒫). However, how LLMs encode the probability P(ℋ) when the hypotheses primarily consist of the two genders (male and female) and a set of verbs is understudied. A thorough investigation first reveals that the masked word prediction probability of several well-known LLMs is sensitive to gender. We next present a measure to quantify the gender bias sensitivity of NLI frameworks and present mitigation strategies. Finally, we use a bias-mitigated NLI system on our corpus and report findings. §.§ Implicit Bias in Agent and Theme in LLMs Unlike existing literature that primarily targets occupational stereotypes to quantify and analyze gender bias <cit.>, we focus on a very basic unit in a sentence – the verbs. Following <cit.>, in a sentence X verbs Y, let X represent the agent and Y represent the theme. Many verbs imply the relative authority levels between the agent and the theme. For example, in the sentence The football coach instructed the players to play a conservative game, the agent (the football coach) has more authority than the theme (the players). In contrast, the agent has less authority than the theme in the sentence The football coach honored the players' suggestion to play a conservative game. First proposed in <cit.>, the connotation relation of power captures this notion of power differential between an agent and a theme with respect to a given verb. While the connotation relation of power has been analyzed in the context of gender inequality in movie scripts <cit.> and in follow-on research focused on editorial fixes to remove bias <cit.>, little or no literature exists that documents the implicit gender bias present towards the agent and the theme when specific verbs are considered. This research is important and has a broader impact beyond our current social inference task. For instance, if an LLM encodes that it is less likely for a woman to inspire or guide someone than a man, this bias may percolate to downstream tasks, leading to erroneous social conclusions when applied to large-scale data for other social inference tasks.
We use cloze tests to evaluate this implicit bias. A brief description of the cloze test follows. Cloze test: A cloze task <cit.> is essentially a fill-in-the-blank task in which a sentence (or a sentence stem) is presented with a missing word. For instance, in the following cloze task: In the , it snows a lot, winter is a likely completion for the missing word. Word prediction as a test of an LLM's language understanding has been explored in <cit.>. Bias Evaluation Framework: We describe our proposed testing framework for gender bias. Let P_cloze(w, 𝒮) denote the completion probability of the word w with a masked cloze task 𝒮 as input. For a given verb v, we consider the following four cloze tests: * A [MASK] v a woman (denoted by v_womanAsTheme) * A [MASK] v a man (denoted by v_manAsTheme) * A man v a [MASK] (denoted by v_manAsAgent) * A woman v a [MASK] (denoted by v_womanAsAgent) In an ideal world where the LLM treats men and women equally, P_cloze(man, v_womanAsTheme) and P_cloze(woman, v_manAsTheme) should be equal. However, our preliminary exploratory analysis indicates that this is not the case. For example, when v is set to inspire, P_cloze(man, v_womanAsTheme) is 0.20 whereas P_cloze(woman, v_manAsTheme) is 0.16. When we set v to guide, the gap widens – P_cloze(man, v_womanAsTheme) is 0.71 whereas P_cloze(woman, v_manAsTheme) is 0.36. Again, in an ideal world where the LLM treats men and women equally, P_cloze(man, v_womanAsAgent) and P_cloze(woman, v_manAsAgent) should be equal. Let 𝒱 denote the set of all verbs listed in <cit.> where the agent has more power than the theme. Our overall measures of implicit bias are: (a) (1/|𝒱|) · ∑_v ∈𝒱 ( P_cloze(man, v_womanAsTheme) - P_cloze(woman, v_manAsTheme) ), and (b) (1/|𝒱|) · ∑_v ∈𝒱 ( P_cloze(man, v_womanAsAgent) - P_cloze(woman, v_manAsAgent) ). Measure (a) quantifies bias_agent. A positive value indicates that, on expectation, the LLM encodes a man as likelier than a woman to be in the position of agent. Measure (b) quantifies bias_theme. A positive value indicates that, on expectation, the LLM encodes a man as likelier than a woman to be in the position of theme. We investigate three well-known LLMs for this audit: <cit.>; <cit.>; and <cit.>. We consider 1,222 verbs listed in <cit.>. We also consider the verbs in 𝒳_unpleasant for this study. Table <ref> summarizes our gender bias audit of LLMs with respect to verbs implying more power to the agent than the theme. We first note that for both verb sets, bias_agent is substantially larger than bias_theme. This result indicates that men are considerably more likely to be considered as the agent when a woman is the theme and the verb implies that the agent has greater power than the theme. We also note that the completions mildly favor men over women even for the theme; however, the values are closer to 0. §.§ Implicit Bias in NLI Systems We describe our approach to quantify model bias in our NLI framework specific to our task. Suppose we modify the sub-corpus 𝒟_torture into 𝒟_torture^flipped, where the gender identifiers in each premise sentence are flipped to the equivalent identifier of the opposite gender. For instance, the premise The wife tortured the husband both mentally and physically will be modified as The husband tortured the wife both mentally and physically. Flipping gendered words to test bias through counterfactuals in the context of coreference resolution has been previously explored in <cit.>.
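A minimal sketch of the gender flipping used to construct the counterfactual corpus just described is shown below. The substitution table is an illustrative simplification: ambiguous pronouns (e.g., her as possessive vs. object) would require part-of-speech information to flip correctly, which this sketch does not handle.

```python
import re

# Illustrative (non-exhaustive) substitution table for gendered words.
# Note: "her" is ambiguous (his/him) and is not fully resolved in this sketch.
SWAP = {"husband": "wife", "wife": "husband", "man": "woman", "woman": "man",
        "he": "she", "she": "he", "his": "her", "her": "his", "him": "her"}

def flip_gender(sentence: str) -> str:
    def repl(match):
        word = match.group(0)
        flipped = SWAP[word.lower()]
        return flipped.capitalize() if word[0].isupper() else flipped
    pattern = r"\b(" + "|".join(SWAP) + r")\b"
    return re.sub(pattern, repl, sentence, flags=re.IGNORECASE)

print(flip_gender("The wife tortured the husband both mentally and physically."))
# -> "The husband tortured the wife both mentally and physically."
```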
We argue that if a premise in 𝒟_torture entails A man tortures a woman, the flipped premise in 𝒟_torture^flipped should, in a gender-neutral NLI system, entail A woman tortures a man instead. Hence the entailment gap for computed on 𝒟_torture should be equal in magnitude and opposite in polarity to the entailment gap computed on 𝒟_torture^flipped. The NLI system's (ℳ) overall bias score with respect to the verbs present in 𝒳_unpleasant is thus computed as NLI_bias(ℳ, 𝒳_unpleasant) = ∑_v ∈𝒳_unpleasant abs( gap(𝒟_v, v) + gap(𝒟_v^flipped, v) ) / |𝒳_unpleasant|. In simple words, for each verb, we compute the entailment gap on the relevant sub-corpus (value_1) and on the flipped sub-corpus (value_2). We add value_1 and value_2 and take the absolute value of this sum. The bias score is the average of these absolute values across all verbs: a score close to 0 indicates that the NLI system has minimal bias, whereas larger values indicate greater bias. Our baseline is an off-the-shelf NLI system from Allen NLP trained using (denoted by ℳ_base). We find that NLI_bias(ℳ_base, 𝒳_unpleasant) is 0.27 [We note that a bias-aware NLI variant from Allen NLP has a better starting point (bias score 0.20) than the base model. However, the bias-aware model exhibits slower convergence than the base model when we conduct our active learning steps as discussed in Section 7.3. With an identical experimental setting, after iteration 3, the bias-aware model improves its bias score to 0.133.]. §.§ Bias Mitigation Via Inconsistency Sampling Active learning is a powerful and well-established supervised machine learning technique <cit.> characterized by the interaction between the learner, i.e., the classifier, and the teacher (oracle or annotator). Each interaction step consists of the learner requesting from the teacher the label of an unlabeled instance sampled using a given sampling strategy, and augmenting the data set with the newly acquired label. Next, the classifier is retrained on the augmented data set. This sequential label-requesting and re-training process continues until some halting condition is reached (e.g., an exceeded annotation budget or the desired classifier performance). At this point, the algorithm outputs a classifier, and the objective for this classifier is to closely approximate the (unknown) target concept in the future. The key goal of active learning is to reach strong performance at the cost of fewer labels. Some of the well-known sampling methods include uncertainty sampling <cit.>, certainty sampling <cit.>, and density-based sampling <cit.>. Beyond a static strategy, more complex strategies such as adapting strategy selection parameters based on estimated future residual error reduction or combining multiple sampling strategies to balance the label distribution in the procured data set have been explored in <cit.> and <cit.>, respectively. Inconsistency Sampling. First introduced in Dutta et al. <cit.>, this sampling technique exploits the underlying logical structure of the ⟨ premise, hypothesis ⟩ space. For instance, a premise cannot both entail (or contradict) a given hypothesis and its negation. In our work, we extend this idea and exploit a ⟨ premise, hypothesis ⟩ space richer than that of Dutta et al. <cit.> for logical inconsistency. Consider the premise/hypothesis pair Continuously her husband used to harass and torture her everyday/A man tortures a woman.
We argue that if this premise entails the hypothesis (which it does), the modified premise/hypothesis pair obtained by replacing every gendered word with its opposite-gender equivalent – i.e., Continuously his wife used to harass and torture him everyday/A woman tortures a man – should also result in entailment. If not, it signals a logical inconsistency. For each sampling iteration, we add 60 samples, giving equal weightage to the verbs present in 𝒳_unpleasant. Table <ref> summarizes our active learning results. For both models, ℳ_base and ℳ_bias-aware, we conduct three rounds of active learning using inconsistency sampling and stop when the performance improvement becomes indiscernible (≤ 0.01). All annotations are independently conducted by two annotators. Since legal documents are typically written in clear, unambiguous language, we observe a near-perfect agreement (Cohen's κ value 0.96). The remaining disagreements are resolved through a post-annotation adjudication step. Table <ref> indicates that with subsequent active learning steps, our NLI system exhibits progressively less bias. Given that the maximum possible bias score is 2, we achieve a substantial improvement in mitigating the bias. Now that we are more confident that our model inferences are less sensitive to gender, we evaluate the societal bias present in our corpus. Figure <ref> summarizes our text entailment results. Barring , for all other verbs, men are identified as perpetrators more often than women. We further note that verbs that indicate physical abuse, such as and , particularly stand out with larger values. The average entailment gap for verbs unambiguously indicating physical harm – , , , , and – is much higher (0.41) than that for verbs that may or may not indicate physical harm (0.19), such as , , , , and . A manual inspection of 200 randomly sampled ⟨ premise, hypothesis⟩ pairs aligns with our automated method's overall findings. § DISCUSSIONS AND LIMITATIONS In this paper, we present the first-ever computational analysis (to our knowledge) of gender inequality in divorce court proceedings in India. Based on the documented allegations of the parties involved in the divorce, our analyses indicate a striking gender inequality as described in these public records. While documented evidence of marital distress in India exists in the social science literature, how such factors play out in divorce is not well understood. Our study sheds light on a vulnerable and practically invisible community in India. Methodologically, we identify and address several gaps and limitations of existing NLP techniques to quantify gender inequality. We believe our finding specific to legal text is new, and our method to address it is simple, effective, and intuitive. Casting the problem of quantifying gender inequality as a text entailment task is also new. Our text entailment results suggest that NLI can be a viable tool for computational social science researchers to analyze similar research questions (e.g., who gets child custody can be estimated with the hypotheses the husband gets the custody of the child and the wife gets the custody of the child). Moreover, our bias mitigation strategy exploiting a novel inconsistency sampling technique using counterfactuals holds promise. Our work has the following limitations. Sentence level processing: An important point to keep in mind, however, is that our analyses operate at the sentence level.
If, in a court proceeding, a sentence records that the plaintiff accuses the defendant of wrongdoing, which the defendant denies in a subsequent sentence, how these two contradicting claims are resolved in the court cannot be inferred without language models that can handle document-level contexts. We believe our research will open the gates for investigation with newer-age LLMs that can handle broader contexts. Archival limitation: The sparse presence of the North-Eastern region in our dataset is most likely due to archival limitations, as some of these states record the highest rates of divorce <cit.>. Our study is also limited by the overall archival extent of . Economic independence: Some of the court proceedings mention the litigants' occupations. We annotated 100 randomly sampled occupations for women. While an overwhelming majority of the sampled occupations are homemakers, 32% of the women in our sample are working women, compared to the World Bank figure of 23% for labor force participation of women in India. Economic independence and divorce merit a deeper exploration. Out-of-court settlements, separation, abandonment: Finally, not all unhappy marriages end up in divorce and reach court for dissolution. Many out-of-court settlements happen. As documented in <cit.>, the number of separated women in 2011 is almost three times the number of divorced women. Since divorce is still looked upon as a social stigma <cit.> and family institutions are highly valued in India, there could be many women who continue with their dysfunctional marriages while unhappy. The court does not know their stories. § ETHICAL STATEMENT We work with public court records. Prior studies exist on Indian court proceedings <cit.>. We conduct aggregate analysis, refraining from presenting any personally identifiable information in the paper. Hence, we do not see any ethical concerns. Rather, we believe our findings and methods can be valuable to policymakers and social scientists. A study on binary gender inequality runs the risk of oversimplifying gender, which we acknowledge lies on a spectrum. Same-sex marriage is not yet legal in India. Further nuances will be needed to extend our work to other cultures allowing same-sex marriages. We are also sensitive to previous studies that point out the potential harms of the erasure of gender and sexual minorities <cit.>. jacob2016marriage Suraj Jacob and Sreeparna Chattopadhyay. Marriage dissolution in india: Evidence from census 2011. Economic and Political Weekly, 51(33):25–27, 2016. dommaraju2016divorce Premchand Dommaraju. Divorce and separation in india. Population and Development Review, pages 195–223, 2016. goode1962marital William J. Goode. Marital satisfaction and instability-a cross-cultural class analysis of divorce rates. International social science journal, 14(3):507–526, 1962. mani2017study A Santhosh Mani and Bhanu Priya. A study on the recent trends of divorce in india. ZENITH International Journal of Multidisciplinary Research, 7(8):25–32, 2017. belliappa2013gender Jyothsna Belliappa. Gender, class and reflexive modernity in India. Springer, 2013. vasudevan2015causes Bindhu Vasudevan, Devi M. Geetha, Anitha Bhaskar, Binu Areekal, Anupa Lucas, et al. Causes of divorce: a descriptive study from central kerala. Journal of evolution of medical and dental sciences, 4(20):3418–3427, 2015. bhattacharya2019comparative Paheli Bhattacharya, Kaustubh Hiware, Subham Rajgaria, Nilay Pochhi, Kripabandhu Ghosh, and Saptarshi Ghosh.
A comparative study of summarization algorithms applied to legal case judgments. In ECIR, pages 413–428. Springer, 2019. kalia2022classifying Arvind Kalia, Naveen Kumar, and Nischay Namdev. Classifying case facts and predicting legal decisions of the indian central information commission: a natural language processing approach. In Advances in Deep Learning, Artificial Intelligence and Robotics, pages 35–45. Springer, 2022. ash2021group Elliott Ash, Sam Asher, Aditi Bhowmick, Sandeep Bhupatiraju, Daniel Chen, Tanaya Devi, Christoph Goessmann, Paul Novosad, and Bilal Siddiqi. In-group bias in the Indian judiciary: Evidence from 5 million criminal cases. Technical report, Working paper, August, 2021. kumar2020sexual Anil Kumar. Sexual harassment of women at workplace: How far is indian law protective? International Academic Journal of Law, 1(1):35–39, 2020. madaan2018analyze Nishtha Madaan, Sameep Mehta, Taneea Agrawaal, Vrinda Malhotra, Aditi Aggarwal, Yatin Gupta, and Mayank Saxena. Analyze, detect and remove gender stereotyping from bollywood movies. In MAccT, pages 92–105. PMLR, 2018. DBLP:conf/acl-trac/BhattacharyaSKB20 Shiladitya Bhattacharya, Siddharth Singh, Ritesh Kumar, Akanksha Bansal, Akash Bhagat, Yogesh Dawer, Bornini Lahiri, and Atul Kr. Ojha. Developing a multilingual annotated corpus of misogyny and aggression. In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying, pages 158–168, 2020. khadilkar2021gender Kunal Khadilkar, Ashiqur R. KhudaBukhsh, and Tom M. Mitchell. Gender bias, social bias, and representation in Bollywood and Hollywood. Patterns, 3(4):100486, 2022. rao1973dowry R. Jaganmohan Rao. Dowry system in India — a socio-legal approach to the problem. Journal of the Indian Law Institute, 15(4):617–625, 1973. ahmad2008dowry Nehaluddin Ahmad. Dowry deaths (bride burning) in India and abetment of suicide: a socio-legal appraisal. JE Asia & Int'l L., 1:275, 2008. sonawat2001understanding Reeta Sonawat. Understanding families in india: A reflection of societal changes. Psicologia: Teoria e Pesquisa, 17:177–186, 2001. caliskan2017semantics Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186, 2017. maccartney2008modeling Bill MacCartney and Christopher D. Manning. Modeling semantic containment and exclusion in natural language inference. In COLING 2008, pages 521–528, 2008. mandal2021unsupervised Arpan Mandal, Kripabandhu Ghosh, Saptarshi Ghosh, and Sekhar Mandal. Unsupervised approaches for measuring textual similarity between legal court case reports. Artificial Intelligence and Law, 29(3):417–451, 2021. HaltermanKSO21Policing Andrew Halterman, Katherine A. Keith, Sheikh Muhammad Sarwar, and Brendan O'Connor. Corpus-Level Evaluation for Event QA: The IndiaPoliceEvents Corpus Covering the 2002 Gujarat Violence. In ACL/IJCNLP 2021, volume ACL/IJCNLP 2021 of Findings of ACL, pages 4240–4253, 2021. DuttaPolice Sujan Dutta, Beibei Li, Daniel S. Nagin, and Ashiqur R. KhudaBukhsh. A murder and protests, the capitol riot, and the chauvin trial: Estimating disparate news media stance. In IJCAI, pages 5059–5065, 2022. kaur2019gap Harjnder Kaur-Aulja, Farzana Shain, and Alison Lilley. A Gap Exposed: What is Known About Sikh Victims of Domestic Violence Abuse (DVA) and Their Mental Health? European Journal of Mental Health, 14(1):179–189, 2019. mistry1982personal PJ Mistry. Personal names: Their structure, variation, and grammar in Gujarati. 
South Asian Review, 6(3):174–190, 1982. ghansham2002female Devaki Monani Ghansham. Female foeticide and the dowry system in India. In Townsville International Women’s Conference, James Cook Univ., Australia, 2002. banerjee2014dowry Priya R. Banerjee. Dowry in 21st-century India: the sociocultural face of exploitation. Trauma, Violence, & Abuse, 15(1):34–40, 2014. rastogi2006dowry Mudita Rastogi and Paul Therly. Dowry and its link to violence against women in India: Feminist psychological perspectives. Trauma, Violence, & Abuse, 7(1):66–77, 2006. carpenter2016protecting Deepshikha Carpenter and Polly Vauquline. Protecting Women from Domestic Violence in Assam, India? Evaluating Section 498-A, The Indian Penal Code (IPC), 1983 vs the Protection of Women from Domestic Violence Act (PWDVA), 2005. Journal of International Women's Studies, 18(1):133–144, 2016. babu2011dowry Gopalan Retheesh Babu and Bontha Veerraju Babu. Dowry deaths: a neglected public health issue in India. International health, 3(1):35–43, 2011. jakimow2013everyone Tanya Jakimow. ‘everyone must give’: Explaining the spread and persistence of bridegroom price among the poor in rural telangana, india. Journal of Asian and African Studies, 48(2):180–194, 2013. DBLP:conf/nips/MikolovSCCD13 Tomás Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, 2013. dagan2005pascal Ido Dagan, Oren Glickman, and Bernardo Magnini. The pascal recognising textual entailment challenge. In Machine Learning Challenges Workshop, pages 177–190. Springer, 2005. bowman-etal-2015-large Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated corpus for learning natural language inference. In EMNLP, 2015. halterman-etal-2021-corpus Andrew Halterman, Katherine Keith, Sheikh Sarwar, and Brendan O'Connor. Corpus-level evaluation for event QA: The IndiaPoliceEvents corpus covering the 2002 Gujarat violence. In ACL-IJCNLP, pages 4240–4253, 2021. Capitol2022 Ashiqur R. KhudaBukhsh, Rupak Sarkar, Mark S. Kamlet, and Tom M. Mitchell. Fringe news networks: Dynamics of US news viewership following the 2020 presidential election. In ACM WebScience, pages 269–278, 2022. hossain-etal-2020-covidlies Tamanna Hossain, Robert L. Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, and Sameer Singh. COVIDLies: Detecting COVID-19 misinformation on social media. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020, December 2020. DBLP:conf/naacl/ManziniLBT19 Thomas Manzini, Yao Chong Lim, Alan W. Black, and Yulia Tsvetkov. Black is to criminal as caucasian is to police: Detecting and removing multiclass bias in word embeddings. In NAACL-HLT, pages 615–621, 2019. greenwald2014malice Anthony G. Greenwald and Thomas F. Pettigrew. With malice toward none and charity for some: ingroup favoritism enables discrimination. American Psychologist, 69(7):669, 2014. DBLP:conf/acl/HoyleWWAC19 Alexander Hoyle, Lawrence Wolf-Sonkin, Hanna M. Wallach, Isabelle Augenstein, and Ryan Cotterell. Unsupervised discovery of gendered language through latent-variable modeling. In ACL 2019, pages 1706–1716, 2019. warriner2013norms Amy Beth Warriner, Victor Kuperman, and Marc Brysbaert. Norms of valence, arousal, and dominance for 13,915 english lemmas. Behavior research methods, 45(4):1191–1207, 2013. 
DBLP:conf/iclr/LampleCRDJ18 Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. Word translation without parallel data. In ICLR. OpenReview.net, 2018. qi2020stanza Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. Stanza: A Python natural language processing toolkit for many human languages. In ACL: System Demonstrations, 2020. bojanowski2017enriching Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. TACL, 5:135–146, 2017. bender2021dangers Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In ACM FaccT, pages 610–623, 2021. rudinger2017social Rachel Rudinger, Chandler May, and Benjamin Van Durme. Social bias in elicited natural language inferences. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 74–79, 2017. DBLP:journals/corr/abs-2105-05541 Shanya Sharma, Manan Dey, and Koustuv Sinha. Evaluating gender bias in natural language inference. CoRR, abs/2105.05541, 2021. kumar2020nurse Vaibhav Kumar, Tenzin Singhay Bhotia, and Tanmoy Chakraborty. Nurse is closer to woman than surgeon? mitigating gender-biased proximities in word embeddings. TACL, 8:486–503, 2020. SAPPowerAgency Maarten Sap, Marcella Cindy Prasettio, Ari Holtzman, Hannah Rashkin, and Yejin Choi. Connotation frames of power and agency in modern films. In EMNLP 2017, pages 2329–2334, 2017. PowerTransformer Xinyao Ma, Maarten Sap, Hannah Rashkin, and Yejin Choi. Powertransformer: Unsupervised controllable revision for biased language correction. In EMNLP 2020, pages 7426–7441, 2020. taylor1953cloze Wilson L. Taylor. “Cloze procedure”: A new tool for measuring readability. Journalism quarterly, 30(4):415–433, 1953. paperno-etal-2016-lambada Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. The LAMBADA dataset: Word prediction requiring a broad discourse context. In ACL 2016, pages 1525–1534, 2016. ettinger-2020-bert Allyson Ettinger. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. TACL, 8:34–48, 2020. devlin2018bert Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. Roberta Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692, 2019. Megatron Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-lm: Training multi-billion parameter language models using model parallelism. CoRR, abs/1909.08053, 2019. lu2020gender Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Amancharla, and Anupam Datta. Gender bias in neural natural language processing. In Logic, Language, and Security, pages 189–202. Springer, 2020. settles2009active Burr Settles. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin–Madison, 2009. sindhwani2009uncertainty Vikas Sindhwani, Prem Melville, and Richard D. Lawrence. Uncertainty sampling and transductive experimental design for active dual supervision. In ICML, pages 953–960. ACM, 2009. 
nguyen2004active Hieu T. Nguyen and Arnold Smeulders. Active learning using pre-clustering. In ICML, page 79, 2004. donmez2007dual Pinar Donmez, Jaime G Carbonell, and Paul N Bennett. Dual strategy active learning. In Machine Learning: ECML 2007, pages 116–127. Springer, 2007. palakodety2020voice Shriphani Palakodety, Ashiqur R. KhudaBukhsh, and Jaime G. Carbonell. Voice for the voiceless: Active sampling to detect comments supporting the Rohingyas. In AAAI 2020, volume 34-01, pages 454–462, 2020. ArjunErasurePaper Sunipa Dev, Masoud Monajatipoor, Anaelia Ovalle, Arjun Subramonian, Jeff M. Phillips, and Kai-Wei Chang. Harms of gender exclusivity and challenges in non-binary representation in language technologies. In EMNLP, pages 1968–1994, 2021.
http://arxiv.org/abs/2307.04561v1
20230710135100
Performance comparison of timing-based anomaly detectors for Controller Area Network: a reproducible study
[ "Francesco Pollicino", "Dario Stabili", "Mirco Marchetti" ]
cs.CR
[ "cs.CR", "cs.PF" ]
Both authors contributed equally to this research. [email protected] 0000-0002-2421-1852 [1] [email protected] 0000-0001-6850-334X [email protected] 0000-0002-7408-6906 University of Modena and Reggio Emilia Via P. Vivarelli, 10 Modena ITA This work presents an experimental evaluation of the detection performance of eight different algorithms for anomaly detection on the Controller Area Network (CAN) bus of modern vehicles, based on the analysis of the timing or frequency of CAN messages. This work addresses the current limitations of the related scientific literature, which is based on private datasets and lacks open implementations and detailed descriptions of the detection algorithms. These drawbacks prevent the reproducibility of published results and make it impossible to compare a novel proposal against related work, thus hindering the advancement of science. This paper solves these issues by publicly releasing implementations and labeled datasets, and by describing an unbiased experimental comparison. Performance comparison of timing-based anomaly detectors for Controller Area Network: a reproducible study Mirco Marchetti August 12, 2023 ========================================================================================================== § INTRODUCTION Automotive Cyber Security is a relatively new research area that is rapidly growing to encompass many different security issues: from the design of security countermeasures aiming to deter cyber attackers, to detection mechanisms trying to identify ongoing attacks and reactive countermeasures to contain and respond to malicious activities. One of the most active research areas in this field is the design of novel intrusion detection systems applied to the Controller Area Network (CAN) bus, one of the prominent network technologies used to interconnect Electronic Control Units (ECUs) deployed within modern vehicles. Several intrusion detection algorithms have already been proposed <cit.>, and it is possible to classify them based on the features of the CAN communication that they use to identify anomalies and attacks. This paper focuses on the subset of CAN intrusion detection systems that identify anomalies by analyzing the timing with which CAN messages are sent over the CAN bus. This detection approach is promising, since many powerful attacks (such as fuzzing <cit.>, injection <cit.> and Denial of Service <cit.>) are based on the injection of malicious messages on the CAN bus in addition to the legitimate traffic. Our main goal is to propose an unbiased comparison of the detection performance of time-based anomaly detectors for the CAN bus. We stress that this task is much more complex than it might seem. A direct comparison among different detection algorithms published in the scientific literature is hindered by many different issues. Different papers use different detection metrics to evaluate the performance of the proposed solution. Each work uses a proprietary dataset for carrying out the experimental performance evaluation, and in most cases the dataset is not publicly released; it is only described at a very high level and lacks the details required to replicate the attacks. Moreover, novel solutions designed to detect anomalies in CAN communication are not compared to existing solutions, or are compared with naive detection metrics to demonstrate that a more complex solution is better in the detection task.
In the vast majority of cases the authors of a scientific paper do not disclose a reference implementation, thus requiring other researchers to re-implement the proposed algorithm. More often than not, the paper only includes a high-level description of the proposed algorithm that lacks many relevant details that are actually required for a real implementation. Finally, several papers omit important aspects related to the tuning and training of the proposed algorithm that have a strong impact on their detection performance. This work tackles three important limitations of this field of research. The first major contribution is the empirical and unbiased comparison of eight different time-based CAN anomaly detectors over two different datasets, an original one and one that is already publicly available. The second contribution to the state-of-the-art is to foster the reproducibility of similar studies. We publicly release the reference implementations of all the detectors considered in this study, together with the novel dataset used to tune and test the detection algorithms. This contribution allows all researchers and industry practitioners to fully replicate our research results, validate the correctness of our reference implementations, assess the quality of the dataset, and easily compare a novel proposal with respect to the state-of-the-art. Finally, we highlight the limits of the publicly available datasets used for the experimental evaluation of existing solutions, comparing them against the dataset first presented in this work. This analysis also demonstrates that it is crucial to identify a comprehensive threat model to show how anomaly detectors are affected by attacks, such as the effects of different cycle times and injection frequencies on the overall performance evaluation. The remainder of the paper is structured as follows. Section <ref> discusses the main characteristics of the CAN bus and of CAN messages that are required to understand this work. Section <ref> presents the related work and describes in detail the eight detectors considered in this paper. Section <ref> describes the dataset used throughout the paper to validate and test the detection algorithms. Section <ref> presents our reference implementation of the eight detectors, focusing on the additional assumptions and design choices that are missing in the original papers. Section <ref> compares the detection performance of our reference implementations against several attack instances, involving different CAN messages and injection frequencies. Finally, Section <ref> concludes the paper. § A PRIMER ON THE CONTROLLER AREA NETWORK The Controller Area Network (CAN) is one of the communication protocols used between the Electronic Control Units (ECUs) deployed within the vehicle <cit.>. The CAN bus is designed to enable communication between the nodes without requiring a host computer. CAN is one of the most deployed networking protocols for in-vehicular communications due to its high resilience to electromagnetic interference and its low implementation costs. It is a broadcast-message based communication protocol, and transmission on the CAN bus uses a bit-wise arbitration method for contention resolution. When two different nodes start transmission of a frame at the same time, the node with the highest priority continues sending the frame without interruption, while the other node backs off and re-tries transmission at a later time.
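The following is a simplified illustration (not part of the original works) of how this bit-wise arbitration resolves contention: identifiers are transmitted MSB-first, a dominant bit (0) overwrites a recessive bit (1), and a node that reads back a dominant level while sending a recessive one backs off, so the numerically lowest identifier wins the bus.

```python
# Simplified model of CAN bit-wise arbitration over the identifier field.
def arbitration_winner(ids, id_bits=11):
    contenders = set(ids)
    for bit in reversed(range(id_bits)):                       # MSB first
        bus_level = min((i >> bit) & 1 for i in contenders)    # 0 (dominant) wins
        contenders = {i for i in contenders if (i >> bit) & 1 == bus_level}
    return min(contenders)

print(hex(arbitration_winner([0x1A0, 0x0F3, 0x2B1])))  # -> 0xf3, the lowest ID
```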
The CAN protocol defines 4 different types of frames with different usages, but only the data frame is used to transmit data between ECUs. The main fields composing the CAN data frame are the identifier (ID), the data length code (DLC), and the payload (data). The ID identifies the CAN data frame and the content of its data field. Each ECU transmits only a limited set of messages with a particular ID, while receiving ECUs use the value of the ID field to select data frames relevant for their functioning. A message with a particular value of the ID field is always sent by only one ECU. The ID field is also used for arbitration of the CAN messages, where lower values of this field denote messages with higher priority. In the current standard for basic CAN communication, the ID field is defined with a size of either 11 (standard format) or 29 bits (extended format). Figure <ref> shows the structure of an extended CAN message. Note that the extra 18 bits of the extended format (ID #2) are encoded separately from the 11 bits of the standard format (ID #1) for backward compatibility. The DLC field encodes the number of bytes composing the data field, and has a size of 4 bits. Since the maximum length of the data field is 8 bytes, valid DLC values range from 0 to 8, while values from 9 to 15 are left unused. The data field encapsulates the information that the sender ECU transmits to other ECUs on the network. The data field has a variable size (from 0 to 8 bytes) and usually packs several different signals. The CAN standard leaves complete freedom to the car manufacturers about the structure, number, encoding, and semantic of these signals. Hence, without having access to the formal specifications of the CAN messages for a particular vehicle model, the data encoded in the data field can only be interpreted as an opaque binary blob. Between two consecutive CAN data frames, an interframe space is required. The interframe space consists of at least three consecutive recessive bits, called interframe bits. Following the interframe space, if a dominant bit is detected, it is considered as the start-of-frame of the next data frame. § RELATED WORK Since the introduction of microcontrollers to in-vehicle networks, the automotive domain is considered one of the most prominent examples of Cyber-Physical Systems (CPSs). The main characteristic of an automotive CPS is the central role of the automotive in-vehicle network, which is used by the ECUs to exchange data for their operational needs. With the increasing development of advanced driver assistance systems (ADAS), cyber-security researchers started to demonstrate attacks against these features. Miller and Valasek <cit.> demonstrated the consequences of an Internet-based, remote cyber-attack on a modern, unmodified, licensed vehicle, attracting huge media coverage. Since then, many researchers started to develop Intrusion Detection Systems (IDS) by applying concepts borrowed from classical computer networks to the in-vehicle networks <cit.>. Some of these solutions focus on the analysis of the low-level characteristics of the ECUs <cit.>, while other solutions focus on the analysis of the in-vehicle network communications. Of this last group of solutions, security researchers have developed intrusion detection systems based on the statistical analysis of the content of the CAN bus <cit.>, while other solutions are focused on the analysis of the content of the CAN data frames <cit.>.
Most of these works are only focused on a particular aspect of CAN communication, thus increasing the complexity of comparing novel solutions with the existing literature. Moreover, only a handful of implementations of the current state-of-the-art are publicly available, preventing the comparison of a novel solution with the existing ones. The goal of this work is to tackle this problem by proposing an unbiased comparison of existing solutions designed to detect anomalies in CAN communication using the timings of the CAN messages as their detection feature. We chose this group of solutions since it represents one of the most analyzed groups of solutions for the anomaly detection task on CAN communications, and these detectors can be deployed as software-only solutions, without requiring dedicated hardware components for their functioning. The first work focused on this aspect is presented in <cit.>, while following works are focused on either frequency <cit.> or timing analysis <cit.>. All these works are based on the assumption that most CAN messages are sent periodically on the network within a fixed time interval, hence it is possible to exploit this feature to detect messages that do not follow the expected timing. However, these detection methods are only applicable to cyclic messages and cannot detect any anomaly if the attack targets a non-cyclic message. In the following sections, we describe the anomaly detectors considered for the experimental evaluation presented in this work. For readability purposes, we associate each algorithm with a label composed of the last name of the first author and the last two digits of the publication year. For clarity, we uniformed the names of the main parameters, variables, and attack scenarios described in the original works. §.§ Otsuka14 In the work presented in <cit.> the authors propose an anomaly detection algorithm that uses a delayed-decision cycle detection method to detect (and possibly prevent) spoofing attacks. The algorithm presented in assumes that, since data frames are transmitted at a constant cycle time ct, any modification of the normal behavior of CAN communication should change its cycle time. When an ECU receives a data frame with the same CAN ID as the previous data frame and a cycle time less than ct + δ (where δ is a threshold parameter), the ECU holds the data frame until the expected time T has passed. If another message with the same ID is received in the waiting period, then the ECU is able to detect an ongoing attack and the data frames are not processed. The algorithm presented in is trained with real CAN data and tested against the injection of two different message IDs, one exhibiting a stable cycle time ct and the other a non-cyclic pattern. The results are presented by means of False Positive Rate (FPR) and False Negative Rate (FNR). Although the comparison of the detection performance with the previous state-of-the-art is not discussed, we remark that is, to the best of our knowledge, the first paper presenting a detection algorithm based on the timing analysis of CAN communications. The computational overhead of the detection algorithm is equal to 𝒪(1) for each received CAN message. §.§ Taylor15 In the work presented in <cit.>, the authors design an anomaly detection algorithm for the detection of anomalies based on inter-packet times over a sliding window. In particular, the algorithm presented in uses test values over consecutive CAN flows (defined as a sequence of CAN data frames) for its detection task.
Each test value is evaluated as a t-test, comparing the mean time difference with its historical value (i.e., the cycle time ct of the same CAN ID). The algorithm then uses the test values for the evaluation of the anomaly score (defined as a logarithmic sum) over a sequence of scores, to identify anomalies in the CAN communication. This algorithm is tested against the injection and the removal of CAN messages with different attack durations, ranging from 100 milliseconds to 1 second. For the message injection attack and for each duration, the injected packet is inserted at once, five, and ten times its normal frequency. The detection performance is presented by means of the Receiver Operating Characteristics (ROC) and the Area Under Curve (AUC) measure. The authors do not compare the algorithm with previous works but only with a One-Class Support Vector Machine classifier trained on the same data. The computational overhead of the detection algorithm is equal to 𝒪(1) for each received CAN message. §.§ Cho16 The authors of <cit.> design and test a Clock-based Intrusion Detection System (CIDS) for CAN communications. This algorithm (labeled as ) leverages the intervals of periodic in-vehicle messages for fingerprinting ECUs. Then, the fingerprints are used for constructing a baseline of ECUs' clock behaviors based on the Recursive Least Squares (RLS) algorithm. The Cumulative Sum (CUMSUM) is then computed on this baseline to detect anomalies in message timings thanks to an adaptive threshold. The algorithm is able to detect anomalies inside a window of size W data frames, and does not identify a single data frame as malicious. This algorithm is presented in two versions: the per-message detection and the message-pairwise detection. The latter version (message-pairwise detection) is based on the assumption that the number and identity of the CAN messages generated from the same ECU are known. We remark that this information can only be known a-priori by either having access to the DBC or by applying modern mapping techniques such as <cit.>. Since we only rely on CAN log traces and we do not have access to the DBC of our test vehicle, only the per-message detection version can be applied to our datasets. The algorithm is tested against the injection of CAN messages (called fabrication attack in the original work) on both a CAN bus prototype and a real vehicle. In the real vehicle scenario, the authors target a message with a cycle time ct equal to 20 ms; however, the original paper does not contain any information about the injection frequency. Moreover, the paper does not include a performance comparison against previous work. The computational overhead of the detection algorithm is equal to 𝒪(N^2) for each window, where N is the size of the data matrix. We remark that this computational complexity is equal to that of the RLS algorithm used for constructing the baseline of the ECUs' clock behaviors. §.§ Gmiden16 In the work presented in <cit.>, the authors design a lightweight intrusion detection algorithm based on the analysis of the frequencies of the CAN messages. In particular, the algorithm presented in uses the frequency of a CAN message to detect anomalies in the CAN communication. Upon reception of a message, the algorithm compares the time difference Δ_t between the new message and the previous one sharing the same ID, and generates an anomaly in case the time difference is less than half the estimated cycle time ct.
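A minimal sketch of this detection rule is shown below; the per-ID cycle times are assumed to be estimated beforehand from a clean trace, and all class and variable names are ours.

```python
# Sketch of the "inter-arrival time below half the cycle time" check described above.
class HalfCycleDetector:
    def __init__(self, cycle_times_ms):
        self.ct = cycle_times_ms          # {can_id: expected cycle time in ms}
        self.last_seen = {}               # {can_id: timestamp of previous frame}

    def observe(self, can_id, timestamp_ms):
        anomaly = False
        if can_id in self.last_seen and can_id in self.ct:
            delta = timestamp_ms - self.last_seen[can_id]
            anomaly = delta < self.ct[can_id] / 2.0
        self.last_seen[can_id] = timestamp_ms
        return anomaly

# det = HalfCycleDetector({0x110: 20.0})
# det.observe(0x110, 0.0); det.observe(0x110, 5.0)  # -> True (5 ms < 10 ms)
```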
In <cit.> the algorithm is only presented in a theoretical way, hence there is no test of the algorithm against any attack scenario, nor is its detection performance compared with previous works. The computational overhead of the detection algorithm is equal to 𝒪(1) for each received CAN message. §.§ Song16 In the work presented in <cit.>, the authors design a detection algorithm based on the inter-arrival times of CAN messages. In particular, evaluates the time difference Δ_t of messages with the same CAN ID and uses the time difference Δ_t for the detection of two attack scenarios. The first attack scenario considered in is the injection attack, in which messages are injected on the CAN bus randomly. To detect this attack scenario, the algorithm compares the time difference Δ_t with the cycle time ct and raises an anomaly if Δ_t is lower than half the expected cycle time ct. We remark that this detection algorithm appears to be exactly the same as the one proposed in . The second attack scenario is a Denial-of-Service attack, in which a message with a fixed value of the ID field is injected with a high frequency. For the detection of this attack scenario, the algorithm increments the value of a counter every time the Δ_t is lower than 0.2 milliseconds, and raises an anomaly if the counter value is higher than a given threshold. For the test of the algorithm, they used data gathered from a real vehicle and simulated the attacks by injecting messages for a random time window ranging from 5 to 10 seconds. In the injection attack scenario, they injected a message at twice, five, and ten times its original frequency, while in the DoS attack scenario the injection is fixed at 2000 messages per second, testing threshold values of 1, 2, 3, and 5. The detection results are evaluated by means of detection accuracy, but no comparison with previous work is presented. The computational overhead of the detection algorithm is equal to 𝒪(1) for each received CAN message. §.§ Moore17 In the work presented in <cit.>, the authors describe a frequency-based anomaly detection algorithm. The algorithm uses the time difference Δ_t between consecutive messages with the same ID value for its detection purposes. In particular, uses the sequence of Δ_t of each message ID for the identification of the maximum observed error m, which is the maximum absolute difference between the expected cycle time ct and the observed Δ_t. Upon reception of a message, the algorithm compares the Δ_t from the previous message with a threshold value defined as ct · 0.15 + m. If three consecutive values of Δ_t are found outside the defined threshold, then an anomaly is raised. The algorithm is tested against both injection and Denial-of-Service attacks, and the results are presented by means of True Positive Rate (TPR), False Positive Rate (FPR), and False Negative Rate (FNR). No comparison with previous work is described. The computational overhead of the detection algorithm is equal to 𝒪(1) for each received CAN message. §.§ Stabili19 In the work presented in <cit.>, the authors designed an anomaly detection algorithm for the detection of missing messages from CAN communications. In particular, the algorithm presented in evaluates the cycle-time (ct) of each CAN ID to build its detection model. In the detection phase, the cycle time is used in conjunction with a configuration parameter k (defined in the validation process for each ID) to detect missing messages from the CAN bus.
A message with a particular ID is considered missing if it is not seen on the CAN bus for at least ct × k milliseconds. This algorithm is tested against two similar attack scenarios, the ECU shutdown attack (in which messages with a particular ID are removed from the CAN for a period of time that ranges from 10 to 120 milliseconds) and the ECU inhibition attack (in which all messages with a particular ID are removed from the CAN bus). The detection performance is presented by means of the F-measure and compared with other detection algorithms, although only is used for the comparison with time-based anomaly detection algorithms. The computational overhead of the detection algorithm is equal to 𝒪(n) for each received CAN message, where n is the number of message IDs found in the monitored CAN section. §.§ Olufowobi20 In <cit.> the authors presented SAIDu-CANT, a specification-based intrusion detection system (IDS) using anomaly-based supervised learning. The detection model is learned in the first phase of the algorithm, and requires a clean CAN data trace for learning different parameters that will be later used in the detection phase. The learned parameters are the minimum and maximum inter-arrival times of each CAN message ID, f_i,min and f_i,max, the estimated message period P̃_i = f_i,min, and the release jitter J_i = f_i,max - f_i,min. In the detection phase the algorithm monitors the arrival time of each CAN ID and compares it against the detection model. If a message is found outside the acceptable interval defined by the specification of the detection model, then it is labeled as anomalous. The algorithm is tested against injection attacks on two different datasets. One dataset is composed of CAN traces gathered from two different sedan vehicles, while the second dataset contains data from a single vehicle. Of these datasets, only the latter is publicly available[<https://sites.google.com/a/hksecurity.net/ocslab/Datasets/CAN-intrusion-dataset>]. The detection results are presented by means of True Negative (TN), True Positive (TP), False Positive (FP), False Negative (FN), accuracy, recall, precision, and F1 score. No comparison with previous work is presented, although the authors showcased the detection performance of the algorithm against two other detection mechanisms directly described in the original work. The computational overhead of the detection algorithm is equal to 𝒪(1) for each received CAN message. § DATASET DESCRIPTION This section describes the dataset used for the training and the test of the detection algorithms. We used two different datasets for testing and training the detection algorithms. The first dataset is gathered from the CAN bus of an unmodified, licensed 2016 Volvo V40 Kinetic, and is called the Ventus dataset. The second dataset is the OTIDS <cit.> dataset, which is gathered from a KIA Soul. §.§ Ventus dataset The CAN data is recorded by physically connecting a laptop to the On-Board Diagnostic (OBD-II) port with a PCAN-USB adapter by Peak System <cit.> and a D-Sub to OBD-II cable. The high-speed CAN bus segment exposed on the OBD-II port of the vehicle contains data related to the powertrain, hence it is possible to access the CAN data frames exchanged by the ECUs to control the dynamics of the vehicle. The Ventus dataset is composed of the clean and the infected sections. The former is used for training and validating the detection algorithms, while the latter is used for the performance evaluation of the detection algorithms.
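As an illustration of the specification-based check summarized above for SAIDu-CANT, the sketch below learns per-ID minimum and maximum inter-arrival times from a clean trace and flags frames whose inter-arrival time falls outside these bounds. It is a deliberately simplified view (the original specification model also tracks the estimated period and the release jitter), and all function names are ours.

```python
# Simplified sketch of a per-ID [f_min, f_max] inter-arrival bound check.
from collections import defaultdict

def learn_bounds(clean_trace):
    """clean_trace: iterable of (timestamp_ms, can_id) from attack-free traffic."""
    last, bounds = {}, defaultdict(lambda: [float("inf"), 0.0])
    for ts, cid in clean_trace:
        if cid in last:
            d = ts - last[cid]
            bounds[cid][0] = min(bounds[cid][0], d)
            bounds[cid][1] = max(bounds[cid][1], d)
        last[cid] = ts
    return {cid: (lo, hi) for cid, (lo, hi) in bounds.items()}

def is_anomalous(delta_ms, cid, bounds):
    lo, hi = bounds.get(cid, (0.0, float("inf")))
    return not (lo <= delta_ms <= hi)
```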
For the generation of the infected dataset, we build a threat model based on the attack scenarios considered by the analyzed algorithms. The final dataset is publicly available at <cit.>. §.§.§ Clean dataset The clean dataset is composed by 7 different CAN traces, including more than 8 million CAN messages corresponding to approximately 90 minutes of CAN traffic. The CAN traces are gathered in different driving sessions performed on different road types (urban, suburban, and highway), traffic conditions, weather conditions, and geographical areas (plain, hill, and mountain). The CAN traces include ID, DLC, and payloads of each CAN data frame associated with its relative timestamp. The clean dataset includes 51 different message IDs, each one characterized by its own cycle time. The cycle time of each message ID is available to car makers and their suppliers in the DataBase for CAN (DBC) file, which is used to describe details of CAN communications. However, this file is kept confidential, and is not publicly available. Since the cycle time of each message might influence the detection outcome, we need to estimate the cycle times of the messages in the clean dataset for the definition of multiple attack scenarios, each one targeting a message with a different cycle time. The cycle time of each message is evaluated as the mean value of the inter-arrival times between two consecutive messages with the same ID, and is rounded to the nearest millisecond. Figure <ref> shows on the y-axis the cycle time ct (expressed in milliseconds) evaluated for each message ID (depicted on the x-axis) on the clean dataset. As shown in Figure <ref>, the clean dataset is composed by 49 messages exposing a cyclic behavior, while 2 messages are identified as non-cyclic. The 49 cyclic messages can be grouped in 17 different cycle time classes, ranging from a minimum of 10 milliseconds (IDs 0x8 and 0x10) to a maximum of 1 second (ID 0x581). We remark that these results are achieved on the clean dataset at our disposal. We empirically verified that the cycle time evaluated on a single trace of the clean dataset present extremely low variance compared with the cycle time evaluated on the other traces, hence we are confident that the results presented in Figure <ref> are representative of the real cycle time of the messages gathered from the same vehicle model. §.§.§ Infected dataset The threat model used for the generation of the infected dataset is built on the attack scenarios described in the related work <cit.>. The threat model is composed by two different attack scenarios, the message injection and the message removal attacks. The attack traces are generated in a laboratory environment for safety reasons. The laboratory setup is composed by a laptop computer, a Raspberry Pi 4 board, and a CANPico <cit.> device. The CAN bus is implemented through a breadboard. The laptop is connected via a PEAK CAN-USB device used to record the content of the CAN bus, the Raspberry is connected to the CAN bus via a CAN shield and is used to replay the normal traces gathered directly from our vehicle. The CANPico device has an integrated CAN transceiver and is connected directly to the CAN bus to generate the attacks. Since all transmitting devices are connected to the CAN bus via a CAN transceiver, re-transmissions, delays, arbitration and, in general, all the low level details that might have been affected by simulation artifacts are handled directly by the transceivers. 
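For reference, the cycle-time estimation described above (the mean inter-arrival time of each ID, rounded to the nearest millisecond) can be reproduced with a few lines of code. The sketch below is an illustrative re-implementation operating on (timestamp, ID) pairs and is not an excerpt of the released reference code.

from collections import defaultdict

def estimate_cycle_times(frames):
    """frames: iterable of (timestamp_seconds, can_id) pairs in chronological order.
    Returns {can_id: cycle_time_ms} for every ID observed at least twice."""
    last_seen = {}
    deltas = defaultdict(list)
    for ts, can_id in frames:
        if can_id in last_seen:
            deltas[can_id].append(ts - last_seen[can_id])
        last_seen[can_id] = ts
    return {cid: round(1000.0 * sum(d) / len(d)) for cid, d in deltas.items()}

A simple dispersion check on the collected inter-arrival times can then be used to separate the cyclic IDs from the non-cyclic ones mentioned above.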
Since the algorithms considered in this work are based on the analysis of the timings of the CAN messages, each attack scenario is replicated by targeting 10 different messages, each characterized by a different inter-arrival time. The list of the selected message IDs and their cycle-time are presented in Table <ref>. Message injection The message injection attack scenario is used to inject messages on the CAN bus to subvert its normal behavior, by exploiting modern drive-by-wire capabilities such as automatic emergency braking, lane assist, park assist, adaptive cruise control, automatic transmission, and other similar features. The message injection scenario comprises different attacks, such as the replay attack, in which a message already seen in the CAN communication is injected at a later time on the network, the fuzzing attack, in which messages with altered field values are injected to study the consequences on the system, and the denial-of-service, in which messages with high priority are injected with a very high frequency to prevent the delivery of normal CAN data frames. For the aim of our analysis, we do not focus on both fuzzing and denial-of-service for the following reasons: * in case of a fuzzing attack to the ID field of the CAN data frame, the detection algorithms would fail to detect any ongoing attack since it is not possible to build the detection model for a never observed message; * the fuzzing attack on the data field of the CAN data frame (with a legit message ID) can be considered as a replay attack on the same message; * the denial-of-service attack scenario can be analyzed by considering a replay attack with a high frequency of injection. Hence, for the aforementioned reasons, we only focus on the replay attack injection scenario. The replay attack is conducted by injecting on the CAN communication a target message (usually selected after an initial phase of reverse engineering of the values encoded in the payload) with a particular injection frequency. The final injection attack scenario considered in the experimental evaluation is composed by 50 different attack instances, in which each of the 10 selected message IDs is injected with a frequency of 1, 10, 25, 50, and 100 messages each second. The injected messages are equally distributed inside the 1 second time window. Each attack scenario is simulated on all the traces of the clean dataset, for a total of 350 CAN traces. Removal attack The removal attack scenario is used to remove messages from the CAN bus to subvert its normal behavior, preventing the ECUs from receiving data required for their functioning. The removal attack is composed by two different attack scenarios, as already presented in <cit.>: the ECU shutdown and the ECU inhibition attacks. However, since the only difference between the two attacks is their duration, which does not impact on the overall performance of the detection algorithms, in this work we only focus on the ECU inhibition scenario, in which target messages are completely removed from the CAN bus. The final removal attack scenario considered in the experimental evaluation is composed by 10 different attack instances, in which each of the 10 selected message IDs is completely removed from the CAN communication. As for the previous attack scenario, this attack is also simulated on all the traces composing the clean dataset, for a total of 70 CAN traces. §.§ OTIDS dataset The OTIDS dataset is gathered from an unmodified, licensed KIA Soul vehicle <cit.>. 
The OTIDS dataset is constructed by logging CAN traffic via the OBD-II port from a real vehicle while performing message injection attacks. The OTDIS dataset is composed by 4 different traces, one of them representing the clean dataset (“Attack free state”) used for training of the algorithm, while the other traces represent 3 different injection scearios. We also remark that all the traces composing the OTIDS dataset contain both data frame and remote frame messages. Since remote frames are not used in the algorithms tested in this paper and might change the outcome of the training process, we pre-processed the traces of the OTIDS dataset to leave only CAN data frames. §.§.§ Clean dataset The clean part of the OTIDS dataset is composed by a single trace, with a little more than 1.4 million CAN messages corresponding to approximately 10 minutes of CAN traffic. The clean CAN trace include the ID, DLC, and payloads of each CAN data frame associate with its relative timestamp. The clean part of the OTIDS dataset includes 45 different message IDs, each one characterized by its own cycle time. As for the Ventus dataset, we extracted the cycle times of each message from the clean trace (with the same process already described in <ref>), and the results of the analysis is shown in Figure <ref>. From the results shown in Figure <ref>, all the 45 IDs found in the OTIDS dataset exhibit a cyclic behavior, with a minimum of 9 milliseconds (IDs 0x153, 0x164, and 0x220), to a maximum of 1 second (IDs 0x34, 0x42, 0x43, and 0x44). In the OTIDS dataset we found 9 different cycle time classes. §.§.§ Infected dataset The OTIDS dataset assumes a threat model extremely different from the one considered in the Ventus dataset. We remark that the OTIDS infected dataset is composed by three different types of Injection attacks: * Fuzzy: in the fuzzy attack scenario, messages of spoofed random CAN ID and data are injected in the CAN communication; * Denial-of-Service: in the DoS attack scenario, messages with a CAN ID set to 0x0 are injected in the CAN communication with a high frequency; * Impersonation: in the impersonation attack scenario, valid messages with ID 0x164 are removed from the network and the attacking node injects messages with the same ID. In case of fuzzy and impersonation attacks, the injection starts after 250 seconds of normal CAN communication, while in case of the DoS attack scenario the attack starts at the beginning of the trace. § IMPLEMENTATIONS OF THE DETECTION ALGORITHMS We observe that none of the related work considered in this paper is distributed together with a reference implementation. Moreover, several papers neglect many relevant details that are actually required to implement the proposed detection algorithm. In this section, we describe the additional assumptions used for the implementation of the detection algorithms. All assumptions are motivated and based on the maximization of the detection capabilities of the algorithms. We remark that all our reference implementation are publicly available at <cit.>, thus allowing researchers to easily replicate our experiments and benchmark novel time-based detection algorithms with respect to the state-of-the-art. §.§ Implementation of Otsuka14 For the implementation of the detection algorithm we used the estimated cycle times (see Figures <ref> and <ref>) as the mean reception cycle used in the detection process. We remark that the original work only considers 2 IDs, of which only 1 is cyclic. 
The cyclic ID has a maximum deviation with respect to the mean cycle time of 2%, while the other ID has a maximum deviation of 30%. Based on these results, authors set a fixed detection threshold of δ = 5% from the expected cycle time. Hence any message received outside the valid time range of [ct^ID-5%, ct^ID+5%] is considered anomalous. The value of the threshold δ is defined as the threshold value that minimizes the number of false positives in the validation process. We replicated the same experimental analysis on both datasets, and the results are presented in Figure <ref> and <ref> for the Ventus and OTIDS dataset, respectively. The two figures shows the deviation from the evaluated cycle time for each message ID. The IDs of the messages are sorted by their cycle time in ascending order (left to right). The results depicted in both Figures <ref> and <ref> show that messages found on the CAN bus more often exhibit a higher deviation from the expected cycle time. The value of δ that minimizes the overall number of false positives on the Ventus dataset is experimentally evaluated to be δ = 4%. While testing the best value of δ to minimize the false positives on OTIDS however, we found out that by increasing the value of δ improves the output of the validation results. However, since higher values of δ introduce a larger time window in which messages are not considered anomalous, the value used in the experimental evaluation is fixed at δ = 25%. In the experimental evaluation presented in the next section we configured the algorithm with these values for its detection purposes. Moreover, the algorithm presented in is designed to discard the malicious CAN messages by using a waiting mechanism. With this mechanism, all the messages with the same CAN ID received before ct + δ are held and discarded as soon as another message with the same ID is received after ct + δ. However, this mechanism is based on the assumption that at least the first received message for each ID is legit, and its detection performance are highly affected in case this assumption is violated. To this aim, our implementation of considers all the messages with the same ID received before ct + δ as a single case: in case at least one of the held messages is an injected frame then a single anomaly is raised, while if no one of the held messages is anomalous only one false positive is considered. This design choice minimizes the false positive rate of this algorithm in case of high frequency injection attacks, thus optimizing its detection results. §.§ Implementation of Taylor15 For the implementation of we used the cycle times estimated on the clean datasets as the historical mean required by the algorithm for the t-tests. For the experimental evaluation of the detection performance of we follow the same assumptions described in <cit.>. We remark that in the messages with a cycle time below 50ms are representative of more than 90% of the CAN IDs, while in the Ventus and the OTIDS datasets these messages are only 61.22% and 53.33% respectively of the whole datasets (30 out of 49 and 24 out of 45 cyclic messages). This difference does not impact the detection performance of the algorithm, however this method is applicable only to messages having a cycle time below 50ms, hence in both datasets the algorithm has limited applicability compared to the original paper. 
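The core of our implementation of Taylor15 follows the shape sketched below: for each one-second window and each applicable ID (cycle time below 50ms), the observed inter-arrival times are compared with the historical mean through a one-sample t-test, and the largest absolute statistic in the window is retained. This is our reconstruction of the computation under the assumptions discussed in this section; it relies on scipy and on function names of our own choosing, and should not be read as the authors' code.

from collections import defaultdict
from scipy.stats import ttest_1samp

def window_t_statistics(frames, historical_mean_ms, window_s=1.0):
    """frames: chronologically ordered (timestamp_seconds, can_id) pairs.
    historical_mean_ms: per-ID mean inter-arrival time (ms) from the clean dataset.
    Yields, for every closed window, the largest absolute t statistic over all IDs."""
    window, last_seen, start = defaultdict(list), {}, None
    for ts, cid in frames:
        if start is None:
            start = ts
        if ts - start >= window_s:  # close the current one-second window
            yield max((abs(ttest_1samp(d, historical_mean_ms[i]).statistic)
                       for i, d in window.items() if len(d) > 1), default=0.0)
            window, start = defaultdict(list), ts
        if cid in last_seen and historical_mean_ms.get(cid, float("inf")) < 50.0:
            window[cid].append(1000.0 * (ts - last_seen[cid]))
        last_seen[cid] = ts

The anomaly score discussed next is then computed over short sequences of these per-window maxima.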
Moreover, in the original work where is presented there are some missing details that prevent it from being directly implemented, hence some additional assumptions are made also for this algorithm. The first additional assumption is focused to the number of sequences 𝒮_q used for the evaluation of the anomaly score A(𝒮_q). This is a crucial aspect for the evaluation of the detection performance of the algorithm, since it impacts directly on the detection performance. As an example, consider a scenario in which a value of 𝒮_q = 10 is used, i.e. the anomaly score is evaluated every 10 t-tests. As described in <cit.>, each item of the sequence is the value of the highest t-test in a window of 1 second. Hence, in the considered scenario, an anomaly score A(𝒮_q) evaluated on a sequence of 10 t-tests implies that the algorithm is able to detect a single anomaly within a 10 seconds time window. This design choice would prevent a fair comparison between the detection performance of and the other algorithms, that are able to flag many different anomaly in a similar time window. We tested different values of 𝒮_q for the evaluation of the anomaly score, from a minimum of 𝒮_q = 2 up to the whole series of t-tests. At the end of this analysis we discovered that the mean value of the A(𝒮_q) is minimally affected by the number of sequences used for its evaluation. Hence, we chose the value of 𝒮_q = 2 to allow the detection of anomalies in the smallest possible time window. The second additional assumption is focused on the definition of the detection threshold. In <cit.> this aspect is not discussed since the presented evaluation is focused on the analysis of the ROC and AUC measures, which are threshold-independent. However, for the performance evaluation we need to define a threshold value used to distinguish between normal and anomalous time windows. As a first step in the definition of the threshold value we analyzed the distribution of the anomaly scores on the clean datasets, and the results are shown in Figure <ref> for the Ventus dataset and in Figure <ref> for the OTIDS dataset. From the analysis of the distribution of the anomaly scores presented in Figures <ref> and <ref> it is possible to design two different detection mechanisms for each dataset. The first one is based on the analysis of the distribution range across the different traces, and defines a threshold value higher than the maximum anomaly score evaluated in the validation process. As an example, by considering the results achieved on the Ventus dataset depicted in Figure <ref>, the anomaly scores are always below 2.0, hence it is possible to use this value as the threshold for the classification of the anomalies. In the detection process, any A(𝒮_q) ≥ 2.0 is considered as an anomaly and all values below the threshold as considered legit values. The second detection mechanism is based on the similarity of the distributions with the distribution of the normal function, hence it is possible to define a threshold using the mean value and the standard deviation of the anomaly scores, as already proposed in <cit.>. Anomaly detection methods based on the similarity with the normal distribution often define the normal threshold as [μ - 3 ×σ, μ + 3 ×σ], and consider any outside value as anomalous. However, we remark that only 99.68% of the distribution is covered with a value of k = 3, hence an anomaly detector based on this detection model introduces 0.32% of false positives. 
Since this last detection method introduces false positives that negatively impact the performance evaluation of this detector, our implementation of Taylor15 uses a fixed detection threshold of 2.0 for the Ventus dataset and of 1.0 on the OTIDS dataset to distinguish between valid and anomalous time windows. §.§ Implementation of Cho16 The Cho16 detection algorithm uses the estimated clock skew to detect attacks on the CAN communications. The algorithm used for the clock skew estimation is described in the original work (Algorithm 1) and, although the pseudo-code is well documented and described, some information is missing and requires additional assumptions. The most critical assumptions are related to the initialization of the different parameters used for the evaluation of the identification error. Since the identification error is used for the detection task, it is critical to initialize these values correctly. The three most impactful parameters are the size of the window N, the initial value of the parameter P used in the procedure, and the number of standard deviations κ. While the size of the window N only impacts the detection delay of the algorithm, the initial value of the parameter P heavily impacts the computation of the identification error, and hence the detection capabilities of the algorithm. In an example described in the original work, N and κ are initialized to the values of 20 and 5, respectively. However, the value used to initialize P is not disclosed. We tested different combinations of N, P, and κ on the clean datasets, aiming to minimize false positives. The final values evaluated on the Ventus dataset are N = 10, P = 0.05, and κ=0.1, while on the OTIDS dataset the lowest number of false positives is given by the combination of N = 5, P = 0.001, and κ=2.5. We remark that these values were identified after a long process of testing on the clean dataset, since the original work does not describe how to identify the best parameters. Moreover, since the threat model considered in Cho16 assumes that the attack starts after 420 seconds of normal CAN communication, while the attacks on the Ventus dataset are generated from the beginning of each trace, we modified our implementation to evaluate the clock skew and identification errors on the clean base trace used for attack generation. This allows the algorithm to learn the clock skew and identification errors on legitimate values, which are then used as a reference against the attack scenarios. §.§ Implementation of Gmiden16 The description of the Gmiden16 detection algorithm in its original work includes all the details required for its functioning, hence we were able to produce a reference implementation that complies with the original design without the need for additional assumptions or design choices. In our version, we used the cycle time evaluated on the clean dataset as the reference inter-arrival time required by the algorithm. §.§ Implementation of Song16 The detection algorithm presented in <cit.> is based on the assumption that CAN messages exhibit a fixed inter-arrival time. However, the experimental analysis presented in <ref> demonstrates that this assumption does not hold on our dataset, where more frequent messages have a higher deviation from the mean value.
We remark that this phenomenon is well known in real CAN buses deployed in modern vehicles, and other real datasets exhibit the same behavior, due to possible delays and re-transmissions in CAN bus segments that have a relatively high usage (about 50% or higher). This is also acknowledged by producers of automotive ECUs, which consider deviations of up to 15% of the reference cycle time to be within the normal working parameters. While these assumptions are included in the original paper, it appears that they are not necessary for the proposed detection algorithms. We recall that Song16 includes two different algorithms targeting message injection and denial of service, respectively. The algorithm for detecting message injection appears to be identical to Gmiden16, hence we reuse the same implementation. The algorithm for detecting DoS attacks is clearly explained in the original paper, hence we produced a reference implementation that fully complies with the description provided by the authors. §.§ Implementation of Moore17 In the original work presenting Moore17, the authors assumed that the first 15 seconds of each trace are unaltered, and they used the first 5 seconds for the definition of the detection model. However, in the infected traces composing the Ventus dataset the attacks are generated starting from the beginning of each trace, as described in Section <ref>. Since the Ventus dataset contains a clean section composed of more than 90 minutes of clean CAN traffic, we used the first 5 seconds of the traces composing the clean dataset for training Moore17. However, in the OTIDS dataset the attacks are generated after 250 seconds of normal traffic, hence we trained the detection model on the first 5 seconds of the infected trace. Following the analysis presented in <cit.>, we also compared the maximum variance of the cycle time of each message ID with respect to the expected cycle time ct on the first 5 seconds of the traces composing the datasets. Results are presented in Figures <ref> and <ref> for the Ventus and OTIDS dataset, respectively. From this analysis it is possible to notice that all the IDs exhibit a small variance from the expected cycle time in the first 5 seconds of the traces of the clean section of the datasets, with the exception of ID 0x405 for the Ventus dataset and of IDs 0x18 and 0x50 for the OTIDS dataset. We remark that these results are comparable to the ones presented in <cit.> on the dataset at the authors' disposal. We recall that Moore17 raises an alert if the time between consecutive messages with the same ID, Δ_t, is lower or greater than a threshold (see Section <ref> for additional details). To prevent false positives, Moore17 raises an anomaly only if three consecutive alerts are generated for the same ID. This design choice prevents a direct comparison of false positive and false negative rates with the other algorithms. To allow performance comparisons without penalizing Moore17, given an anomaly we classify all the injected messages that generated one of the three consecutive alerts as true positives. If a legit message generated one of the alerts required for issuing an anomaly, we do not count that as a false positive. On the other hand, if an anomaly has been raised after three alerts all generated by legit messages, we consider that anomaly as a single false positive. This solution increases the overall detection performance of Moore17 and makes it comparable to the other algorithms in terms of ℱ-measure.
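Putting these choices together, the detection rule we implemented for Moore17 can be summarized by the following sketch. The per-ID threshold ct · 0.15 + m and the three-consecutive-alerts rule are taken from the description above; the symmetric band around ct and the remaining scaffolding are assumptions of ours, since the original paper leaves these details open.

def moore17_detector(frames, cycle_time_ms, max_error_ms, consecutive=3):
    """frames: chronologically ordered (timestamp_seconds, can_id) pairs.
    cycle_time_ms / max_error_ms: per-ID expected cycle time ct and maximum
    observed error m, learned on the first 5 seconds of clean traffic.
    Returns the indices of the frames on which an anomaly is raised."""
    anomalies, last_seen, streak = [], {}, {}
    for idx, (ts, cid) in enumerate(frames):
        if cid in last_seen and cid in cycle_time_ms:
            delta_ms = 1000.0 * (ts - last_seen[cid])
            threshold = cycle_time_ms[cid] * 0.15 + max_error_ms[cid]
            if abs(delta_ms - cycle_time_ms[cid]) > threshold:
                streak[cid] = streak.get(cid, 0) + 1   # alert
            else:
                streak[cid] = 0
            if streak[cid] >= consecutive:             # three consecutive alerts
                anomalies.append(idx)
                streak[cid] = 0
        last_seen[cid] = ts
    return anomalies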
§.§ Implementation of Stabili19 The detection algorithm is based on the assumption that it is possible to create a detection model that raises 0 false positives in the validation phase. To this aim, uses a parameter k for the definition of the valid waiting time for each different ID. We replicated the tuning of the parameter k following the description provided in <cit.> over both datasets, and the results of this analysis are presented in Figure <ref> and <ref> for the Ventus and OTIDS datasets, respectively. At the end of this training process on the Ventus dataset, the minimum value of the parameter k that does not generate false positives on the clean dataset is equal to 2 (for 16 message IDs), while the maximum value is 8 (for a single message ID). On the OTIDS dataset however, all the IDs do not generate any false positive by using a value of k = 2. Moreover, focusing on the Ventus dataset and comparing the results presented in Figure <ref> with the ones presented in Figure <ref> it is possible to notice that messages with lower cycle time values exhibit a higher deviation from the expected cycle time and require a higher value of the parameter k to achieve 0 false positives. This interesting result is explained by considering that messages with lower cycle time (i.e. appearing on the CAN bus more frequently than the others) are more prone to delays and transmission errors over a real CAN bus that has a load comparable or higher to 50%, hence they require higher tolerance by any detection mechanism based on the analysis of the timings. §.§ Implementation of Olufowobi20 The detection algorithm is composed by the training and the detection phase, which are presented in Algorithm 1 and 2 of the original paper, respectively. The algorithm is based on the assumption that the real model and its parameters are unknown and unavailable, since these information (specifically the precise message periods) is generally not disclosed by manufacturers. For this reason considers a detection model and based on parameters derived from observations of the CAN communications. Hence, the training phase of is focused on the definition of the parameters required for the detection process: the estimated message period P̃_̃ĩ=f_i,min and the release jitter J_i=f_i,max-f_i,min. For the estimation of these parameters the transmission time C_i is required, which is defined as the time required to transmit a message on the CAN bus. However, we remark that it is not possible to identify the value of C_i by observing normal CAN communications but should be learned by measuring the transmission time on the ECU transmitting a CAN message. Since it is not possible to identify the value of C_i from the clean dataset, we used a worst-case estimation of this parameter in the training process. For the worst-case estimation of C_i we used the clean dataset to reconstruct the sequence of bits composing the CAN data frame to calculate the size of the message size(m_i). Than the value of C_i is defined as C_i = max(size(m_i)) / bitrate, where max(size(m_i)) is the maximum reconstructed bit size of CAN messages with the same ID i and bitrate is the nominal bitrate of the CAN. In our experiments, the nominal bitrate of both datasets is 500kbps. Another issue raised during the implementation of is related to the availability of the value of the precise message period P_i. We remark that P_i (as described in the original work) belongs to the list of parameters not available for the definition of the detection model. 
However, in the description of the detection process (Algorithm 2 of the original work) the precise message period P_i is used as one of the parameters of the algorithm. Since the precise message period P_i is different from the estimated message period P̃_̃ĩ (which is computed in the training phase), in our implementation we defined P_i as the cycle time ct. Moreover, another issue we discovered while implementing the detection algorithm is that the variable k (one of the inputs of the detection function) is modified (in Algorithm 2, row 7), but the modified value is never returned and is therefore lost between invocations. This variable is used for the evaluation of the arrival time window of the next message, and is crucial for the detection task. In our implementation of Olufowobi20 we modified the detection algorithm so that it returns the value of k whenever it is modified. Despite the aforementioned issues raised during the implementation of Olufowobi20, in its experimental evaluation we discovered that the latter modification introduces a significant downside that afflicts the detection performance of Olufowobi20. In particular, this new issue is related to the modification of the value of k when a message is marked as an anomaly. In this scenario, the algorithm updates the value of k and, in case of injection attacks, all the following messages are identified as anomalies, since the arrival time window used for the detection task is out of sync with the tested message, resulting in thousands of false positives. To overcome this issue, we introduced an "update protection mechanism" on the value of k that triggers when a message is considered anomalous, reverting the value of k to its previous value. § EXPERIMENTAL EVALUATION In this section, we present the experimental evaluation of the detection performance of the algorithms against the datasets (and their respective threat models) presented in Section <ref>. To compare the detection performance of the different algorithms against the described attack scenarios we consider the ℱ-measure as the key performance indicator. The ℱ-measure is the harmonic mean of the precision and the recall of the detection performance. This indicator is commonly used in comparing intrusion detection systems. It ranges from 0 to 1, where values close to 0 denote the inability to detect any anomaly and values close to 1 denote the ability of the algorithm to detect all the anomalies with few false positives. The perfect detection algorithm exhibiting 100% precision and recall has an ℱ-measure equal to 1. §.§ Detection results against the Ventus dataset §.§.§ Message injection detection comparison The performance evaluation of the algorithms against the message injection attack is presented in Figure <ref>. Figure <ref> is composed of 5 different subplots representing different attack scenarios. Top to bottom, these attack scenarios are the injection of 1, 10, 25, 50, and 100 messages each second. The y-axis of each subplot of Figure <ref> shows the ℱ-measure evaluated with the detection algorithms, while the x-axis represents the injected message ID, ordered by ascending cycle time. The 10 reference message IDs used in these experiments are the ones presented in <ref> of Section <ref>, as they represent messages with different cycle times. The detection results of each algorithm against a given combination of ID and injection frequency are represented as a boxplot. We used this graphical indicator since it allows us to summarize in a concise representation the detection results achieved against the same attack (same ID and injection frequency) across different infected traces.
For readability purposes, we only show the main components of the boxplot: the 10_th, 25_th, 50_th (median), 75_th, and 90_th percentiles. As an example, the top subplot of Figure <ref> refers to injection attacks performed with an injection frequency of 1 message per second. This subplot is divided into 10 “columns”, each referring to the injection of a different message ID. The first of these columns refers to the injection of message ID 0x10, and includes 6 boxplots drawn in different colors and having a small horizontal offset to improve readability. The first boxplot (red) summarizes the detection performance of , the second (blue) of , the third (dark orange) of , the fourth (cyan) of and , the fifth (green) of , while the last one (purple) of . We remark that is not included in this set of experiments since it is designed to detect only missing messages, and it cannot be applied against injection attacks. To highlight the trend related to the detection performance depending on the message cycle time, we also draw solid lines connecting the median values of all the boxplots related to the same algorithm. We recall that is designed to support only messages with a cycle time lower than 50ms (see Section <ref> for additional details), hence this algorithm is only applicable to the first 4 message IDs having the lower cycle times among the 10 considered in our experiments. To visually represent the inapplicability of to the last 6 IDs we draw a blue × instead of the boxplot. From the analysis of the results shown in Figure <ref> it is possible to notice different trends by either focusing on the comparison of the detection performance against the injection of the same volume of messages with different IDs (“horizontal” analysis) or by focusing on the difference between the performance against the increasing injection rate of the same message ID (“vertical” analysis). In both cases however, we remark that our implementation of is not able to detect any anomaly in the considered attack scenarios, since the injection of messages does not introduce a significant deviation from the normal anomaly score used by this algorithm. With the “horizontal” analysis it is possible to notice that by using and () the overall detection performance of the algorithms increases by simulating the attack on messages with higher cycle times, while the overall trend for is the opposite. By focusing on the results of and () algorithms with a “vertical” analysis it is possible to notice that achieves higher detection performance against the injection of a low volume of messages (≤ 10 messages per second), while for injection frequency of at least 25 messages per second () converge to higher also against the injection of messages with low cycle times. Both algorithms are limited in their detection against the injection of the message with ID 0x10 (i.e. the message with the lowest cycle time in our dataset). We remark also that () is able to achieve a of 1.0 against different injection scenarios (the top 2, 4, 6, and 8 messages with the highest cycle times against the injection of 10, 25, 50, and 100 messages per second, respectively), while achieves a perfect only in a subset of these scenarios (the top 1, and 3 messages with the highest cycle times against the injection of 50, and 100 messages per second, respectively). 
By focusing on with a “vertical” analysis however it is possible to notice that the overall detection performance of the algorithm increases by increasing the injection frequency, despite having high variance across different tests. The detection results achieved by however are completely different and require a dedicated analysis. As presented in Figure <ref>, by comparing the detection performance of against the different injection frequencies it is not possible to identify any trend with both “horizontal” and “vertical” analysis. We recall that uses a false-positive prevention system which raises an anomaly only after 3 consecutive alerts, as described in Section <ref>. As an example, consider the case of injection of a single message each second using the message with the lowest cycle time (ID 0x10). In this scenario the detection performance of are extremely low since the injected message is right after or just before another valid message, thus resulting in the generation of up to 2 alerts, that are not enough to cause the generation of an anomaly. However, messages with higher cycle times have a smaller margin of error (comparing the experimental results for the definition of the parameter m discussed in Section <ref>), hence it is possible to detect anomalies more frequently. One could expect that by increasing the injection frequency the overall detection performance should also increase, however the experimental evaluation demonstrates that for high enough injection frequencies starts to generate an increasing number of false negatives (i.e. missed anomalies). To better explain this counter-intuitive behavior, we refer to an example. Consider the scenario of the injection of the message 0x581, which has a ct = 1000ms. For this message, the value of the parameter m is 20ms, hence the time required for the identification of an alarm for this message is approximately 152ms (see Section <ref> for additional details on how detects anomalies). Since in our dataset the injection is simulated by equally distributing the injected messages on the interested time window, by injecting messages with a frequency of 10 messages per second we are injecting a message every 100 milliseconds, thus the time required for the identification of an alert of 152ms is higher than the cycle time between two consecutive messages, increasing the number of false negatives. This explains the two different trends that can be observed by comparing the “vertical” and “horizontal” analysis. By targeting messages with higher cycle time, the overall detection performance of increases. However, by increasing the injection frequency it is possible to increase the detection performance against the injection of messages with lower cycle times, despite there is a huge increment of false negatives for messages whose time required for the identification of a single alert is higher than the time between two consecutive injected messages. Finally, by analyzing the detection performance of “horizontally” it is possible to notice that the detection performance of the algorithm increases by increasing the cycle time of the injected message, in a trend similar to the one already observed for and (). However, with the “vertical” analysis it is possible to notice that the detection performance are not influenced by increasing the frequency of the injection. We also remark that the detection performance of against the injection of the message with the highest cycle time are always equal to 0. 
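For completeness, the ℱ-measure values reported in this section are computed from the per-trace counts of true positives, false positives, and false negatives in the standard way; the short snippet below spells this out (standard formulas only, not tied to any specific algorithm).

def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall; returns 0 when undefined."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)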
§.§.§ Message removal detection comparison To compare the detection performance of the only two algorithms supporting the detection against the message removal attack we present the results by means of percentage of detected anomalies, i.e. the number of alarms raised by the two detectors compared to the number of removed messages. We remark that it is possible to use this detection metric only in case the training of the detectors is based on a zero false positives approach, since it is impossible to distinguish between true positives and false positives otherwise. The detection results of the and against the message removal attack are presented in Figure <ref>, where the percentage of detected anomalies (y-axis) for each removed message ID (x-axis) is shown. The percentage of detected anomalies is presented using the box-plots. The detection performance of the two algorithms against this attack scenario are extremely consistent across the different missing IDs, hence it is nearly impossible to notice the different percentiles since they overlap with the median. We recall that is designed to support only messages with a cycle time below 50 milliseconds, hence it is possible to use it only against the removal of the first 4 message IDs. From the analysis of the results depicted in Figure <ref> it is clear that our implementation of is not able to reliably detect anomalies also against this attack scenario, since the removal of messages from normal CAN communications does not introduce significant deviations of the anomaly score used by . The detection performance achieved by are close to 100%, although this ideal value is never reached. This aspect has already been addressed in <cit.> and is related to the introduction of the valid waiting time required to achieve zero false positives in the validation process. §.§ Detection results against the OTIDS dataset In this section we present the performance evaluation of the detection algorithms against the OTIDS dataset. To perform an evaluation of the detection algorithms, a labeled dataset is required, and since the OTIDS dataset is not labeled, we recreated the labels by following the description of the attacks. The three attack scenarios included in the OTIDS dataset are described as follows: * Fuzzy attack: the attack starts after 250 seconds of normal traffic, and includes both normal and injected messages with 8 different message IDs: 0x153, 0x164, 0x1F1, 0x220, 0x2C0, 0x4B0, 0x4B1, and 0x5A0. * Denial-of-Service attack: the attack starts at the beginning of the trace, and injects messages with ID 0x0. * Impersonation attack: the attack starts after 250 seconds of normal traffic, removes all messages with ID 0x164 and inject messages with the same message ID mimicking its normal cycle time. Following the attack description, we remark that it is not possible to distinguish between normal and injected messages in case of the fuzzy attack scenario, hence we can not use that particular attack scenario in our experiments, with only the DoS and the impersonation attack scenarios available for performance evaluation. However, we remark also that in the DoS attack scenario the injected message ID is not found in the training trace, thus making the detection task trivial by simply checking if there is a reference for the ID in the detection model. Hence, the only attack scenario from the OTIDS dataset that can be used for performance evaluation is the impersonation attack. 
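Concretely, the relabeling we applied to the impersonation trace can be summarized as follows. The target ID 0x164 and the 250-second offset come directly from the attack description above; the code is an illustrative sketch, and in practice a small tolerance around the attack start time may be needed.

ATTACK_START_S = 250.0   # normal traffic precedes the attack
TARGET_ID = 0x164        # impersonated message ID

def label_impersonation_trace(frames):
    """frames: iterable of (timestamp_seconds, can_id) pairs.
    Returns a parallel list of booleans, True marking frames treated as injected."""
    return [ts >= ATTACK_START_S and can_id == TARGET_ID
            for ts, can_id in frames]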
The comparison of the performance evaluation of the algorithms against the impersonation attack scenario of the OTIDS dataset is shown in Figure <ref>. Figure <ref> shows, for each detection algorithm, the ℱ-measure evaluated on the impersonation attack scenario. The results shown in Figure <ref> shows that in this particular attack scenario the algorithm that is able to achieve the highest detection performance are and , while the other detection techniques struggles to achieve a ℱ-measure close to 0.5, with failing to detect a single anomaly (ℱ-measure equal to 0). However, we remark that these results are relative to a single test case, and that by changing the injected message or its injection frequency the overall detection results might vary significantly, as observed previously in Section <ref>. In the OTIDS impersonation attack scenario, the impersonated message ID 0x164 is one of the most frequent messages, with a cycle time of 9 milliseconds. The impersonation attack requires the removal of the target message after 250 seconds, and the injection of a single message with the same message ID every 9 milliseconds, mimicking the original frequency. In these conditions, we remark that the impersonation attack of the OTIDS dataset is actually a particular scenario of masquerading attack, in which the injected CAN messages substitute the normal ones. However, following the performance evaluation of the tested algorithms we conducted manual analysis of the detection results to understand the behavior of the detection algorithms. The first interesting results from this analysis is that , , and raise anomalies after the start of the attack. By analyzing the beginning of the attack simulation, we discovered that in the first 18 milliseconds following the start of the attack there are 4 different messages with ID 0x164 instead of the expected 2. This implies that in this time window the attack scenario is closer to the message injection attack considered in the threat model of Ventus, allowing , , and to detect anomalies. The second interesting result is that both and raised the majority of their alarms after approximately 50 milliseconds since the start of the attack, being able to identify manipulated messages as anomalous. This implies that the evaluated detection performance of these two algorithms are to be considered as against the impersonation/masquerade attack scenario and not relative to the message injection attack. This interesting result might also be the cause for the low detection performance of both algorithms against a real message injection attack scenario as presented earlier in Section <ref>. As a final remark, we highlight that the detection performance of are heavily affected by the values of its configuration parameter. However, we were not able to identify any combination of values that allows to identify anomalies against the attacks on the OTIDS dataset. § CONCLUSIONS This paper contributes to the state-of-the-art by (I) surveying and implementing eight different detection algorithms based on CAN message timing analysis; by (II) publicly releasing their reference implementations; and by (III) testing the implemented algorithms against two different datasets, to present a detailed comparison of the detection performance of the analyzed detection algorithms against the same threat model and using the same detection metrics (and detection rate). 
The novel dataset used for our experimental evaluation <cit.>, which comprises more than 90 minutes of training data and more than 400 CAN traces containing different labeled attacks, is publicly available to advance current solutions. With respect to the current state of the art, this work presents an empirical and unbiased comparison of timing-based detection algorithms against the threat models of two different datasets, addressing the reproducibility of the results and highlighting the limitations of the dataset that was already publicly available for the performance evaluation of CAN anomaly detection algorithms. All the implemented algorithms and the dataset used in our experimental evaluation are publicly available to enable further improvements on this research topic. Our main motivation is the impossibility of a direct comparison of similar proposals, due to inherent limitations of the literature. Authors usually do not release the source code of their implementations. Detection algorithms are described with an insufficient level of detail, thus requiring additional assumptions for their implementation that might have a considerable impact on detection performance. Different algorithms are tested on different private datasets, and the same attack is often implemented in different ways. These issues make it impossible to demonstrate an advancement over the state of the art, or even to compare novel proposals against it. This work addresses all the aforementioned issues, thus establishing a fair, transparent, and open-source baseline that can be used by researchers and industry practitioners.
Priority Downward Closures
Ashwani Anand and Georg Zetzsche
=============================================================
When a system sends messages through a lossy channel, then the language encoding all sequences of messages can be abstracted by its downward closure, i.e. the set of all (not necessarily contiguous) subwords. This is useful because even if the system has infinitely many states, its downward closure is a regular language. However, if the channel has congestion control based on priorities assigned to the messages, then we need a finer abstraction: The downward closure with respect to the priority embedding. As for subword-based downward closures, one can also show that these priority downward closures are always regular. While computing finite automata for the subword-based downward closure is well understood, nothing is known in the case of priorities. We initiate the study of this problem and provide algorithms to compute priority downward closures for regular languages, one-counter languages, and context-free languages. § INTRODUCTION When analyzing infinite-state systems, it is often possible to replace individual components by an overapproximation based on (subword) downward closures. Here, the (subword) downward closure of a language L⊆Σ^* is the set of all words that appear as (not necessarily contiguous) subwords of members of L. This overapproximation is usually possible because the verified properties are not changed when we allow additional behaviors resulting from subwords. Furthermore, this overapproximation simplifies the system because a well-known result by Haines is that for every language L⊆Σ^*, its subword downward closure is regular. This idea has been successfully applied to many verification tasks, such as the verification of restricted lossy channel systems <cit.>, concurrent programs with dynamic thread spawning and bounded context-switching <cit.>, asynchronous programs (safety, termination, liveness <cit.>, but also context-free refinement verification <cit.>), the analysis of thread pools <cit.>, and safety of parameterized asynchronous shared-memory systems <cit.>. For these reasons, there has been a substantial amount of interest in algorithms to compute finite automata for subword downward closures of given infinite-state systems <cit.>. One situation where downward closures are useful is that of systems that send messages through a lossy channel, meaning that every message can be lost on the way. Then clearly, the downward closure of the set of sequences of messages is exactly the set of sequences observed by the receiver. This works as long as all messages can be dropped arbitrarily. Priorities However, if the messages are not dropped arbitrarily but as part of congestion control, then taking the set of all subwords would be too coarse an abstraction: Suppose we want to prioritize critical messages that can only be dropped if there are no lower-priority messages in the channel. For example, RFC 2475 describes an architecture that allows specifying relative priority among the IP packets from a finite set of priorities and allows the network links to drop lower priority packets to accommodate higher priority ones when the congestion in the network reaches a critical point <cit.>. As another example, in networks with an Asynchronous Transfer Mode layer, cells carry a priority in order to give preference to audio or video packets over less time-critical packets <cit.>.
In these situations, the subword downward closure would introduce behaviors that are not actually possible in the system. To formally capture the effect of dropping messages by priorities, Haase, Schmitz and Schnoebelen <cit.> introduced Priority Channel Systems (PCS). These feature an ordering on words (i.e. channel contents), called the Prioritised Superseding Order (PSO), which allows the messages to have an assigned priority, such that higher priority messages can supersede lower priority ones. This order indeed allows the messages to be treated discriminatively, but the superseding is asymmetric. A message can be superseded only if there is a higher priority letter coming in the channel later. This means, PSO are the “priority counterpart” of the subword order for channels with priorities. In particular, in these systems, components can be abstracted by their priority downward closure, the downward closure with respect to the PSO. Fortunately, just as for subwords, priority downward closures are also always regular. This raises the question of whether it is possible to compute finite automata for the priority downward closure for given infinite-state systems. For example, consider a recursive program that sends messages into a lossy channel with congestion control. Then, the set of possible message sequences that can arrive is exactly the priority downward closure S of the language S of sent messages. Since S is context-free in this case, we would like to compute a finite automaton for S. While this problem is well-understood for subwords, nothing is known for priority downward closures. Contribution We initiate the study of computing priority downward closures. We show two main results. On the one hand, we study the setting above—computing priority downward closures of context-free languages. Here, we show that one can compute a doubly-exponential-sized automaton for its priority downward closure. On the other hand, we consider a natural restriction of context-free languages: We show that for one-counter automata, there is a polynomial-time algorithm to compute the priority downward closure. Key technical ingredients The first step is to consider a related order on words, which we call block order, which also has priorities assigned to letters, but imposes them more symmetrically. Moreover, we show that under mild assumptions, computing priority downward closures reduces to computing block downward closures. Both our constructions—for one-counter automata and context-free languages—require new ideas. For one-counter automata, we modify the subword-based downward closures construction from <cit.> in a non-obvious way to block downward closures. Crucially, our modification relies on the insight that, in some word, repeating existing factors will always yield a word that is larger in the block order. For context-free languages, we present a novel inductive approach: We decompose the input language into finitely many languages with fewer priority levels and apply the construction recursively. Outline of the paper We fix notation in <ref> and introduce the block order and show its relationship to the priority order in <ref>. In <ref>, we then present methods for computing block and priority downward closures for regular languages, one-counter languages, and context-free languages, respectively. § PRELIMINARIES We will use the convention that [i,j] denotes the set {i,i+1,…,j}. By Σ, we represent a finite alphabet. Σ^* (Σ^+) denotes the set of (non-empty) words over . 
When defining the priority order, we will equip with a set of priorities with total order (,⋖ ), i.e. there exists a fixed priority mapping from to . The set of priority will be the set of integers [0,d], with the canonical total order. By sets p (p∈), we denote the set of letters in with priority p. For priority p∈, p=0∪⋯∪p, i.e. the set of letters smaller than or equal to p. For a word w=a_0a_1⋯ a_k, where a_i∈, by w[i,j], we denote the infix a_ia_i+1⋯ a_j-1a_j, and by w[i], we denote a_i. Finite automata and regular languages A non-deterministic finite state automaton () is a tuple = (Q, Σ, δ, q_0, F), where Q is a finite set of states, Σ is its input alphabet, δ is its set of edges i.e. a finite subset of Q×Σ∪{ϵ}× Q, q_0∈ Q is its initial state, and F⊆ Q is its set of final states. A word is accepted by if it has a run from the initial state ending in a final state. The language recognized by an NFA is called a regular language, and is denoted by (). The size of a , denoted by ||, is the number of states in the . (Well-)quasi-orders A quasi-order, denoted as (X, ≤), is a set X with a reflexive and transitive relation ≤ on X. If x≤ y (or equivalently, y≥ x), we say that x is smaller than y, or y is greater than x. If ≤ is also anti-symmetric, then it is called a partial order. If every pair of elements in X is comparable by ≤, then it is called a total or linear order. Let (X,≤_1) and (Y,≤_2) be two quasi orders, and h:X→ Y be a function. We call h a monomorphism if it is one-to-one and x_1≤_1 x_2 h(x_1)≤_2 h(x_2). A quasi order (X, ≤) is called a well-quasi order (), if any infinite sequence of elements x_0,x_1,x_2,… from X contains an increasing pair x_i≤ x_j with i<j. If X is the set of words over some alphabet, then a (X,≤) is called multiplicative if ∀ u,u',v,v'∈ X, u≤ u' and v≤ v' imply that uv≤ u'v'. Subwords For u,v∈Σ^*, we say u v, which we refer to as subword order, if u is a subword (not necessarily, contiguous) of v, i.e. if u = u_1u_2⋯ u_k and, v = v_0u_1v_1u_2v_2⋯ v_k-1u_kv_k where u_i∈ and v_i∈^*. In simpler words, u v if some letters of v can be dropped to obtain u. For example, let =[0,1]. Then, 0 00 010110; 0 and 00 can be obtained by dropping letters from 00 and 010, respectively. But 010 cannot be obtained from 110, as the latter does not have sufficiently many 0s. If u v, we say that u is subword smaller than v, or simply that u is a subword of v. And we call a mapping from the positions in u to positions in v that witnesses u v as the witness position mapping. Since Σ is a with the equality order, by Higman's lemma, Σ^* is a with the subword order. It is in fact a multiplicative WQO: if u u' and v v', then dropping the same letters from u'v' gives us uv. Priority order We take an alphabet with priorities totally ordered by . We say u v, which we refer to as priority order, if u=ϵ or, u = u_1u_2⋯ u_k and, v = v_1u_1v_2u_2⋯ v_k u_k, such that ∀ i∈ [1,k], u_i∈ and v_i∈u_i^*. It is easy to observe that the priority order is multiplicative, and is finer than the subword order, i.e. ∀ u,v∈^*, u v u v. As shown in <cit.>, the priority order on words over a finite alphabet with priorities is a well-quasi ordering: (Σ^*, ) is a . Downward closure We define the subword downward closure and priority downward closure for a language L⊆Σ^* as follows: L :={ u∈^* |∃ v∈ L u v }, L :={ u∈^*|∃ v∈ L u v }. The following is the starting point for our investigation: It shows that for every language L, there exist finite automata for its downward closures w.r.t. and . 
lemmadonwardclosuresregular Every subword downward closed sets and every priority downward closed set is regular. For the subword order, this was shown by Haines <cit.>. The same idea applies to the priority ordering: A downward closed set is the complement of an upward closed set. Therefore, and since every upward closed set in a well-quasi ordering has finitely many minimal elements, it suffices to show that the set of all words above a single word is a regular language. This, in turn, is shown using a simple automaton construction. In <ref>, we prove an analogue of this for the block ordering (<ref>). We stress that <ref> is not effective: It does not guarantee that finite automata for downward closures can be computed for any given language. In fact, there are language classes for which they are not computable, such as reachability sets of lossy channel systems and Church-Rosser languages <cit.>. Therefore, our focus will be on the question of how to effectively compute automata for priority downward closures. § THE BLOCK ORDER We first define the block order formally and then give the intuition behind the definition. Let Σ be a finite alphabet, and = [0,d] be a set of priorities with a total order . Then for u,v∈Σ^*, where maximum priority occurring among u and v is p, we say u v, if * if u,v∈p^*, and u v, or * if u = u_0x_0u_1x_1⋯ x_n-1u_n and, v = v_0y_0v_1y_1⋯ y_m-1v_m where x_0,… x_n-1,y_0,…, y_m-1∈p, and for all i∈[0,n], we have u_i,v_i∈p-1^* (the u_i and v_i are called sub-p blocks), and there exists a strictly monotonically increasing map ϕ:[0,n]→ [0,m], which we call the witness block map, such that * u_i v_ϕ(i), ∀ i, * ϕ(0)= 0, * ϕ(n) =m, and * x_i v_ϕ(i)y_ϕ(i)v_ϕ(i)+1⋯ v_ϕ(i+1), ∀ i ∈ [0,n-1]. Intuitively, we say that u is block smaller than v, if either * both words have letters of same priority, and u is a subword of v, or, * the largest priority occurring in both words is p. Then we split both words along the priority p letters, to obtain sequences of sub-p blocks of words, which have words of strictly less priority. Then by item <ref>, we embed the sub-p blocks of u to those of v, such that they are recursively block smaller. Then with items <ref> and <ref>, we ensure that the first (and last) sub-p block of u is embedded in the first (resp., last) sub-p block of v. We will see later that this constraint allows the order to be multiplicative. Finally, by item <ref>, we ensure that the letters of priority p in u are preserved in v, i.e. every x_i indeed occurs between the embeddings of the sub-p block u_i and u_i+1. Consider the alphabet = { 0^a,0^b,1^a,1^b,2^a,2^b } with priority set =[0,2] and i={i^a, i^b}. In the following examples, the color helps to identify the largest priority occurring in the words. First, notice that ϵ 0^a 0^a0^b, and hence 1^b0^a0^a 1^b0^a0^a1^a0^a0^b, but 1^b0^a0^a 1^b0^a0^a 1^a0^b0^b. This is because 0^a0^b0^b, i.e. the last sub-1 block of the former word cannot be mapped to the last sub-1 block of the latter word. As another example, we have 2^a1^b0^a 0^a 2^a0^a1^b0^a0^a1^a0^a0^b, but 2^a1^b0^a0^a 2^b0^a1^b0^a0^a1^a0^a0^b. This is because 2^a does not exist in the latter word, violating item <ref>. Finally, notice that 1^a1^b1^a 2^a 1^b, because the sub-2 block 1^a1^b would have to be mapped to a single sub-2 block in the right-hand word; but none of them can accomodate 1^a1^b. Note that by items <ref> and <ref>, we have that u v u v, for all u,v∈^*. 
Then there exists a position mapping ρ from [0,|u|] to [0,|v|] such that u[i]=v[ρ(i)], for all i. We say that a position mapping respects block order if for all i, v[ρ(i),ρ(i+1)] contains letters of priorities smaller than u[i] and u[i+1]. It is easy to observe that if u v, then there exists a position mapping from u to v respecting the block order. The following is a straightforward repeated application of Higman's Lemma <cit.> (see <ref>). theoremthmgeneralizedblockwqo (Σ^*, ) is a . In fact, the block order is multiplicative, i.e. for all u,v,u',v'∈^* such that u u' and v v', it holds that uv u'v'. (Σ^*,) is a multiplicative . For singleton , the result trivially holds because it coincides with the subword order. Let (p-1^*,) be multiplicative. Now we show that (p^*,) is multiplicative. To this end, let u u', v v', and ϕ, ψ be the witnessing block maps respectively. We assume u = u_0x_0u_1x_1u_2x_2⋯ x_k-1u_k v = v_0y_0v_1y_1v_2y_2⋯ y_l-1v_l u' = u'_0x'_0u'_1x'_1u'_2x'_2⋯ x'_k-1u'_k' v' = v'_0y'_0v'_1y'_1v'_2y'_2⋯ y'_l-1v'_l' where x_i,y_i,x_i',y_i'∈p. Consider the function δ [0,k+l-1]→ [0,k'+l'-1] with i↦ϕ(i), if 1≤ i≤ k ψ(i-k+1), if k< i≤ k+l-1 Since the k^th sub-p block of u and the 1^st sub-p block of v combines in uv to form one sub-p block, we have k+l-1 sub-p blocks. Similarly, u'v' has k'+l'-1 sub-p blocks. And hence u_kv_1 u'_k'v'_1, by induction hypothesis. The recursive embedding is obvious for other sub-p blocks. We also have that δ(0)=0 and δ(k+l-1)=k'+l'-1. By monotonicity of ϕ and ψ, δ is also strictly monotonically increasing. Hence, δ witnesses uv u'v'. Pumping In the subword ordering, an often applied property is that for any words u,v,w, we have uw uvw, i.e. inserting any word leads to a superword. This is not true for the block ordering, as we saw in <ref>, (<ref>). However, one of our key observations about the block order is the following property: If the word we insert is just a repetition of an existing factor, then this yields a larger word in the block ordering. This will be crucial for our downward closure construction for one-counter automata in <ref>. [Pumping Lemma]lemmaarbitrarypumpinglemma For any u,v,w∈Σ^*, we have uvw uvvw. Before we prove <ref>, let us note that by applying <ref> multiple times, this implies that we can also repeat multiple factors. For instance, if w=w_1w_2w_3w_4w_5, then w w_1w_2^2w_3w_4^3w_5. <Ref> shows an example on how to choose the witness block map. We proceed by induction on the number of priorities. If there is just a single priority (i.e. ={0}), then coincides with and the statement is trivial. Let us assume the generalizedblockrepeat is established for words with up to n priorities. We distinguish two cases. * Suppose v contains only letters of priorities [0,n]. Then repeating v means repeating a factor inside a sub-(n+1) block, which is a word with priorities in [0,n]. Hence, the statement follows by induction: Formally, this means we can use the embedding mapping that sends block i of uvw to block i of uvvw. * Suppose v contains a letter of priority n+1. write v=v_0x_1v_1⋯ x_mv_m, where x_1,…,x_m are the letters of priority n+1 in v and v_0,…,v_m are the sub-(n+1) blocks of v. Then: uvw =uv_0x_1⋯ v_m-1x_mv_mw, uvvw =uv_0x_1⋯ v_m-1x_mv_mv_0x_1⋯ v_m-1x_m_skippedv_mw. The idea is simple: Our witness block map just skips the m sub-(n+1) blocks inside of v_mv_0x_1⋯ v_m-1x_m. 
Thus, the sub-(n+1) blocks in uv_0x_1⋯ v_m-1x_m are mapped to the same blocks in uv_0x_1⋯ v_m-1x_m, and the sub-(n+1) blocks in v_mw are mapped to the same blocks in v_mw. This is clearly a valid witness block map, since the first (resp. last) sub-(n+1) block is mapped to the first (resp. last), and each sub-(n+1) block is mapped to an identical sub-(n+1) block. Regular downward closures As for and , we define L={u∈Σ^* |∃ v∈ L u v} for any L⊆Σ^*. lemmalemgeneralizedblockregular For every L⊆Σ^*, L is a regular language. For the proof of <ref>, one can argue as mentioned above: The complement Σ^*∖ (L) of L is upward closed. And since is a WQO, Σ^*∖ (L) has finitely many minimal elements. It thus remains to show that for each word w∈Σ^*, the set of words -larger than w is regular, which is a simple exercise. Details can be found in <ref>. Block order vs. priority order We will later see (<ref>) that under mild conditions, computing priority downward closures reduces to computing block downward closures. The following generalizedBlockFiner is the main technical ingredient in this: It shows that the block order refines the priority order on words that end in the same letter, assuming the alphabet has a certain shape. A priority alphabet (Σ,) with =[1,d] is called flat if |i|=1 for each i∈ [1,d]. If is flat and u,v∈Σ^*a for some a∈, then u v implies u v. Since u v, there exists a witness position mapping ρ that maps the positions of the letters in u to that of v, such that it respects the block order, and it maps the last position of u to the last of v. Let u=u_0u_1⋯ u_k. We say that a position mapping violates the priority order at position i (for i∈ [0,k-1]), if v[ρ(i)+1, ρ(i+1)] has a letter of priority higher than that of u[i+1]. Note that if ρ does not violate the priority order at any position, then u v. Let i be the largest position at which ρ violates the priority order, i.e. v[ρ(i)+1, ρ(i+1)] has a letter of priority higher than that of u[i+1]. We show that if ρ respects the block order till position i, there exists another witness position mapping ρ' that respects the block order till position i-1, and has one few position of violation (i.e. no violation at position i). We first observe that u[i]>u[i+1], which holds since ρ respects the block order till position i, implying that v[ρ(i)+1, ρ(i+1)] does not have a letter of priority higher than min{u[i], u[i+1]}, and if u[i]≤ u[i+1], ρ does not violate the priority order at i. Then observe that v[ρ(i)+1, ρ(i+1)] does not have a letter with priority p, where u[i]>p> u[i+1], otherwise the sub-u[i] block of u immediately after u[i], can not be embedded to that of v immediately after v[ρ(i)], since it would have to be split along p, and the first sub-p block in v will not be mapped to any in u. Then v[ρ(i)+1, ρ(i+1)] has letter of priority u[i] (for a violation at i). Then consider the mapping ρ' that maps i to the last u[i] letter in v[ρ(i)+1, ρ(i+1)] (say at v[j] for some j, ρ(i)+1≤ j≤ρ(i+1)). This mapping respects the block order till position i-1, trivially, as we do not change the mapping before i. We show that there is no priority order violation at position i. This holds because the only larger priority letter occurring in v[ρ(i)+1, ρ(i+1)] was u[i], and due to the definition of ρ', v[ρ'(i)+1, ρ'(i+1)] has no letter of priority higher than u[i+1]. Since we do not change the mapping after position i, ρ' does not introduce a violation at any position after i. 
Hence we have a new position mapping that has one few position of priority order violation. We want to stress that the flatness assumption in <ref> is crucial: Consider the alphabet from the <ref>. Then 1^a0^a 1^a1^b0^a, but 1^a0^a1^a1^b0^a. Here only one position mapping exists, and it is not possible to remap 1^a to 1^b since they are two distinct letters of same priority. Hence, we need to assume that each priority greater than zero has at most one letter. § REGULAR LANGUAGES In this section, we show how to construct an NFA for the block downward closure of a regular language. To this end, we show that both orders are rational transductions. Rational transductions A finite state transducer is a tuple =(Q,X,Y,E,q_0,F), where Q is a finite set of states, X and Y are input and output alphabets, respectively, E is the set of edges i.e. finite subset of Q× X^*× Y^*× Q, q_0∈ Q is the initial state, and F⊆ Q is the set of final states. A configuration of is a triple (q,u,v)∈ Q× X^*× Y^*. We write (q,u,v)→_ (q',u',v'), if there is an edge (q,x,y,q') with u'=ux and v'=vy. If there is an edge (q,x,y,q'), we sometimes denote this fact by q_ q', and say “read x at q, output y, and goto q'”. The size of a transducer, denoted by ||, is the number of its states. A transduction is a subset of X^*× Y^* for some finite alphabets X,Y. The transduction defined by is () = { (u,v)∈ X^*× Y^* | (q_0,ϵ,ϵ)→_^* (f,u,v) for some f∈ F }. A transduction is called rational if it is defined by some finite-state transducer. Sometimes we abuse the notation and output a regular language R⊆ Y^* on an edge, instead of a letter. It should be noted that this abuse is equivalent to original definition of finite state transducers. We say that a language class is closed under rational transductions if for each language L ∈, and each rational transduction R⊆ X^*× Y^*, the language obtained by applying the transduction R to L, RL def={ v∈ Y^* | (u,v)∈ R for some u∈ L } also belongs to . We call such language classes full trio . Regular languages, context-free languages, recursively enumerable languages are some examples of full trios <cit.>. Transducers for orders It is well-known that the subword order is a rational transduction, i.e. the relation T = { (u,v)∈ X^*× X^* | v u } is defined by a finite-state transducer. For example, it can be defined by a one-state transducer that can non-deterministically decide to output or drop each letter. Note that on applying the transduction to any language, it gives the subword downward closure of the language. This means, for every L⊆ X^*, we have TL=L. We will now describe analogous transducers for the priority and block order. theoremsizeprioritytrans Given a priority alphabet with priorities [0,k], one can construct in polynomial time a transducer for and a transducer for , each of size (k). The transducers for the block and priority order are similar. Intuitively, both remember the maximum of the priorities dropped or to be dropped, and keep or drop the coming letters accordingly. We show the transducer for the priority order here since it is applied in <ref>. The transducer for the block order is detailed in <ref>. Let be a finite alphabet, with priorities = [0,k]. Consider the transducer that has one state for every priority, a non-final sink state, and a distinguished final state. 
If the transducer is in the state for priority r and reads a letter a of priority s, then * if s<r, then it outputs nothing and stays in state r, * if s≥ r, then it can output nothing, and go to state s, * if s≥ r, it can also output a, and go to state 0, or the accepting state non-deterministically, * for any other scenario, goes to the sink state. The priority 0 state is the initial state. Intuitively, the transducer remembers the largest priority letter that has been dropped, and keeps only a letter of higher priority later. To be accepting, it has to read the last letter to go to the accepting final state. The following theorem states that the class of regular languages form a full trio. Given an and a transducer , we can construct in polynomial time an of size ||·|| for ()(()). <Ref> give us a polynomial size recognizing the priority and block downward closure of a regular language, which is computable in polynomial time as well. Priority and block downward closures for regular languages are effectively computable in time polynomial in the number of states in the recognizing the language. <ref> now allow us to reduce the priority downward closure computability to computability for block order. If is a full trio and we can effectively compute block downward closures for , then we can effectively compute priority downward closures. The key idea is to reduce priority downward closure computation to the setting where (i) all words end in the same letter and (ii) the alphabet is flat. Since by <ref>, on those languages, the block order is finer than the priority order, computing the block order will essentially be sufficient. Let us first establish (i). Let L∈. Then for each a∈, the language L_a=L∩^*a belongs to . Since L=⋃_a∈ L_a∪ E and thus L=⋃_a∈ L_a∪ E, it suffices to compute priority downward closures for each L_a, where E = {ϵ} if ϵ∈ L, else ∅. This means, it suffices to compute priority downward closures for languages where all words end in the same letter. To achieve (ii), we make the alphabet flat. We say that (,') is the flattening of ( ,= [0,d]), if ' is obtained by choosing a total order to Σ such that if a has smaller priority than b in (,), then a has smaller priority than b in (,'). (In other words, we pick an arbitrary linearization of the quasi-order on Σ that expresses “has smaller priority than”). Then, we assign priorities based on this total ordering. Let [𝖿𝗅𝖺𝗍] and [𝖿𝗅𝖺𝗍] denote the block order and priority order, resp., based on the flat priority assignment. It is a simple observation that for u,v∈Σ^*, we have that u[𝖿𝗅𝖺𝗍] v implies u v. Now observe that for u,v∈ L_a, <ref> tells us that u[𝖿𝗅𝖺𝗍] v implies u[𝖿𝗅𝖺𝗍] v and therefore also u v. This implies that (L_a[𝖿𝗅𝖺𝗍])=L_a. By assumption, we can compute a finite automaton with ()=L_a[𝖿𝗅𝖺𝗍]. Since then ()=(L_a[𝖿𝗅𝖺𝗍])=L_a, we can compute L_a by applying <ref> to to compute ()=L_a. § ONE-COUNTER LANGUAGES In this section, we show that for the class of languages accepted by one-counter automata, which form a full-trio <cit.>, the block and priority downward closures can be computed in polynomial time. We prove the following theorem. Given an OCA , () and () are computable in polynomial time. Here, the difficulty is that existing downward closure constructions exploit that inserting any letters in a word yields a super-word. However, for the block order, this might not be true: Introducing high-priority letters might split a block unintentionally. 
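Before moving on to one-counter languages, here is a small Python sketch of the priority-order transducer described at the beginning of this section. It simply enumerates, for a single input word, all outputs permitted by the bulleted rules above (states are the priorities together with a distinguished accepting state; the rejecting sink state is implicit). This is only an illustration of the transition rules, not of the full product construction with an NFA, and corner cases such as producing the empty word may be handled differently in the actual construction; all names are ours.

```python
def priority_transducer_outputs(word):
    """Enumerate the outputs of the priority-order transducer on `word`.

    `word` is a list of (letter, priority) pairs.  A state is either the
    largest priority dropped since the last kept letter, or the distinguished
    accepting state "acc"; the sink state is implicit (we simply stop).
    """
    results = set()

    def explore(i, state, out):
        if i == len(word):
            if state == "acc":                  # only the accepting state is final
                results.add("".join(out))
            return
        if state == "acc":                      # no moves leave the accepting state
            return
        a, s = word[i]
        r = state
        if s < r:
            explore(i + 1, r, out)              # drop a, keep remembering r
        else:                                   # s >= r
            explore(i + 1, s, out)              # drop a, remember its priority s
            explore(i + 1, 0, out + [a])        # output a, reset to priority 0
            explore(i + 1, "acc", out + [a])    # output a and guess it was the last letter

    explore(0, 0, [])
    return results


# Letters "a" (priority 0) and "b" (priority 1): the outputs are {"b", "ab"},
# which matches, up to the empty word, the words priority-smaller than "ab".
print(priority_transducer_outputs([("a", 0), ("b", 1)]))
```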
However, we observe that the subword closure construction from <cit.> can be modified so that when constructing larger runs (to show that our NFA only accepts words in the downward closure), we only repeat existing factors. <Ref> then yields that the resulting word is block-larger. According to <ref>, it suffices to show that block downward closures are computable in polynomial time (an inspection of the proof of <ref> shows that computing the priority downward closure only incurs a polynomial overhead). One-counter automata. One-counter automata are finite state automata with a counter that can be incremented, decremented, or tested for zero. Formally, a one-counter automaton (OCA) is a 5-tuple (Q,Σ,δ,q_0,F) where Q is a finite set of states, q_0∈ Q is an initial state, F⊆ Q is a set of final states, Σ is a finite alphabet and δ⊆ Q×(Σ∪{ϵ} )×{-1,0,+1,z }× Q is a set of transitions. Transitions (p_1,a,s,p_2)∈δ are classified as incrementing (s=+1), decrementing (s=-1), internal (s=0), or test for zero(s=z). A configuration of an OCA is a pair that consists of a state and a (non-negative) counter value, i.e., (q,n)∈ Q×. A sequence π= (p_0,c_0),t_1,(p_1,c_1),t_2,⋯, t_m,(p_m,c_m) where (p_i,c_i)∈ Q×, t_i∈δ and (p_i-1,c_i-1) (p_i,c_i) is called: * a quasi-run, denoted π=(p_0,c_0)w_(p_m,c_m), if none of t_i is a test for zero; * a run, denoted π=(p_0,c_0)_(p_m,c_m), if all (p_i,c_i)∈ Q×. For any quasi-run π as above, the sequence of transitions t_1,⋯,t_m is called a walk from the state p_0 to the state p_m. A run (p_0,c_0)(p_m,c_m) is called accepting in if (p_0,c_0)=(q_0,0) where q_0 is the initial state of and p_m is a final state of , i.e. p_m∈ F. In such a case, the word w is accepted by . Simple one-counter automata As we will show later, computing block downward closures of OCA easily reduces to the case of simple OCA. A simple OCA (SOCA) is defined analogously to OCA, with the differences that (i) there are no zero tests, (ii) there is only one final state, (iii) for acceptance, the final counter value must be zero. We first show that the block downward closures can be effectively computed for the simple one-counter automata languages. propositionblockdownwardsimpleOCA Given a simple OCA , we can compute () in polynomial time. We present a rough sketch of the construction, full details can be found in <ref>. The starting point of the construction is the one for subwords in <cit.>, but the latter needs to be modified in a non-obvious way using <ref>. Let =(Q,Σ,δ,q_0,q_f) be a simple OCA, with |Q|=K. We construct an that can simulate in three different modes. In the first mode, it simulates until the counter value reaches K, and when the value reaches K+1, it switches to the second mode. The second mode simulates while the counter value stays below K^2+K+1. Moreover, and this is where our construction differs from <cit.>: if is in the second mode simulating in some state q, then can spontaneously execute a loop from q to q of while ignoring its counter updates. When the counter value in the second mode drops to K again, non-deterministically switches to the third mode to simulate while the counter value stays below K. Thus, only needs to track counter values in [0,K^2+K+1], meaning they can be stored in its state. We claim that then ()⊆()⊆(). lemmaocacontainsoriginaloca ⊆. If a word in () has a run with counters bounded by K^2+K+1, then it trivially belongs to (). 
If the counters go beyond K^2+K+1, then with the classical “unpumping” argument, one can extract two loops, one increasing the counter, one decreasing it. These loops can then be simulated by the spontaneous loops in the second mode of . The more interesting inclusion is the following: lemmaocadownwardclosurecontainsoca ⊆. We have to show that each spontaneous loop in can be justified by padding the run with further loop executions so as to obtain a run of . This is possible because to execute such a spontaneous loop, we must have gone beyond K and later go to zero again. Thus, there exists a “pumping up” loop adding, say k≥ 0 to the counter, and a “pumping down” loop, subtracting, say ℓ≥ 0 from the counter. We can therefore repeat all spontaneous loops so often that their effect — when seen as transitions in — is a (positive or negative) multiple M of k·ℓ. Then, we execute the k- and the ℓ-loop so often so as to get the counter values so high that (i) our repeated spontaneous loops never cross zero and (ii) the effect difference of the new loops is exactly M. Since in our construction (in contrast to <cit.>), the padding only repeated words that already exist in the run of , <ref> implies that the word of embeds via the block order. General OCA Let us now show how to construct the block downward closure of general OCAs. Suppose we are given an OCA . For any two states p,q, consider the simple OCA _p,q obtained from by removing all zero tests, making p initial, and q final. Then () is the set of words read from (p,0) to (q,0) without using zero tests. We now compute for each p,q a finite automaton _p,q for the block downward closure of _p,q. Clearly, we may assume that _p,q has exactly one initial state and one final state. Finally, we obtain the finite automaton from as follows: We remove all transitions except the zero tests. Each zero test from p to q is replaced with an edge pq. Moreover, for any states p and q coming from , we glue in the automaton _p,q (by connecting p with _p,q's initial state and connecting _p,q's final state with q). Then, since the block order is multiplicative, we have that L() accepts exactly the block downward closure of . Futhermore, note that since our construction for simple OCA is polynomial, the general case is as well: The latter employs the former to |Q|^2 simple OCAs. § CONTEXT-FREE LANGUAGES The key trick in our construction for OCA was that we could modify the subword construction so that the overapproximating NFA has the property that in any word from (), we can repeat factors to obtain a word from . This was possible because in an OCA, essentially any pair of loops—one incrementing, one decrementing—could be repeated to pad a run. However, in context-free languages, the situation is more complicated. With a stack, any pumping must always ensure that stack contents match: It is not possible to compensate stack effects with just two loops. In terms of grammars, the core idea for subword closures of context-free languages L is usually to overapproximate “pump-like” derivations X uXv by observing that—up to subwords—they can generate any u'Xv' where the letters of u' can occur on the left and the letters of v' can occur on the right in derivations X· X·. Showing that all such words belong to the downward closure leads to derivations X u”v̅Xv”u̅, where u”,v” are super-words of u',v' such that X u”Xu̅ and Xv̅Xv” can be derived. The additional infixes could introduce high priority letters and thus split blocks unintentionally. 
Therefore, we provide a novel recursive approach to compute the block downward closure by decomposing derivations at high-priority letters. This is non-trivial as this decomposition might not match the decomposition given by derivation trees. Formally, we show: Given a context-free language L⊆n^*, one can construct a doubly-exponential-sized automaton for L, and thus also for L. We do not know if this doubly exponential upper bound is optimal. A singly-exponential lower bound follows from the subword case: It is known that subword downward closures of context-free languages can require exponentially many states <cit.>. However, it is not clear whether for priority or block downward closures, there is a singly-exponential construction. We again note that <ref> (and its proof) imply that for <ref>, it suffices to compute a finite automaton for the block downward closure of the context-free language: Computing the priority downward closure then only increases the size polynomially. Grammars We present the construction using context-free grammars, which are tuples =(N,T,P,S), where N is a finite set of non-terminal letters, T is a finite set of terminal letters, P is a finite set of productions of the form X→ w with X∈ N and w∈ (N∪ T)^*, and S is the start symbol. For u,v∈ (N∪ T)^*, we have u v if there is a production X→ w in P and x,y∈ (N∪ T)^* with u=xXy and v=xwy. The language generated by , is then ():={w∈ T^* | S w}, where is the reflexive, transitive closure of . Assumption on the alphabet In order to compute block downward closures, it suffices to do this for flat alphabets (see <ref>). The argument is essentially the same as in <ref>: By flattening the alphabet as in the proof of <ref>, we obtain a finer block order, so that first computing an automaton for the flat alphabet and then applying <ref> to the resulting finite automaton will yield a finite automaton for the original (non-flat) alphabet. In the following, we will assume that the input grammar is in Chomsky normal form, meaning every production is of the form X→ YZ for non-terminals X,Y,Z, or of the form X→ a for a non-terminal X and a terminal a. Kleene grammars Suppose we are given a context-free grammar =(N,Σ,P,S). Roughly speaking, the idea is to construct another grammar ' whose language has the same block downward closure as (), but with the additional property that every word can be generated using a derivation tree that is acyclic, meaning that each path contains every non-terminal at most once. Of course, if this were literally true, ' would generate a finite language. Therefore, we allow a slightly expanded syntax: We allow Kleene stars in context-free productions. This means, we allow right-hand sides to contain occurrences of B^*, where B is a non-terminal. The semantics is the obvious one: When applying such a rule, then instead of inserting B^*, we can generate any B^k with k≥ 0. We call grammars with such productions Kleene grammar. A derivation tree in a Kleene grammar is defined as for context-free grammars, aside from the expected modification: If some B^* occurs on a right-hand side, then we allow any (finite) number of B-labeled children in the respective place. Then indeed, a Kleene grammar can generate infinite sets using acyclic derivation trees. Given a Kleene grammar , let () be the set of words generated by using acyclic derivation trees. Given a Kleene grammar , one can construct an exponential-sized finite automaton accepting (). 
The automaton simulates a (say, preorder) traversal of an acyclic derivation tree of . This means, its state holds the path to the currently visited node in the derivation tree. Since every path has length at most |N|, where N is the set of non-terminals of , the automaton has at most exponentially many states. Given <ref>, for <ref>, it suffices to construct a Kleene grammar ' of exponential size such that (')=(). Normal form and grammar size We will ensure that in the constructed grammars, the productions are of the form (i) X→ w, where w is a word of length ≤ 3 and consisting of non-terminals Y or Kleene stars Y^* or (ii) X→ a where a is a terminal. This means, the total size of the grammar is always polynomial in the number of non-terminals. Therefore, to analyze the complexity, it will suffice to measure the number of non-terminals. Highest occurring priorities Similar to classical downward closure constructions for context-free languages, we want to overapproximate the set of words generated by “pump derivations” of the form X uXv. Since we are dealing with priorities, we first partition the set of such derivations according to the highest occurring priorities, on the left and on the right. Thus, for r,s∈[0,p], we will consider all derivations X uXv where r is the highest occurring priority in u and s is the highest occurring priority in v. To ease notation, we define r to be the set of words in r^* in which r is the highest occurring priority. Since r=r, we will write r to remind us that this is not an alphabet. Notice that for r∈[1,p], we have r=r^*rr^* and 0=0^*. Language of ends In order to perform an inductive construction, we need a way to transform pairs (u,v)∈r×s into words over an alphabet with fewer priorities. Part of this will be achieved by the end maps _r(·) and _s(·) as follows. Let Σ̂ be the priority alphabet obtained from Σ by adding the letters #, , and as letters with priority zero. Now for r∈[1,p], the function _rr→Σ̂_≤ r-1^* is defined as: _r(w) = u v, where w=urx_1r⋯ x_nrv for some n≥ 0, u,v,x_1,…,x_n∈r-1^*. Thus, _r(w) is obtained from w by replacing the largest possible infix surrounded by r with . For r=0, it will be convenient to have the constant function _00→{}. Analogously, we define for s∈[1,p] the function _ss→Σ̂_≤ s-1^* by _s(w)=u v, where w=usx_1s⋯ x_nsv for some n≥ 0, u,v,x_1,…,x_n∈s-1^*. Moreover, we also set _00→{} to be the constant function yielding . In particular, for r,s∈[1,p], we have _r(w),_s(w)∈Σ̂_≤ p-1 and thus we have reduced the number of priorities. Now consider for r,s∈[0,p] the language E_X,r,s = {_r(u)#_s(v) | X uXv, u∈r^*rr^*, v∈s^*ss^* }. For the language E_X,r,s, it is easy to construct a context-free grammar: lemmatransformEnd Given , a non-terminal X, and r,s∈[0,p], one can construct a grammar _X,r,s for E_X,r,s of linear size. Defining the sets E_X,r,s with fresh zero-priority letters #, , is a key trick in our construction: Note that each word in E_X,r,s is of the form u v# w x for u,v,w,x∈p-1^*. The segments u,v,w,x come from different blocks of the entire generated word, so applying the block downward closure construction recursively to E_X,r,s must guarantee that these segments embed as if they were blocks. However, there are only a bounded number of segments. Thus, we can reduce the number of priorities while retaining the block behavior by using fresh zero-priority letters. 
This is formalized in the following embedding-fresh-letter: lemmaembeddingFreshLetter For u,u',v,v'∈p^*, we have u#v u'#v' iff both (i) u u' and (ii) v v'. Language of repeated words Roughly speaking, the language E_X,r,s captures the “ends” of words derived in derivations X uXv with u∈r and v∈s: On the left, it keeps everything that is not between two occurrences of r and on the right, it keeps everything not between two occurrences of s. We now need languages that capture the infixes that can occur between r's and s's, respectively. Intuitively, these are the words that can occur again and again in words derived from X. There is a “left version” and a “right version”. We set for r,s∈[1,p]: _X,r,s = {yr | y∈r-1^*, ∃ x,z∈r^*, v∈s X xryrzXv } _X,r,s = {ys | y∈s-1^*, ∃ u∈r, x,z∈r^* X uXxsysz }. The case where one side has highest priority zero must be treated slightly differently: There are no enveloping occurrences of some r,s∈[1,p]. However, we can overapproximate those words by the set of all words over a particular alphabet. Specifically, for r,s∈[0,p], we set _X,0,s = {a∈0|∃ u∈0, v∈s X uXv, a occurs in u} _X,r,0 = {a∈0|∃ u∈r, v∈0 X uXv, a occurs in v} lemmatransformRepeat Given , a non-terminal X, and r,s∈[0,p], one can construct grammars _X,r,s, _X,r,s for _X,r,s,_X,r,s, respectively, of linear size. Overapproximating derivable words The languages E_X,r,s and _X,r,s and _X,r,s now serve to define overapproximations of the set of (u,v)∈r×s with X uXv: One can obtain each such pair by taking a word from E_X,r,s, replacing and , resp., by words in r_X,r,s^* (_X,0,s^* if r=0) and s_X,r,s^* (_X,r,0^* if s=0), respectively. By choosing the right words from E_X,r,s, _X,r,s, and _X,r,s, we can thus obtain u# v. However, this process will also yield other words that cannot be derived. However, the key idea in our construction is that every word obtainable in this way from E_X,r,s, _X,r,s, and _X,r,s will be in the block downward closure of a pair of words derivable using X· X·. Let us make this precise. To describe the set of words obtained from E_X,r,s, _X,r,s, and _X,r,s, we need the notion of a substitution. For alphabets Γ_1,Γ_2, a substitution is a map σΓ_1→2^Γ_2^* that yields a language in Γ_2 for each letter in Γ_1. Given a word w=w_1⋯ w_n with w_1,… w_n∈Γ_1, we define σ(w):=σ(w_1)⋯σ(w_n). Then for K⊆Γ_1^*, we set σ(K)=⋃_w∈ Kσ(w). Now let Σ_X,r,sΣ̂_≤ p→ 2^Σ̂_≤ p^* be the substitution that maps every letter in p∪{#} to itself (as a singleton) and maps to r_X,r,s^* and to s_X,r,s^*. Now our observation from the previous paragraph can be phrased as: lemmagrammarCorrectness For every u#v∈Σ_X,r,s(E_X,r,s), there are u'∈r and v'∈s with u u', v v', and X u'Xv'. Constructing the Kleene grammar We now construct the Kleene grammar for () by first computing the grammars _X,r,s, _X,r,s, and _X,r,s for each non-terminal X and each r,s∈[1,p]. Then, since _X,r,s, _X,r,s, and _X,r,s generate languages with at most p-1 priorities, we can call our construction recursively to obtain grammars '_X,r,s, '_X,r,s, and '_X,r,s, respectively. Then, we add all productions of the grammars '_X,r,s, '_X,r,s, and '_X,r,s to '. Moreover, we make the following modifications: Each production of the form Y→ (resp. Y→) in _X,r,s is replaced with Y→ Z_rS_X,r,s^* (resp.Y→ Z_sS_X,r,s^*), where S_X,r,s (resp.S_X,r,s) is the start symbol of '_X,r,s (resp. '_X,r,s), and Z_r is a fresh non-terminal used to derive r or ε: We also have Z_r→ r for each r∈[1,p] and Z_0→ε. 
Moreover, each production Y→# in '_X is removed and replaced with a production Y→ w for each production X→ w in the original grammar. We call the resulting grammar '. Correctness Let us now observe that the grammar ' does indeed satisfy (')=(). The inclusion “⊇” is trivial, as ' is obtained by adding productions. For the converse, we need some terminology. We say that a derivation tree t_1 in ' is obtained using an expansion step from t_0 if we take an X-labeled node x in t_0, where X is a non-terminal of the original grammar, and replace this node by a derivation X uwv using newly added productions (i.e. using _X,r,s, _X,r,s, and _X,r,s and some Y→ w where X→ w was the production applied to x in t_0). Then by construction of ', any derivation in ' can be obtained from a derivation in the original grammar by finitely many expansion steps. An induction on the number of expansion steps shows: lemmacflCorrectness We have (')=(). Acyclic derivations suffice Now that we have the grammar ' with (')=(), it remains to show that every word in ' can be derived using an acyclic derivation: lemmacflAcyclicity (')=(). Essentially, this is due to the fact that any repetition of a non-terminal X on some path means that we can replace a corresponding derivation X uXv by using new productions from '_X,r,s, '_X,r,s, and '_X,r,s. Since these also have the property that every derivation can be made acyclic, the lemma follows. See <ref> for details. Complexity analysis To estimate the size of the constructed grammar, let f_p(n) be the maximal number of non-terminals of a constructed Kleene grammar for an input grammar with n non-terminals over p priorities. By <ref>, there is a constant c such that each grammar _X, _X, and _X has at most cn non-terminals. Furthermore, ' is obtained by applying our construction to 3n(p+1)^2 grammars with p-1 priorities of size cn, and adding Z_p. Thus f_p(n) ≤ n + 3n(p+1)^2 f_{p-1}(cn) + 1. Since f_{p-1}(n) ≥ 1, we can simplify this to f_p(n) ≤ 4n(p+1)^2 f_{p-1}(cn). It is easy to check that f_0(n) ≤ 4n+1 ≤ 5n, because _X,0,0 and _X,0,0 and _X,0,0 each have only one non-terminal. Hence f_p(n) ≤ (4n(p+1)^2)^p f_0(c^p n) ≤ (4n(p+1)^2)^p · 5c^p n, which is exponential in the size of the input grammar. § CONCLUSION We have initiated the study of computing priority and block downward closures for infinite-state systems. We have shown that for OCA, both closures can be computed in polynomial time. For CFL, we have provided a doubly exponential construction. Many questions remain. First, we leave open whether the doubly exponential bound for context-free languages can be improved to exponential. An exponential lower bound is easily inherited from the exponential lower bound for subwords <cit.>. Moreover, it is an intriguing question whether the computability of subword downward closures for vector addition systems <cit.>, higher-order pushdown automata <cit.>, and higher-order recursion schemes <cit.> can be strengthened to block and priority downward closures. § MISSING PROOFS FROM <REF> Before we prove <ref>, we recall some well-known facts that are used in the proof. WQOs are preserved under many operations on quasi-ordered sets. We mention some of these operations below; they will be used later in the paper to establish the WQO property. The reader is referred to Halfon's thesis <cit.> for a well-informed survey of such results. Given two WQOs (A,≤_1) and (B,≤_2), their product (A×B, ≤_1×≤_2) is also a WQO. Here, (a_1,b_1) ≤_1×≤_2 (a_2,b_2) if a_1 ≤_1 a_2 and b_1 ≤_2 b_2. (X^*,≤_*) is a WQO if (X,≤) is a WQO.
We say a_1a_2⋯ a_k ≤_* b_1b_2⋯ b_l if there is a strictly monotonically increasing map ϕ:[1,k]→ [1,l] such that ∀ i∈ [1,k], a_i≤ b_ϕ(i). Let (X,≤_1) and (Y,≤_2) be two quasi orders, and h:X→ Y be a monomorphism. If (Y,≤_2) is a then (X,≤_1) is a . We prove that the block order is a WQO. We restate <ref> for convenience of the reader. * We will prove the lemma by induction on the size of . Firstly, we note that generalized block order and subword order coincide for singleton priority set i.e. u v u v, by definition. Since (Σ^*,) is a , this gives us the base case, i.e. if is singleton, then (Σ^*,) is a . Now, for the induction hypothesis, assume that the lemma is true for =[0,p-1]. We show that the lemma holds for =[0,p]. Since (A,=) is a for any finite A, by Dickson's lemma, (Σ_p-1^*× A_p, × =) is a . Then, by Higman's lemma, ((Σ_p-1^*× A_p)^*, (× =)_*) is a . Again, by Dickson's lemma, ((Σ_p-1^*× A_p)×(Σ_p-1^*× A_p)^*×Σ_p-1^*, (× =)×(× =)_*×) is a . Now, consider the function h: (Σ_p^*,) → ((Σ_p-1^*× A_p)×(Σ_p-1^*× A_p)^*×Σ_p-1^*, (× =)×(× =)_*×) defined as, u_0y_0u_1y_1u_2y_2⋯ y_k-1u_k↦ ((u_0,y_0),(u_1,y_1),(u_2,y_2),… ,(u_k-1,y_k-1), u_k), where u_is are sub-p blocks. It is easy to see that h is a monomorphism. Then, by the monomorphism lemma, we get that (Σ_p^*,) is a . We now introduce the notion of upward closed sets, which allows us to prove the <ref>. Upward closure. Upward closure is the dual of downward closure. Given a set S with a partial order ◃, the ◃-upward closure of L⊆ S, denoted by L↑_◃, is the set of elements of S which are larger ◃ than some element in L, i.e. L↑_◃ = { u∈ S | ∃ v such that v ◃ u } A subset L of S is called ◃-upward closed if L=L↑_◃. The subword upward closure and block upward closure are defined by taking the set of finite words Σ^* with the partial orders and , respectively. For L⊆Σ^*, L↑ = { u∈ S | ∃ v such that v u } L⇑ = { u∈ S | ∃ v such that v u } It is easy to see that the complement of a downward closed set is an upward closed set and vice versa, hence they are dual of each other. The following theorem characterizes regular sets using upward closure. A set S is regular iff it is the upward closure of some multiplicative . We restate the <ref> and <ref> below. * The proof for priority downward closure is analogous for that of block order, as shown below. * Since the complement of the downward closed set is an upward closed set, regular languages are closed under complementation, and block order is a multiplicative WQO (<ref>), the proof of the lemma is a simple corollary of <ref>. § MISSING PROOFS FROM <REF> * For the block order consider the transducer that has one state for every priority and a sink state, and for every state it reads a letter, and * if the letter has lower or equal priority as the state, does not output it, and stays there, * if the letter has equal priority, outputs it, and goes to state 0, and * for other scenarios, goes to the sink state. At state 0, if the letter is output, then it stays at 0, else goes to the state with priority of the letter. This intuitively allows dropping whole sub-i blocks until priority i is output again. The construction exploits the fact that between two consecutive letters which are not dropped, no bigger priority letter is dropped. Similarly, for priority order, we have the same state space, along with another accepting state. 
* if the letter has strictly lower priority than the state, then does not output it, and stays there, * if the letter has same or higher priority, does not output it, and goes to the state with priority of the letter, * if the letter has same or higher priority, outputs it, and goes to the state with priority 0, or the accepting state non-deterministically. The priority 0 state is the initial state, and the new accepting state is the final state. Intuitively, the transducer remembers the largest priority letter that has been dropped, and keeps only a letter of higher priority later. To be accepting, it has to read the last letter to go to the accepting final state. § MISSING PROOFS FROM <REF> We restate the <ref> below and prove it formally. * In this proof we use the shorthand [n] for [0,n]= {0,…, n}, and (n) for [1,n]={1,…,n}. We describe the construction of the intermediate formally. Let U≥ K^2+K+1. =(Q_1∪ Q_2∪ Q_3,Σ,Δ,q_0',F') where Q_1=Q×[K]×{1}, Q_2=(Q×[U]×{2})∪ (Q× Q) and Q_3=Q×[K]×{3}. We let q_0'=(q_0,0,1) and F'={(q_f,0,1),(q_f,0,3) }. The transition relation is the union of the relations Δ_1, Δ_2 and Δ_3 defined as follows: Transitions in Δ_1: * (q,n,1)(q',n,1) for all n∈[K] whenever (q,a,i,q')∈δ. Simulate an internal move. * (q,n,1)(q',n-1,1) for all n∈(K) whenever (q,a,-1,q')∈δ. Simulate a decrement. * (q,n,1)(q',n+1,1) for all n∈[K-1] whenever (q,a,+1,q')∈δ. Simulate an increment. * (q,K,1)(q',K+1,2) whenever (q,a,+1,q')∈δ. Simulate an increment and shift to second phase. Transitions in Δ_2: * (q,n,2)(q',n,2) for all n∈[U] whenever (q,a,i,q')∈δ. Simulate an internal move. * (q,n,2)(q',n-1,2) for all n∈(U) whenever (q,a,-1,q')∈δ. Simulate a decrement. * (q,n,2)(q',n+1,2) for all n∈[U-1] whenever (q,a,+1,q')∈δ. Simulate an increment. * (q,K+1,1)(q',K,3) whenever (q,a,+1,q')∈δ. Simulate an decrement and shift to third phase. * (q,n,2)(q,q). Start simulating OCA as an NFA. * (q,q)(q,n,2). Stop simulating OCA as an NFA. * (q_1,q_2)(q_1',q_2) for all a∈Σ and q_2∈ Q whenever (q_1,a,x,q_1')∈δ for some x∈{+1,-1,0,z }. Simulate OCA as an starting from q_2. Transitions in Δ_3: * (q,n,3)(q',n,3) for all n∈[K] whenever (q,a,i,q')∈δ. Simulate an internal move. * (q,n,3)(q',n-1,3) for all n∈(K) whenever (q,a,-1,q')∈δ. Simulate a decrement. * (q,n,3)(q',n+1,3) for all n∈[K-1] whenever (q,a,+1,q')∈δ. Simulate an increment. * Let w∈. Then there is a run ρ in on w, which can be partitioned as follows, ρ = (q_0,0)(q_1,s)(q_2,t)(f,0) where ρ_1= (q_0,0)(q_1,s) is the longest prefix such that the counter value stays below K, and ρ_3=(q_2,t)(f,0) is the longest suffix disjoint from ρ_1 such that the counter value stays below K, and ρ_2=(q_1,s)(q_2,t). Since Q_1 and Q_2 can simulate by keeping track of counter values below K, we know that there are runs ρ'_1= (q_0,0,1)(q_1,s,1) and ρ'_3=(q_2,t,1)(q_3,0,1) in . We also observe that if the counter value does not go above K in ρ, then ρ_2 and ρ_3 are empty, and ρ=(q_0,0)(f,0). So (q_0,0,1)(f,0,1) is a valid and accepting run in . So now suppose ρ exceeds K in the counter. Now if the counter value stays below U, then ρ_2 can be simulated by Q_2 and it's transitions, and ρ'= (q_0,0,1)(q_1,K,1)(q_2,K,1)(f,0,1) is a valid run in . Let the maximum counter value reached in the run ρ be m. If m≥ U, then we show that the run can be shortened to keep the counter value below U, and the new run along with the trimmed part can be simulated by . Let ρ_m be the shortest prefix of ρ_2 such that at the end of ρ_m the counter value is m. 
For each K≤ i≤ m, consider the pair of states (p_l^i, p_r^i), such that (p_l^i,i) be the last configuration in ρ_m with the counter value i, and (p_r^i,i) be the first configuration in ρ_2 after ρ_m such that the counter value is i. Since we have only K many vertices, we have K^2 such pairs. But i ranges from K+1 to K^2+K+1, by PHP, we have that for some K≤ i<j≤ m, (p_l^i, p_r^i) =(p_l^j, p_r^j). Then the runs (p_l^i,i) (p_l^j,j) (p_r^j,j) (p_r^i,i) can be removed from ρ_2 to reduce the counter value, and can be simulated in by edges of type 5, 6, and 7 in Δ_2. We do this repeatedly to get a shorter run in which does not exceed U, and simulate the trimmed parts by edges of type 5,6, and 7. And this shorter run can be trivially simulated by edges of type 1-4 in Δ_2. Hence, we can simulate ρ in , and w∈. * Let w∈, and let ρ be the witnessing run. Let the minimum value of the counter in ρ be m. If m≥ 0, then it is a run in , and there is nothing to show. Now suppose m is negative. Then let ρ=(q_0,0,1)(q_1,K,1)(q_2,K+1,2)(q_3,K+1,2)(q_4,K,1)(f,0,1). For 0≤ i≤ K, consider the state p_i such that (p_i,i,1) is the first configuration along (q_0,0,1)(q_1,K,1) with the counter value i. Then by PHP, there exist 0≤ i<j≤ K, such that p_i=p_j. Let the run between (p_i,i) to (p_j,j) be ρ_l, and the counter difference k_1=j-i. Similarly, there exist a similar run ρ_r with counter difference k_2 in (q_4,K,3)(f,0,3). Notice that ρ_l can be pumped to make the counter value arbitrary high, and similarly, ρ_r can be pumped to bring down the counter value from arbitrary high value. Moreover, observe that since m is negative, there must exist ρ_c=(q,n,2) (q,q)(q,q)(q,n,2), such that the counter value reduces by m'>0 after this execution. Now consider a N such that k_1N + m>0. Then on pumping ρ_l k_2Nm' times, the counter value before ρ_c becomes n+k_1k_2Nm'. And executing u k_2m times makes the counter value n+k_1k_2Nm'+k_2mm'= n+ (k_1N+m)k_2m'>0. Then pumping ρ_r (k_1N+m)m' times brings the counter value to 0 in the end. However, this will give a run on word of the form w'=x_1x_2^K_1x_3ay_1u^K_2y_2bz_1z_2^K^3z_3 where K_1= k_2Nm', K_2 = k_2m and K_3 = (k_1N+m)m', such that x_1x_2x_3=x, y_1uy_2=y and z_1z_2z_3=z. But from lemma <ref>, we know that w w'. Since w'∈, w∈. With this we have shown that has the same downward closure as . And observe that the is a NFA with polynomially many states, K^3+3K^2+K, where K= |Q|. § MISSING PROOFS FROM <REF> Given a non-terminal X in a context-free grammar, one can construct a linear-size grammar for the language {u#v| X uXv }. Given a non-terminal X, one can compute in polynomial time the alphabets Γ_X and Γ_X. First, apply <ref> to construct a grammar for K={u#v| X uXv }. Then, we can decide whether a∈Γ_X by checking whether K intersects the regular language R_a={xay#v| x,y,v∈Σ_0^*}, for which one can construct a three-state automaton. Since intersection emptiness between a context-free language and a given regular language is decidable in polynomial time, the set Γ_X can be computed in polynomial time. An analogous argument holds for Γ_X. Since the context-free languages are closed under rational transduction, we use the standard triple construction (see e.g., <cit.>) to obtain new grammars after applying transductions. The technique allows construction of new grammar of size linear in the original grammar and polynomial in the size of the transducer. 
Given a CFG recognizing a language L, and a transducer defining a transduction T, the language TL is recognized by a CFG of size ||· |Q|^2, where Q is the number of states in . §.§ Proof of <ref> * We first apply <ref> to construct a grammar ' for K={u#v| X uXv }. Then consider the following transducer T, that first reads and outputs all the letters until an r is seen. Once an r is read, it outputs and keeps dropping subsequent letters until on reading another r it non-deterministically decides that r will not be seen before #. Then if another r is read before the #, it rejects the run by going to a non-final sink state. Otherwise, it outputs the letters that are seen until it encounters a #, and outputs it. Then it reads and outputs all the letters until a s is read, in which case it outputs , and continues dropping the letters until another s is read, and it non-deterministically decides to not read a s. It output all the letters after this s. Moreover, it goes to the sink state if it reads a letter with priority greater than r (and, priority greater than s) before the # (after the #). Note that the transducer only needs 7 states; one reject state, 3 states for the right part of #, and 3 for the left part. We apply the transducer T to the grammar ' to obtain a grammar of size 49|'|, which is linear in the original grammar, due to <ref>. Although we show the transducer for r,s>0, for the case of r=0 or s=0, the transducer just outputs or accordingly. §.§ Proof of <ref> * For the forward direction, let us assume that u#v u'#v'. Then suppose the largest priority occurring in u#v and u'#v' be p. Then there exists a witness block map ρ for u#v u'#v'. Now let # belongs to the m^th and n^th sub-p blocks of u#v and u'#v' respectively. We then show that u u'. Consider the block map ρ_u that maps i^th sub-p block of u to the ρ(i)^th sub-p block of u' for all i∈ [0,m]. By definition of ρ, u_i u'_ρ_u(i), where w_i denotes the i^th sub-p block of w for all i∈[0,n-1]. Moreover, ρ_u(0) = 0, and ρ_u(m)=n. It only remains to show that u_m u'_n. But this holds recursively, since the 0-block that # of u belongs to is subword smaller than the 0-block that # of u' belongs to. A similar argument shows that the block map ρ_v that maps i^th sub-p block of v to (ρ(i+m)-n)^th sub-p block of v' is the required witness block map. Now, for the other direction, let ρ_u and ρ_v be the witness block maps for u u' and v v'. Then consider the block map ρ such that i↦ρ_u(i), if i≥ n ρ_v(i-m)+n, otherwise. Again it suffices to show that the m^th sub-p block of u#v is block smaller than the n^th sub-p block of u'#v'. But this again recursively holds since the 0-block that # of u#v belongs to is subword smaller than the 0-block that # of u'#v' belongs to. §.§ Proof of <ref> * Due to <ref>, we again only construct a transducer of constant size that results in _X,r,s when applied to ' where ' is the grammar for the language K= {u#v| X uXv} obtained via <ref>. The case of _X,r,s is analogous. Consider the following transducer T. The transducer has a non-final sink state, which we call the rejecting state. The transducer reads the letters (with equal or less priority than r) and does not output anything (i.e. outputs ϵ), until it reads an r and decides to output the next sub-r block non-deterministically. It then outputs all the letters read till the next r, and then does not output the subsequent letters. On reading #, it outputs nothing, but verifies if the highest occurring letter in the right of # is s. 
If that is the case, it accepts, otherwise rejects. It is clear from the construction that T(') is the language _X,r,s. Note that the transducer has 5 states: one rejecting state, 3 to output sub-r block on the left of #, and one to verify if the word on the right of # is in s. The cases when r=0 or s=0 are rather straightforward, as the transducer just non-deterministically outputs one arbitrary letter from the corresponding side of #. §.§ Proof of <ref> * For every u#v∈σ_X,r,s(E_X,r,s), there are u'∈r and v'∈s with u u', v v', and X u'Xv'. Let u#v∈σ_X,r,s(E_X,r,s). Then let u=u_0ru_1ru_2r⋯ ru_k, for u_1r⋯ u_k-1r∈_X,r,s^*, and v=v_0sv_1sv_2s⋯ sv_l, for v_1sv_2s⋯ v_l-1s∈_X,r,s^*. That is, u_0 u_k# v_0 v_k∈ E_X,r,s. This implies that there exists (and ), such that u_0 (v_0) and u_k (v_l) are respectively the first and the last sub-r (sub-s) blocks of (). Let the production rule sequence X* X be denoted by ρ_0. Then by the definition of _X,r,s, we have production rule sequences _i X* e_iXf_i, such that u_i is a sub-r block of e_i, and f_i∈s. Similarly, there are production rule sequences _j X* g_jXh_i such that v_j is a sub-s block of h_j, and g_j∈r. Then the derivation sequence ρ_0_1⋯_k_1⋯_lρ_0 gives X u'Xv', where u'= e_1e_2⋯ e_k g_1⋯ g_l and v'= f_1f_2⋯ f_k h_1⋯ h_l. It is easy to see that u u' and v v'. §.§ Proof of <ref> * Of course, for zero expansions, there is nothing to prove, so suppose we have a derivation in ' with k expansions and let t be a derivation tree obtained by an expansion step from t_0 by replacing the X-labeled node x. Moreover, let X uwv be the derivation using new productions inserted at x. Then by construction, we know that u#v∈σ_X(E_X). By <ref>, this implies that there is a word u'#v'∈σ_X(E_X) with u',v'∈Σ_p^*, u u', and v v' Therefore, by <ref>, there exist u”,v”∈Σ_p^* with u' u” and v' v” and a derivation X u”Xv” in . In particular, there is a derivation X u”wv” in . Now consider the derivation t_1 obtained from t_0 by replacing x with the derivation X u”wv”. Then t_1 derives a new word of the form α u”β v”γ, where αβγ is the word derived by t_0. Since t_1 only uses productions from in addition to those in t_0, we know that t_1 needs just k-1 expansions. Hence, we know by induction that α u”β v”γ∈(). Since is multiplicative, we have α uβ vγα u'β v'γα u”β v”γ∈(). Since α uβ vγ is the word generated by t and belongs to (), this completes the proof. §.§ Proof of <ref> * Consider a derivation tree t in '. We pick t so that it minimizes the number of nodes whose label repeats below them. Note that each path in t alternates between productions from and segments using newly introduced productions. By induction, we may assume that within each segment, no non-terminal repeats. Now observe that if any new non-terminal that occurs in two segments, then these two segments must come from the same grammar '_X and thus X must repeat on that path. Therefore, if there is any repetition, there is also a repetition of some non-terminal X of . On the path where X repeats, pick the top-most and the lowest occurrence of X. Between these two, we have a derivation X_' uXv. This derivation can be replaced by a single derivation using new productions. Moreover, by our choice of occurrences of X, this replacement will not introduce cross-segment repetitions. Thus, we obtain a new derivation with (at least) one fewer repetition. By applying this argument again and again, we arrive at a derivation with no repetitions. 
We have thus shown: For every derivation tree t in ' with repetition of non-terminals, there exists an equivalent acyclic tree in '. This, with <ref>, gives the desired result.
http://arxiv.org/abs/2307.06199v1
20230712144034
COVID-19 incidence in the Republic of Ireland: A case study for network-based time series models
[ "Stephanie Armbruster", "Gesine Reinert" ]
stat.AP
[ "stat.AP" ]
COVID-19 incidence in the Republic of Ireland: A case study for network-based time series models. Stephanie Armbruster^* (Department of Biostatistics, Harvard University, 655 Huntington Avenue, Boston, MA 02115, USA; ^*Corresponding author: [email protected]) and Gesine Reinert (Department of Statistics, University of Oxford, 24-29 St Giles, Oxford OX1 3LB, UK). August 12, 2023. Network-based time series models have experienced a surge in popularity over the past years due to their ability to model temporal and spatial dependencies such as those arising from the spread of an infectious disease. As statistical models for network time series, generalised network autoregressive (GNAR) models have been introduced. GNAR models are vertex-based models which have an autoregressive component modelling temporal dependence and a spatial autoregressive component to incorporate dependence between neighbouring vertices in the network. This paper compares the performance of GNAR models with different underlying networks in predicting COVID-19 cases for the 26 counties in the Republic of Ireland. The dataset is separated into subsets according to inter-county movement regulations and categorized into two pandemic phases, restricted and unrestricted. Ten static networks are constructed based on either general or COVID-19-specific approaches. In these networks, vertices represent counties, and edges are built upon neighbourhood relations, such as railway lines. We find that, for the prediction task, no underlying static network is consistently superior in either the restricted or the unrestricted phase; in pandemic phases with restrictions, sparse networks perform better, while in unrestricted phases dense networks explain the data better. GNAR models have higher predictive accuracy than ARIMA models, which ignore the network structure. ARIMA and GNAR models perform similarly in pandemic phases with more lenient or no COVID-19 regulation. These findings indicate evidence of network dependencies in the restricted phase, but not in the unrestricted phase. They also show some robustness regarding the network construction method. An analysis of the residuals justifies the model assumptions for the restricted phase but raises questions for the unrestricted phase. 2020 Mathematics Subject Classification: 62M10, 05C82, 91D30. Keywords: Network-based time series, COVID-19, spatial models, networks. § INTRODUCTION In recent years, statistical models which incorporate networks and thereby acknowledge spatial dependencies when predicting temporal data have experienced a surge in popularity (e.g. <cit.>, <cit.>, <cit.>). Against this backdrop, Knight et al. <cit.> developed a generalised network autoregressive (GNAR) time series model which, in addition to the standard temporal dependence when modelling time series data, also incorporates a second type of dependence, captured in a network. In <cit.> the proposed network-based time series model is leveraged to predict mumps incidence across English counties during the British mumps outbreak in 2005. Similar to mumps, COVID-19 is a highly infectious disease spread by direct contact between people <cit.>.
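To make the GNAR model class referred to above concrete, a commonly used GNAR(p, [s_1,…,s_p]) parametrisation following Knight et al. can be sketched as follows; the exact notation and modelling choices used later in this paper (for instance global versus vertex-specific autoregressive parameters, or time-varying weights) may differ. The value X_{i,t} observed at vertex (county) i and time t is modelled as

X_{i,t} = ∑_{j=1}^{p} ( α_{i,j} X_{i,t-j} + ∑_{r=1}^{s_j} β_{j,r} ∑_{q ∈ N^{(r)}(i)} ω_{i,q} X_{q,t-j} ) + u_{i,t},

where N^{(r)}(i) denotes the r-th stage neighbourhood of vertex i in the underlying network (defined formally in the Methodology section), the ω_{i,q} are connection weights (in the simplest case the reciprocal of the neighbourhood size), the α terms capture temporal autoregression, the β terms capture network autoregression, and u_{i,t} is white noise.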
Human movement networks have been extensively relied upon to explain COVID-19 patterns (e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>). Therefore, it is a natural conjecture that such movement networks may help predict the spread of COVID-19. This paper * fits GNAR models to predict the weekly COVID-19 incidence for all 26 counties in the Republic of Ireland, exploring different network constructions; * assesses the prevalence of a network effect in COVID-19 incidence in Ireland and the suitability of GNAR models to predict epidemic outbreaks as complex as COVID-19; * investigates the influence of changes in inter-county mobility, due to COVID-19 restrictions, on the performance of GNAR models as well as on the model parameters and hyperparameters. The networks are constructed according to general approaches based on statistical definitions of neighbourhoods as well as approaches specific to the infectious spread of the COVID-19 virus. For each network, for prediction with the GNAR model we select the best performing hyperparameter values using the Bayesian Information Criterion. By splitting the available Irish data into two phases of the pandemic, restricted and unrestricted, we are able to investigate the potential change in the temporal and spatial dependencies in COVID-19 incidence between the two phases. This paper is organised as follows. Section <ref> introduces the data set. The methodology for network construction and for network-based time series modeling is described in Section <ref>. Section <ref> provides an exploratory data analysis, while the model fit is shown in Section <ref>. The conclusions for the different pandemic phases are found in Section <ref>. The results are discussed in Section <ref>; concluding remarks are provided in Section <ref>. The extensive Supplementary Material include the history of the COVID-19 outbreak in Ireland as well as additional material, visualising the COVID-19 data as well as the constructed networks, and illustrating the performance of the network-based time series models on the COVID-19 networks. The code as well as the original and processed data for the paper is available on GitHub: <https://github.com/stephanieArmbru/Case_study_GNAR_COVID_Ireland.git>. § THE IRISH COVID-19 DATA SET By March 2023, the Republic of Ireland had recorded a total of 1.7 million confirmed COVID-19 cases and 8,719 deaths <cit.> since the beginning of the pandemic. In this paper, from now on we abbreviate the Republic of Ireland by Ireland. The Health Protection Surveillance Centre identified four main variants of concern for the COVID-19 virus in Ireland <cit.>, in addition to the original variant: Alpha from 27.12.2020, Delta from 06.06.2021, Omicron I from 13.12.2021, and Omicron II from 13.03.2022 (Figure 1a in <cit.>). The open data platform <cit.>, <cit.> by the Irish Government provides weekly updated multivariate time series data on confirmed daily cumulative COVID-19 cases for all 26 Irish counties, starting from the beginning of the pandemic in February 2020. A COVID-19 case is attributed to the county the patient has their primary residence in[This attribution is only partially reliable due to a lack of validation during infection surges <cit.>.] . The cumulative case count is given for 100,000 inhabitants and population corrected according to the population size from the 2016 census of Ireland <cit.>. 
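As a concrete illustration of this data format, the following minimal Python sketch converts daily cumulative counts per 100,000 inhabitants into a weekly incidence table with one column per county. The column names (date, county, cumulative_per_100k) and the file name are assumptions for illustration; the actual schema of the open data platform and the authors' own preprocessing may differ.

import pandas as pd

def weekly_incidence(csv_path: str) -> pd.DataFrame:
    """Turn daily cumulative cases per 100,000 into weekly incidence per county."""
    df = pd.read_csv(csv_path, parse_dates=["date"])  # hypothetical column names
    # one column per county, one row per day, values = cumulative cases per 100k
    wide = df.pivot_table(index="date", columns="county",
                          values="cumulative_per_100k").sort_index()
    # keep the last reported cumulative value of each week, then difference
    weekly_cum = wide.resample("W").last()
    weekly_inc = weekly_cum.diff().dropna(how="all")
    return weekly_inc

# weekly = weekly_incidence("covid_ireland.csv")  # placeholder file name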
In the data used in this paper, the first COVID-19 case was registered in Dublin on 02.03.2020 and the last reported date is 23.01.2023, spanning a total of 152 weeks <cit.>. From 20.03.2020 onward, COVID-19 cases had been recorded in every Irish county. The daily COVID-19 data is aggregated to a weekly level to avoid modelling artificial weekly effects <cit.>, <cit.>. Due to delayed reporting during winter 2021/22, the weekly COVID-19 incidences from 12.12.2021 to 27.02.2022 are averaged over a window of 4 weeks <cit.>, <cit.>, <cit.>. The main COVID-19 regulations restricting physical movement and social interaction between Irish counties <cit.>, <cit.>, <cit.>, <cit.> are leveraged to naturally split the data into five subsets. The standard deviation in COVID-19 incidence across the 26 Irish counties, averaged over the considered time period, is indicated by σ.
(1) Start of the pandemic, with gradually stricter movement restrictions and lockdowns (restricted phase), from 27.02.2020 for 25 weeks (σ = 19.17);
(2) County-specific movement restrictions (less restricted phase), from 18.08.2020 for 18 weeks (σ = 35.05);
(3) Level-5 lockdown (restricted phase), with inter-county travel restrictions, from 26.12.2020 for 20 weeks (σ = 148.79);
(4) Allowance of non-essential inter-county travel (less restricted phase), from 10.05.2021 for 43 weeks (σ = 173.17);
(5) End of all restrictions (unrestricted phase), from 06.03.2022 for 46 weeks (σ = 101.46).
When split according to major movement restrictions, the individual data subsets have too few observations to train GNAR models and assess their prediction accuracy. Hence, the datasets are grouped as follows. Datasets 1 and 3 are concatenated, representing pandemic situations with strict COVID-19 restrictions, including an inter-county travel ban <cit.>; we call this the restricted data set. The restricted dataset contains 45 weeks of observation (average standard deviation across time period and counties σ_r = 99.27). Datasets 2, 4 and 5 represent periods with fewer or no regulations, in particular no inter-county travel limitations <cit.>. We concatenate these data sets to give the unrestricted data set. It has 107 weeks of observation (average standard deviation across time period and counties σ_ur = 129.62). For the concatenation, the time gaps are inserted as missing data, 23.08.2020 - 20.12.2020 for the restricted dataset, and 27.12.2020 - 09.05.2021 for the unrestricted dataset.

§ METHODOLOGY We first fix some notation. Throughout this paper, 𝒢 = {𝒱, ℰ} is a deterministic undirected network with vertex set 𝒱 containing N vertices and edge set ℰ; an edge between vertices i and j is denoted by i ∼ j. All networks are unweighted and simple, without multiple edges and without self-loops. The neighbourhood of a subset of vertices A ⊂𝒱 is defined as the set of neighbours outside of A to the vertices in A, N(A) = ⋃_i ∈ A{ j ∈𝒱\ A: i ∼ j}. The set of r^ th-stage neighbours, or the r^ th-stage neighbourhood, of a vertex i ∈𝒱 is defined recursively as N^(0)(i) = { i} and N^(r)(i) = N( N^(r-1)(i) ) \⋃_q = 1^r-1 N^(q)(i) .

§.§ COVID-19 networks: constructions and properties COVID-19 specific networks can be constructed intuitively, leveraging that human mobility has a shaping influence on disease spread (<cit.>, <cit.>, <cit.>, <cit.>, <cit.>). Hence our first set of networks is based on geographical approaches, as follows.
In the Railway-based network, an edge is established between two counties if there exists a direct train link between the respective county towns (without change of trains) and the county towns are closest to each other on this train connection[If the county town is not connected via railway, it is substituted by the largest town in the county or any town that lies on the train network. Substitutions are required for: Naas - Newbridge, Trim - Enfield. For county Kildare, the county town Naas does not lie on the rail network and is therefore replaced by Newbridge, the most populous town in Kildare. Trim, the county town for county Meath, is not included in the train network and is therefore substituted by Enfield, the only town in Meath with a train connection. Counties Monaghan, Cavan and Donegal are not reachable by train. We assume that the respective county towns are connected to their nearest railway station in a neighbouring county by bus, linking Cavan to Longford, Monaghan to Dundalk and Lifford to Sligo.]. The Queen's contiguity network connects each county with the counties it shares a border with <cit.>. The Economic hub network adds an additional edge between each county and its nearest economic hub, Dublin, Cork, Limerick, Galway or Waterford, to the Queen's contiguity network[Economic hubs are identified as the largest cities with highest economic power. The five "power-houses of Ireland's economic success" are Dublin, Cork, Limerick, Galway and Waterford, where technology and pharmaceutical companies as well as deep-water ports create high-paying jobs <cit.>.]. To measure the distance to the nearest economic hub we use the Great Circle distance d_C(i, j), the shortest distance between two points on the surface of a sphere <cit.>. For two points i, j with latitude δ_i, δ_j and longitude λ_i, λ_j on a sphere of radius r > 0, d_C(i, j) = r · cos^-1( sin(δ_i) · sin(δ_j) + cos(δ_i) · cos(δ_j) · cos(λ_i - λ_j) ) . The K-nearest neighbours network (KNN) connects a vertex with its K nearest neighbours with respect to d_C <cit.>, <cit.>. The distance-based-nearest-neighbour network (DNN) constructs an edge between counties if their Great Circle distance d_C lies within a certain range [l, r] <cit.>. For the COVID-19 network, we set l = 0 and consider r a hyperparameter, chosen large enough to ensure that no vertex is isolated. The maximum value for r is the largest distance between any two vertices, for which the construction returns a fully connected network <cit.>. In addition to these geographical networks, the Delaunay triangulation constructs triangles between vertices such that no vertex lies within the circumcircle of any constructed triangle <cit.>, thus ensuring that there are no isolated vertices. The Gabriel, Sphere of Influence and Relative neighbourhood networks are obtained from the Delaunay triangulation network by omitting certain edges. In a Gabriel network, vertices x and y in Euclidean space are connected if they are Gabriel neighbours; that is, d(x,y) ≤min_z ∈𝒱∖{x, y}√(d(x,z)^2 + d(y,z)^2) , where d(x, y) = √(∑_i = 1^n(x_i - y_i)^2) denotes the Euclidean distance. In a Sphere of Influence network (SOI), long edges in the Delaunay triangulation network are eliminated and only edges between SOI neighbours are retained, as follows. For x ∈𝒱 and d_x the Euclidean distance between x and its nearest neighbour in 𝒱, let C_x denote the circle centred around x with radius d_x. For y ∈𝒱 the quantities d_y and C_y are defined analogously.
Vertices x and y are SOI neighbours if and only if C_x and C_y intersect at least twice, preserving the symmetry property of the Delaunay triangulation <cit.>. The Relative neighbourhood network only retains edges between relative neighbours, d(x,y) ≤min(. max(d(x,z), d(y,z)) | z ∈𝒱) . The Relative neighbourhood network is contained in the Delaunay triangulation, SOI and Gabriel network, and is the sparsest of the four networks <cit.>. Finally, the Complete network represents the homogeneous mixing assumption, where every county has an influence on every other county and hence every county is connected to every other county <cit.>. Figure <ref> shows the Queen's contiguity network and the railway-based network for Ireland. Figures of the other networks are found in the Supplementary Material <ref>; network summaries are provided in Table <ref>. §.§ Generalised network autoregressive models Network-based Time Series models incorporate non-temporal dependencies in the form of networks in addition to temporal dependencies as established in time series models <cit.>, <cit.>, <cit.>[See the Supplementary Material <ref> for a more detailed literature review on network-based time series models.]. In contrast to standard time series methodology and spatial models <cit.>, <cit.>, <cit.>, network-based time series models are not limited to geographic relationships but can incorporate any generic network. As COVID-19 is an infectious disease with spatial spreading behaviour, warranting constructing networks based on spatial information, we use terms relating to spatial dependence in our exposition. Other types of dependence could easily be incorporated in the model through networks which reflect the hypothesised dependence. In this paper, we use the global α generalised network autoregressive models GNAR(p, s) to model the observation X_i, t for a vertex i at time t as the weighted linear combination of an autoregressive component of order p and a network neighbourhood autoregressive component of a certain order (neighbourhood stage); for i=1, …, p, the entry s_i gives the largest neighbourhood stage considered for vertex i when regressing on up to p past values. The effect of neighbouring vertices depends on some weight ω_i, q. Our GNAR model is given by X_i, t = ∑_j = 1^p ( α_i,j X_i, t-j + ∑_r = 1^s_j∑_q ∈ N^(r)(i)β_j, r ω_i, q^(t) X_q, t-j) + ε_i, t where ε_i,t∼ N(0, σ^2_i) are uncorrelated[We define ∑_r = 1^0 (.) := 0.]. This model is a special case of the GNAR model in <cit.>, <cit.> [See the matrix form of the GNAR model in Supplementary Material <ref>.]. A typical choice for weighting is the normalised inverse shortest path length weight, where d_i, q denotes the shortest path length (SPL) <cit.>; in connected networks, 1 ≤ d_i, q < ∞ for i q. For i, q ∈𝒱 such that q ∈ N^(r)(i), ω_i, q = d_i, q^-1/∑_k ∈ N^(r)(i) d_i, k^-1 . The GNAR model (<ref>) relies on vertex specific coefficients α_i, j. The global-α model is a simplification of this GNAR model which assumes vertex unspecific autoregressive coefficients, ∀ i ∈{1, ..., N}: α_i, j = α_j. We denote the GNAR model in (<ref>) by GNAR-p-s_1, …, s_p-FALSE, to indicate vertex-specific coefficients α_i,j; the global-α version is simply denoted by GNAR-p-s_1, …, s_p. To fit a GNAR model, we must choose the lag p, or α-order, and the vector of neighbourhood stages, s = (s_1, ..., s_p), also called β-order. They can be determined either through expert knowledge, e.g. on the spread of infections, or through a criterion-based search <cit.>. 
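Before turning to estimation, a short sketch may help to make the geographic constructions of Section <ref> and the weights in (<ref>) concrete. The Python snippet below builds a KNN network from county coordinates via the Great Circle distance and computes the r^ th-stage neighbourhoods together with the normalised inverse shortest path length weights; the coordinates, the earth radius and the choice of k are placeholders, and this is not the construction code used for the analysis in this paper.

import numpy as np
import networkx as nx

def great_circle(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great Circle distance d_C between two points given in degrees.
    radius_km is an assumed mean Earth radius."""
    p1, l1, p2, l2 = map(np.radians, (lat1, lon1, lat2, lon2))
    c = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(l1 - l2)
    return radius_km * np.arccos(np.clip(c, -1.0, 1.0))

def knn_network(coords, k):
    """KNN network: connect each vertex to its k nearest neighbours w.r.t. d_C."""
    n = len(coords)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        d = [great_circle(*coords[i], *coords[j]) if j != i else np.inf
             for j in range(n)]
        for j in np.argsort(d)[:k]:
            G.add_edge(i, int(j))  # undirected, so the union over directions is taken
    return G

def stage_neighbourhoods(G, i, r_max):
    """N^(r)(i) for r = 1, ..., r_max, via shortest path lengths."""
    spl = nx.single_source_shortest_path_length(G, i)
    return {r: [j for j, d in spl.items() if d == r] for r in range(1, r_max + 1)}

def spl_weights(G, i, r):
    """Normalised inverse shortest path length weights omega_{i,q}, q in N^(r)(i)."""
    spl = nx.single_source_shortest_path_length(G, i)
    nbrs = [j for j, d in spl.items() if d == r]
    inv = {j: 1.0 / spl[j] for j in nbrs}
    total = sum(inv.values())
    return {j: w / total for j, w in inv.items()}

# toy example with hypothetical coordinates (latitude, longitude) for 5 counties
coords = [(53.35, -6.26), (51.90, -8.47), (53.27, -9.05), (52.66, -8.62), (52.26, -7.11)]
G = knn_network(coords, k=2)
print(stage_neighbourhoods(G, 0, r_max=2), spl_weights(G, 0, r=1))

Because every vertex in N^(r)(i) is at shortest path length exactly r from i, the weights within a stage are equal, ω_i, q = 1/|N^(r)(i)|; this is the form of the weights also used in the simulation in the Supplementary Material.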
Consistent values for the GNAR coefficients are then estimated by Least Squares (LS) estimation for i.i.d. error terms, or by Estimated Generalised Least Squares (EGLS) estimation for spatially correlated error terms <cit.>, <cit.>, <cit.>[Additional information in Supplementary Material <ref>].

§.§ GNAR model selection and predictive accuracy For our analysis of the Irish COVID-19 data, model selection, i.e. the choice of α- and β-order, is criterion-based. In accordance with <cit.>, the model which achieves the lowest Bayesian Information Criterion (BIC) is chosen. The BIC avoids overfitting by penalising the likelihood of the data under a given parametric distribution by the dimensionality of the required parameter <cit.>. For a sample X of size n and a parameter θ of dimension k, BIC(k, n) = k log(n) - 2 ·log(L(X; θ)) . The GNAR package assumes Gaussian errors <cit.>; under this assumption, the BIC is consistent. This assumption could be weakened; it can be shown that the BIC is consistent for the GNAR model (<ref>) if the error term is i.i.d. with bounded fourth moments <cit.>, <cit.>, <cit.>. The predictive accuracy of a GNAR model is measured by the mean absolute scaled error (MASE). The MASE for county i at time t is the absolute forecasting error ε̂_i, t = |X_i, t - X̂_i, t| divided by the mean absolute error between the true values and a naive 1-lag random walk forecast over the entire observed time period [1, T] <cit.>, <cit.>, |q_i, t| = |X_i, t - X̂_i, t | / ( (1/(T-1)) ∑_l = 2^T |X_i, l - X_i, l - 1| ) . MASE is chosen due to its insensitivity towards outliers, its scale invariance and its robustness <cit.>.

§ DATA EXPLORATION §.§ The weekly incidence differences GNAR models require stationary data <cit.>. To remove any linear trend, we carry out 1-lag differencing of the weekly COVID-19 incidence for the 26 Irish counties, giving the incidence difference, (1-lag) COVID-19 ID, between two subsequent weeks <cit.>[The stationarity is assessed by applying a Box-Cox transformation to each subset. As evident from Figure <ref> in the Supplementary Material <ref>, the values for λ achieving maximal likelihood fall close (enough) to 1, indicating that no further transformation is required.]

§.§ Constructed networks First, we construct networks with the 26 counties as nodes. All but two of the network construction methods do not require any parameter settings; the exceptions are the KNN network and the DNN network. For the COVID-19 KNN network, neighbourhood sizes from k = 1 up to the fully connected network, k = 25, in steps of 2 are considered. The minimal distance for the COVID-19 DNN network measures 90.3 km, between Kerry and Cork, and the maximal value 338.5 km, between County Cork and Donegal. The KNN and DNN network parameters are chosen to minimise the BIC of the associated GNAR model. This is achieved for k = 21 and d = 325 km. Table <ref> shows average degree, average SPL and average local clustering coefficient for the 9 COVID-19 networks (excluding the complete network). There is considerable variability in particular regarding the network density, with the KNN and DNN networks having much larger average degree than the other networks; the sparsest network is the Relative neighbourhood network. Some of the ordering is not surprising; the Economic hub network expands the Queen's contiguity network with additional edges between a county and its nearest economic hub.
The degree for each vertex, and hence the average degree compared to the Queen's contiguity network are increased by 1. Similarly, the SPL is shortest in the denser DNN and KNN networks. The Railway-based network has the longest average SPL due to its vertex chains and the low number of shortcuts between counties. For the Queen's contiguity network, the introduction of shortcuts to the economic hubs leads to a decrease in average SPL, i.e. the disease spreads quicker. The Gabriel network is sparser than the SOI network, with slightly longer shortest path length. Deleting long edges in the Delaunay triangulation network to obtain the SOI network decreases the average degree and the average local clustering coefficient, but increases the average SPL. To assess small world behaviour, Table <ref> also provides the corresponding average SPL and average local clustering for a Bernoulli G(n,m) random graph on n=26 vertices, with the same number of edges as the constructed network. Small world behaviour would be indicated by an average SPL which is of the same order as the G(n,m) graph, while having higher average local clustering coefficient. This criterion is satisfied for the Queen's network, the Economic hub network, the Delaunay network, the Gabriel network, and the SOI network. The Railway network has much larger average SPL than the G(n,m) network, while the dense KNN and DNN networks have almost the same average SPL and local clustering coefficient as the G(n,m) network. Real-world networks are frequently scale-free <cit.>, <cit.>, in the sense that the degree distribution possesses the scale-free property. As evident from calculations and log-log plots in the Supplementary Material <ref>, no COVID-19 network fulfills this property. §.§ Spatial effects Next, we investigate whether the constructed networks capture any spatial correlation. Intuitively, for a spatial effect, the closer in SPL two vertices on a network are, the more highly correlated their COVID-19 incidences should be. We compute Moran's I for each network, which measures spatial correlation across time <cit.>, <cit.>, <cit.>. For some time t ∈ T and x_i^(t) denoting the COVID-19 ID for county i at time t, Moran's I follows as the average weighted correlation across space, I^ t = ∑_i = 1^N ∑_j = 1, i ≠ j^N w_ij· (x^(t)_i - x^(t)) (x_j - x^(t))/W_0 ·1/N∑_i (x_i^(t) - x^(t))^2 where W_0 = ∑_i, j = 1^N w_ij for normalisation. The weights w_ij can be chosen arbitrarily. For the COVID-19 networks, we select inverse distance weighting where distance is measured in Great circle distance between the central points of two counties [For non-neighbours, the weights are zero, i.e. ∀ r: j ∉N^(r)(i): w_ij = 0. Irish counties can be classified into rural and urban counties according to <cit.>. The central point for rural counties is the centroid of the geographic area of the county, while it is the central town for urban counties.]. This weighting is similar to that proposed in <cit.>, suggesting exponential weights w_ij = e^- d_ij, to account for an exponential decrease in correlation as the SPL between vertices grows. The spatial dependency between counties varies strongly over time for every network, see Figure <ref> and Figure <ref> in Supplementary Material <ref>. Peaks in Moran's I coincide with peaks in the 1-lag COVID-19 ID at the beginning of the pandemic as well as during the winters 2020/21 and 2021/22. The introduction of restrictive regulations, e.g. 
lockdowns, leads to a decreasing trend in Moran's I, while the easing of restrictions from summer 2021 onward results in an increasing trend in Moran's I. Hence, there is an indication of a network effect in the data, i.e. that the pandemic influence of one county on another is associated with the inter-county mobility of their inhabitants. This is in particular visible after the official end of restrictions in March 2022. For a statistical evaluation of the spatial dependency, we apply a permutation test in combination with Moran's I to assess the relationship between network distance and COVID-19 case correlation. For each date, we permute the COVID-19 cases between counties R = 100 times and compute Moran's I with exponential weights <cit.>. A date-specific 95% credibility interval [m_t, l, m_t, u], t = 1, …, T, based on empirical quantiles (q = 0.025, 0.5, 0.975) is constructed. Under the null hypothesis, assuming no correlation between network structure and COVID-19 incidence, 5% (0.05 · T ≈ 8) of the observed Moran's I values m_t over time t = 1, …, T are expected to lie outside the time dependent 95% credibility interval [m_t, l, m_t, u]. If the proportion N_m = T^-1∑_t {𝕀(m_t > m_t, u ) + 𝕀( m_t < m_t, l)} of rejected tests over time is greater than expected under the null, we conclude that the network distance has an effect on the correlation in COVID-19 incidence between counties[The constructed test is not a proper significance test in the statistical sense, given the dependence between tests over time. It rather provides a rough intuition regarding the spatial correlation in COVID-19 incidence assuming different underlying networks.]. The proportion of rejected tests is indicated for the restricted and unrestricted data set: Railway-based network N_m = (0.25, 0.142), Queen's contiguity network N_m = (0.227, 0.217), Economic hub network N_m = (0.25, 0.179), KNN network N_m = (0.091, 0.151), DNN network N_m = (0.068, 0.094), Delaunay triangulation network N_m = (0.205, 0.189), Gabriel network N_m = (0.114, 0.132), SOI network N_m = (0.182, 0.198), Relative neighbourhood network N_m = (0.159, 0.189). For every network, the proportion of rejected tests lies above the expected α = 0.05, supplying evidence for a relationship between network distances and case correlations. Depending on the network, the proportion for either the restricted or the unrestricted data set is larger.

§ RESULTS §.§ GNAR model fitting To assess whether GNAR models may be more appropriate than standard ARIMA models for the data, we compare standard ARIMA models with their GNAR counterparts. The ARIMA models, fitted to each county individually, achieve an average BIC = 1846.882 over all counties on the entire data set, BIC = 534.58 for the restricted data set and BIC = 670.27 for the unrestricted data set. The optimal[Optimal describes the best performing combination of α- and β-order as well as global-α setting and weighting scheme which obtain the minimal BIC value. The a-priori range of the α-order spans {1, ..., 5}. The possible choices for the β-order are combinations of 1^st- and 2^nd-stage neighbourhoods, see Supplementary Material <ref> for a detailed enumeration. We use the term "complex" to describe models which have large α-order and/or large β-order with many large, non-zero components.] GNAR models for each COVID-19 network achieve a much lower BIC. On the entire data set, the GNAR-5-11110 model with the KNN network (k = 21)[From here on referred to as the KNN network.]
achieves the lowest BIC = 194.75[For more detail on fitting a GNAR model to each COVID-19 network on the entire data set, see the Supplementary Material <ref>.]. For the restricted phase, the best GNAR models yield BIC=91.41 on the Queen's contiguity network, and for the unrestricted phase BIC=190.17 on the KNN (k = 21) network, see Table <ref>. This observation justifies the use of GNAR models here. As detailed in Section <ref>, the nature of the virus suggests that the transmission of COVID-19 between Irish counties may depend strongly on the population flow between counties <cit.>. Protective COVID-19 restrictions taken by the Irish Government restricted and at times forbade inter-county travel in Level 3-5 lockdowns <cit.>, <cit.>. As supported by the positive and negative trends in Moran's I, the spatial dependence of COVID-19 incidence across counties is likely to have decreased during lockdowns and increased during periods in which inter-county travel was allowed <cit.>. This motivates training a GNAR model which is specific to pandemic phases. Due to limited sample size, GNAR models cannot be fitted to each data subset defined by governmental COVID-19 restrictions. Therefore, we resort to concatenating datasets representing restricted and unrestricted pandemic phases. §.§ Pandemic phases Table <ref> summarizes the optimal GNAR models and COVID-19 network for the restricted and unrestricted data set. For both phases, the best performing GNAR model select an autoregressive component of order 5. The average residual as well as the average MASE are smaller for the restricted than the unrestricted pandemic phases, implying that here GNAR models are more suited to predicting periods with strict regulations than periods with fewer or no restrictions. The increase in accuracy here is associated with higher variance. The optimal network for the unrestricted pandemic phase is much denser than the optimal network for the restricted phase. This might indicate the effect of inter-county travel bans on COVID-19 infections. As evident from Tables <ref>, <ref> in Supplementary Material <ref>, the BIC value for the optimal GNAR model lie within the range [91.41, 93.27] for the restricted data set and within the range [190.17, 192.54] for the unrestricted data set. Figure <ref> in Supplementary Material <ref> illustrates that denser networks perform better for the unrestricted data set while sparse networks achieve lower BIC for the restricted data set. A decrease in inter-county dependence due to COVID-19 restrictions should result in decreasing values for the β-coefficients in the GNAR model. This hypothesis can only be partially verified, see Figure <ref>. The absolute value of β-coefficients increases from the restricted to the unrestricted phase, implying increased spatial dependence after COVID-19 restrictions have been eased or lifted. Interestingly, the GNAR model picks up a decrease in temporal dependence in COVID-19 ID. As a disease spreads more freely due to lenient or no restrictions, it has been observed in other data studies that case numbers can grow more erratic and become less dependent on historic data <cit.>, <cit.>. This effect, in addition to peaks and high volatility in COVID-19 ID observed during pandemic phases with less restrictive regulations, might contributed to the negative α-coefficient values for the unrestricted data set. 
Identical observations can be made when considering how the coefficients develop between the restricted and unrestricted phase for the GNAR model that is optimal for the entire data set, namely, GNAR(α = 5, β = (1, 1, 1, 1, 0)) with the KNN (k = 21) network; the β-coefficients increase in absolute value for the unrestricted phase compared to the restricted phase, see Figure <ref> in Supplementary Material <ref>. The predictive accuracy for both datasets is comparable and varies from county to county, see Figures <ref> and <ref> for 9 example counties, MASE values for the remaining counties follow similar patterns. For the restricted phase, GNAR models achieve lower MASE than the ARIMA models except for counties Donegal, Cavan, Leitrim, Monaghan and Sligo, for which the ARIMA model performs equally well. These counties border to Northern Ireland and might suffer from a border effect between the Republic of Ireland and Northern Ireland. Throughout the pandemic, no strict border closure between Ireland and Northern Ireland was enforced <cit.>, <cit.>. The constructed GNAR models based on Ireland-restricted networks do not account for the disease spread across the Irish border. For the restricted phase, the predictions differ more strongly between the GNAR model and the ARIMA model, see Figure <ref> in Supplementary Material <ref>. For the unrestricted phase, the GNAR and ARIMA models follow roughly the same trajectory while achieving smaller residuals for most counties. This might imply a stronger network effect during the unrestricted pandemic phase and may point to some efficiency of COVID-19 regulations to contain inter-country spread. The above model fits assume that the observations follow a Gaussian i.i.d. error structure. To assess this assumption, we test whether the residuals ε̂_i,t follow a normal distribution with a county-specific Kolmogorov-Smirnov test, aggregated over time. We obtain primarily insignificant p-values across counties for the restricted phase (# p ≤ 0.025 = 6, # p > 0.025 = 20)[Significant p-values were established in counties Donegal, Dublin, Kilkenny, Laois, Offaly and Sligo] and majority significant p-values across counties for the unrestricted phase (# p ≤ 0.025 = 21, # p > 0.025 = 5)[The Kolmogorov-Smirnov test was insignificant for counties Donegal, Kerry, Longford, Roscommon, Sligo.]. Table <ref> in Supplementary Material <ref> details the average MASE, average residual and p-value for each county, resulting for the two optimal GNAR models for the restricted and unrestricted data set. The Gaussian nature of residuals indicate suitability of the GNAR model to model restricted pandemic phases and ensure consistency in coefficient estimates. For the unrestricted phase, the Gaussianity in the model assumptions could not be statistically verified. These conclusions are supported by the county-specific QQ-plots in Supplementary Material <ref> . The GNAR model further assumes that the errors are uncorrelated. To assess this assumption, the residuals are investigated according to their temporal as well as spatial autocorrelation by applying the Ljung-Box test and Moran's I based permutation test <cit.>, <cit.>. The former concludes significant temporal correlation for short-term lags in the GNAR residuals for each county. Thus there is evidence that the GNAR model insufficiently accounts for temporal dependence in COVID-19 incidence in subsequent weeks. The residuals show remaining spatial autocorrelation. 
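The spatial part of this diagnostic reuses the Moran's I permutation check of Section <ref>, now applied to the residuals. A minimal sketch of that check is given below; it assumes a matrix with one row per week and one column per county (residuals or incidence differences) and a precomputed weight matrix W, for example with exponential weights w_ij = exp(-d_ij) and zero diagonal. It is an illustration of the procedure, not the code used for the reported numbers.

import numpy as np

def morans_i(x, W):
    """Moran's I for one time point: x is a length-N vector of county values,
    W an N x N spatial weight matrix with zero diagonal."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    num = (W * np.outer(z, z)).sum()
    den = W.sum() * (z ** 2).mean()
    return num / den

def permutation_band(x, W, n_perm=100, seed=0):
    """Empirical 2.5% / 97.5% quantiles of Moran's I under random relabelling."""
    rng = np.random.default_rng(seed)
    perms = [morans_i(rng.permutation(x), W) for _ in range(n_perm)]
    return np.quantile(perms, [0.025, 0.975])

def rejection_proportion(X, W, n_perm=100):
    """N_m: share of time points whose observed Moran's I falls outside the band.
    X has shape (T, N): one row of county values per week."""
    flags = []
    for x_t in X:
        lo, hi = permutation_band(x_t, W, n_perm)
        i_t = morans_i(x_t, W)
        flags.append(i_t < lo or i_t > hi)
    return np.mean(flags)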
The Moran's I based permutation test counts N_m = 9 Moran's I values outside the corresponding 95% credibility interval (expected 0.05 · 45 ≈ 2) for the restricted phase and N_m = 16 for the unrestricted phase (expected 0.05 ⋯ 107 ≈ 5). The reduction in spatial correlation for the restricted phases and the Queen's contiguity is greater (N_m = 10 on COVID-19 cases to N_m = 9 for residuals) than for the unrestricted phases and the KNN network (N_m = 16 for COVID-19 cases and residuals). We conclude that there is evidence that the GNAR model insufficiently incorporates the spatial relationship in COVID-19 case numbers across counties. These possible violations of the model assumptions have to be taken into account when interpreting the model fit. § DISCUSSION In this paper, we modelled the COVID-19 incidence across the 26 counties in the Republic of Ireland by fitting GNAR models, leveraging different networks to represent spatial dependence between the counties. We found that the GNAR model performs better on data collected during pandemic phases with inter-county movement restrictions than data gathered during less restricted phases. Sparse networks perform better for the restricted data set, while denser networks achieve lower BIC for the unrestricted data set, implying higher spatial correlation in COVID-19 incidence during pandemic periods with fewer restrictions. In addition, the GNAR model coefficient values hint at the efficiency of movement regulations. Here we discuss these findings in more detail, including limitations and challenges. A key challenge for modelling is that COVID data is characterised by high uncertainty e.g due to testing hesitancy, low testing capacity and double counting <cit.>, <cit.>, <cit.>. In general, such uncertainty is typical for data of spreads of epidemics <cit.>. For COVID-19 in particular, incidence data is extremely erratic due to its tendency for fast and sudden local outbreaks and due to many unreported cases <cit.>, <cit.>. §.§ Choice of network The fitted GNAR models make use of different underlying networks. While the performance of COVID-19 networks are similar, as ellaborated on in Section <ref>, each network has different assumptions and implications. The DNN network requires its lowest upper bound to be chosen such that it is larger than the maximum distance between any vertex and its nearest neighbour. The number of edges increases rapidly as the upper boundary increases. Hence, if the distance to the nearest neighbours varies greatly between vertices, i.e. the data is irregularly spaced, this requirement will result in high degree heterogeneity between the vertices <cit.>. The KNN network per construction connects each county with its k nearest neighbours, obtaining a homogeneous degree distribution with little variability around the average degree k. Consequently, the KNN network cannot account for heterogeneity in connectivity between counties, e.g. due to areas with higher population exchange or higher COVID-19 transmission rate. The Complete network assumes homogeneous mixing between the counties <cit.>. The validity of such assumption for pandemic phases with movement restrictions can be questioned while it can be considered suitable for phases with free movement. 
Despite the low popularity of train traveling in Ireland, even the railway-based network seems to capture relevant population flow between counties informing the spread of COVID-19[Popularity of train travel is measured in the modal split of passenger transport (MSPT) for railway, which is defined as the percentage share of passenger-kilometres travel by train compared to the total inland passenger transport, measured in passenger-kilometres <cit.>. Passenger-kilometres denote the product of travel distance times travelers, i.e. two passengers traveling 5km in a car amount to 10 passenger-kilometres <cit.>. The car remains the most popular mode of transport in Ireland, accounting for a MSPT of 81.8% <cit.>. The MSPT of train travel lies at 3.3%, below the EU28 average <cit.>, <cit.>. The general usage of public transport in Ireland also dramatically decreased during COVID-19. For July 2022, Google Mobility data registers a 11% decrease in public mobility for the public transport sector <cit.>. One can assume the reduction was even greater during periods with extreme COVID-19 incidences and correspondingly strict restrictions.]. The Delaunay triangulation networks as well as its induced networks, the Gabriel, Relative neighbourhood and SOI network, suffer from low interpretability, given their general geometrical approach to constructing neighbourhoods. The construction approach bases on a certain geometrical understanding of proximity between vertices which can be criticised as arbitrary because it is not contextually validated <cit.>, <cit.>, <cit.>, <cit.>, <cit.> . For disease modelling, networks ideally mirror the reality of disease transmission through population flow <cit.>. The Queen's contiguity network connects all geographically neighbouring counties. The Economic hub model strives to capture additional human movement, e.g. commuting, by expanding the Queen's contiguity networks with edges to the nearest economic hub. §.§ Choice between ARIMA and GNAR In general, ARIMA models are considered useful for short term predictions of the spread of epidemics <cit.>. Here we find that GNAR models outperform ARIMA models for both pandemic phases. The ARIMA models are fitted to each county individually, requiring a large number of parameters compared to global-α GNAR models. In a trade-off between predictive accuracy and number of parameters we would tend to prefer the constructed global-α GNAR models over ARIMA models. High dimensionality in model coefficients can result in high instability in the form of large standard deviation <cit.>. In GNAR models, the model parametrisation relies on the underlying networks. This guarantees parsimonious models and provides a model inherent dimensionality reduction <cit.>. In general, GNAR models seem more suited to modeling COVID-19 incidence if inter-county movement is restricted. The residuals for the restricted phases can be considered Gaussian. According to the performed Kolmogorov-Smirnov tests, the residuals for the unrestricted phases as well as for the GNAR model fit to the entire COVID-19 data disregarding the different phases deviate from the Gaussian distribution. The parameter estimates for the GNAR model require independent error terms with a bounded fourth moment to be consistent and asymptotically normal. 
These requirements are obviously fulfilled by Gaussian white noise[For the Gaussian assumption, consistency and asymptotic normality follow easily from the equivalence between the EGLS estimator and Maximum Likelihood estimator <cit.>.]. If the error term is not Gaussian and does not fulfill the conditions, alternative, computationally more intensive approaches, such as the Newton-Raphson method <cit.> or the Iteratively Reweighted Least Squares method <cit.>, should be implemented, in particular to correctly draw inference. In light of the discrete nature of the COVID-19 case counts and the challenge of structural misreporting, alternative error structures, e.g. according to a Poisson process, are plausible <cit.>. To the best of our knowledge, methods to estimate model coefficients while assuming such error structures in the context of GNAR models have not yet been developed. §.§ Interpreting the model fit The fitted GNAR models have different coefficients for the restricted and for the unrestricted phase. The effect of COVID-19 restrictions is not systematically detectable in the β-order, e.g. restricting inter-county travel leading to lower stage neighbourhoods or even β = 0. However, the change of restrictions is reflected in the change in values of the α- and β-order coefficients in the subset-specific models. Nevertheless, GNAR models seem unsuited to quantify the effectiveness of COVID-19 regulations to the same extent as alternative COVID-19 models do which are more tailored to address this effectiveness, e.g. <cit.>, <cit.>, <cit.>. The best performing GNAR models are more complex, i.e. have larger α- and β-order, than the GNAR models commonly implemented, e.g. in <cit.>, <cit.> and <cit.>. It is possible that the higher orders hint at stronger temporal and spatial dependence. The fitted GNAR models benefit from leveraging values further back in history as well as the historic values of neighbours. The spatial dependence decreases over time, leading to smaller stage neighbourhoods for larger lags. For model selection, the BIC and MASE do not agree in their consequences, in particular regarding the comparison between GNAR and ARIMA model. This observation emphasises the importance of choosing a criterion for model selection which corresponds to the desired task. To predict the COVID-19 incidence, the MASE contains more information to assess model accuracy. The BIC value weighs the likelihood of the data given a certain parametrisation of the GNAR model against the number of parameters fitted in the model. The BIC computation in the GNAR package assumes the error term, and hence X_t, to be Gaussian <cit.>. As detailed in Section <ref>, we could not validate this assumption for the unrestricted phase and the deviation from the assumption might explain the poor performance of the BIC in identifying models with high predictive accuracy. To disentangle the performance of the model from violated model assumptions, we carried out a simulation study, in which data was generated on a well defined network with Gaussian error with mean zero and unit variance, using GNAR-5-2111 model on the Queen's contiguity network and SPL weighting. To further assess the stability of the results, in Supplementary Material <ref> data are simulated from the best-fitting GNAR model for the restricted data set. However we found that the GNAR model could not accurately reconstruct the coefficients, see Tables <ref> and <ref> in Supplementary Material <ref>. 
This finding points to the need to further investigate possible limitations of GNAR models. § CONCLUSION In general, a network model could can be a powerful tool to inform the spread of infectious diseases, see for example <cit.> and <cit.>. This paper analyses how accurately GNAR models relying on differently constructed networks predict COVID-19 cases across Irish counties. While we do not assume that the disease only spreads along the network, we consider the edges to represent the main trajectory of the infection. The analysis shows that no network is consistently superior in predictive accuracy. GNAR models seem relatively robust to the exact architecture of the network, as long as it lies within a certain density range. Our analysis suggest that unrestricted pandemic phases require dense networks, while restricted pandemic phases are better modelled by sparse networks. For our data, the GNAR model is better suited to model the pandemic phases with strict movement regulations compared to those with more lenient or no restrictions. There are some caveats relating to the model. First, COVID-19 is also subject to “seasonal” effects, e.g. systematic reporting delays due to weekends and winter waves <cit.>, <cit.>, <cit.>. The GNAR model does not have a seasonal analogue which can incorporate seasonality in data, like SARIMA for ARIMA models <cit.>. Future work might introduce a seasonal component to the GNAR model, improving its applicability to infectious disease modelling. Moreover, the COVID-19 pandemic had a strong influence on mobility patterns <cit.>, <cit.>, in particular due to restrictions of movement and an increased apprehension towards larger crowds. Considering only static networks may introduce a bias to the model <cit.>, <cit.>, <cit.>. Future work could therefore explore how GNAR models can include dynamic networks to incorporate a temporal component of spatial dependency. Regarding the theory of GNAR models, alternative error distributions, in particular a Poisson distributed error term, could be explored given the indication of non-Gaussian residuals for the unrestricted pandemic phase. Alternative weighting schemes for GNAR models could be investigated to account for differences in edge relevance across time and network. The stability of parameter estimation in GNAR models also warrants further investigation. The network constructions themselves could also be refined. Future analysis could focus on more content-based approaches to constructing networks, e.g. building a network based on the intensity of inter-county trade, computed according to the gravity equation theory <cit.>. To our knowledge, neither GDP nor trade data is reported in Ireland on a county level. Many researchers have successfully modelled the initial spread of COVID-19 from Wuhan across China based on detailed mobility patterns, e.g. <cit.>, <cit.>. Such models require more detailed information on population flow and the origin of COVID-19 infections (e.g. local infection, domestic or international travelling). Partly due to data protection and privacy rights, comparable data of human movement on a level as granular as the Chinese is not publicly available <cit.>, <cit.>. The performance of network based statistical models is hence also determined by the availability - or unavailability - of sufficiently detailed data <cit.>. Finally, in our statistical analysis the information about the dominant strain was not included. 
With COVID-19 being an evolving disease, it is possible that different strains may display different transmission patterns. If more detailed data become available then this question would also be of interest for further investigation. Acknowledgements. G.R. acknowledges support from EPSRC grants EP/T018445/1, EP/X002195/1, EP/W037211/1, EP/V056883/1, and EP/R018472/1. comnet Supplementary Material § COVID-19 NETWORKS §.§ The constructed networks The COVID-19 networks are constructed according to either a geographical understanding or a statistical approach to neighbourhood. Figure <ref> in the main text shows the Railway-based network and the Queen's continguity network. Here we show the remaining networks, with parameter k=21 for the KNN network, and d=325 for the DNN. Figures <ref> and <ref> show the constructed COVID-19 networks, grouped by the geographical definition of neighbourhoods using the Great Circle distance, and by the geometric definition of neighbourhoods based on triangulations. §.§ Assessing the scale-free property for fitted COVID-19 networks The scale-free property of the COVID-19 networks is determined by the adjusted log-log plot after <cit.> which should cover at least three orders of magnitude to be considered adequate proof <cit.>. This is not the case for the Economic hub, the Delaunay triangulation, the SOI, the KNN and the DNN network. All remaining networks show a non-linear plot and hence scale-dependent behaviour, see Figure <ref>. This conclusion is corroborated by the R^2 values from a least-squares fit to the log-log plot. For a scale-free networks, the value for R^2 should lie above 0.99<cit.>, <cit.>; none of our networks satisfy this criterion. §.§ Assessing the network effect via Moran's I Figure <ref> in the main text shows Moran's I across time for the KNN network and the Railway-based network. Here in Figure <ref>, we provide the results for the other networks. For the Delaunay triangulation, Gabriel, SOI and Relative neighbourhood network, Moran's I follows a similar pattern, due to the induced nature of the networks. The Queen's and Economic hub network show different trajectories in Moran's I despite their similarity. § GNAR MODELS § DEFINITIONS AND A SHORT INTRODUCTION TO TIME SERIES ANALYSIS The temporal relationship within a time series is measured by the autocovariance (or autocorrelation), i.e. the covariance (or correlation) between X_t_1 and X_t_2 at some time points t_1 and t_2, observed for the same statistical unit (w.l.o.g. t_1 < t_2), AC(X_t_1, X_t_2) = ℂov(X_t_1, X_t_2) = 𝔼( (X_t_1 - μ) (X_t_2 - μ) ) ACor(X_t_1, X_t_2) = ℂorr(X_t_1, X_t_2) = 𝔼( (X_t_1 - μ) (X_t_2 - μ) )/√(𝕍(X_t_1))√(𝕍(X_t_2)) . Stationarity implies that, as time progresses, the distribution of the observations converges to a certain distribution, i.e. is independent of time. Weak stationarity or Covariance-stationarity is defined by a time-independent, constant mean and a time-independent covariance whose values only depends on the size of the lag k, not on time t <cit.>, ∀ t ∈{1, ..., T}, k ∈{0, ..., T-t } 𝔼(X_t) = μ ℂov(X_t, X_t + k) = γ_k . A time series is strictly stationary if the joint distribution for (X_t, X_t + k_1, ..., X_t + k_n) is independent of time t and only depends on the time intervals k_i between subsequent observations <cit.>. §.§ Further background on network-based time series models Network time series models expand multivariate time series by incorporating non-temporal dependencies. Such dependencies are represented by a network. 
An edge indicates a certain relationship between the variables which are represented by the two vertices. The output variable at each vertex is modelled to depend on its own past values as well as the past values of its neighbouring vertices which are determined by the network underlying the model. A myriad of methods for multivariate time series analysis exist <cit.>. Spatial Autoregressive and Moving Average models The development of network time series models has drawn inspiration from models for spatial observations. <cit.> developed the spatial autoregressive model (SAR) which models the value at a certain vertex as a weighted average across a vertex-specific re-defined location set and an additive random error. Such model can be expanded to a mixed regressive-autoregressive model by incorporating exogenous variables Y. <cit.> utilises this model to account for "the geography of social phenomena" (<cit.>, p. 359-360). <cit.> also introduces an alternative model relying on autoregression in the error term. <cit.> extends these models to spatial network autoregressive models (SARMA) that include a weight matrix W which encodes the spatial dependencies; see <cit.> for choices of W. These models have been explored in numerous papers (e.g. <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>). However, all above mentioned models do not take any time dependence into account, see <cit.>, and require Gaussian error term with homoscedastic variance. Temporal extensions The m-STAR model by <cit.> models a spatial as well as a temporal dependence, restricted to 1-lag autoregression. The sets of spatial weights represent network interdependence between vertices on different contextual levels. The spatial temporal conditional autoregressive model (STCAR) is a continuous Markov random field with a Gaussian conditional probability density function. Its distributions rely on a space-time autoregressive matrix which accounts for both spatial and temporal dependencies. A very general model choice is the vector autoregression model (VAR). A VAR model regresses a vector at time on its values at previous time points, according to its order which is restricted by the size of the data set <cit.>, <cit.>. While general VAR models can model data which are represented by networks, they would typically include one parameter per edge in the network and are hence often too large to be of practical use. Hence specifications to network data have recently been developed, as follows. Network autoregressive models For network time series, <cit.> established a particular VAR model, the network autoregression model. It includes an intercept and exogenous variables while time dependence is restricted to lag 1. Network autoregression follows the tradition of SMA and SAR models but incorporates historic values of the vertex in question and its 1^st-stage neighbourhood. To ensure less sensitivity to outliers in the data and to incorporate heteroscedasticity, the paper <cit.> expanded the model to network quantile autoregression <cit.>. The paper <cit.> further develops the grouped network autoregressive model (groupNAR)[The paper <cit.> uses the abbreviation "GNAR". To avoid misunderstandings, we refer to the grouped network autoregressive model from <cit.> as "groupNAR" and to the model by <cit.> as "GNAR".] which breaks the homogeneity by attributing each vertex to a group and estimating group-specific coefficients. The classification of the vertices is learned simultaneously with the parameter estimation. 
The number of classes has to be pre-specified <cit.>. Similarly to the 1-stage network autoregressive model, the network autoregression model NAR(p,s) includes historic values of the observation itself as well as of its neighbours while assuming stationarity and spatial network homogeneity, with a general choice of lags and of neighbourhood stages. The network autoregressive (integrated) moving average models (NARIMA) resemble restricted vector autoregressive (VAR) models with restrictions reflecting the underlying network, <cit.>, <cit.>. NARIMA models facilitate dimensionality reduction[This reduces the computational complexity for calculating model coefficients from 𝒪(n^2) for VAR models to 𝒪(n) for NARIMA models.] and allow great flexibility in modelling spatial and temporal dependencies <cit.>, <cit.>. A NARIMA model consists of a NAR component and a moving average component of order q (MA(q)), ∑_l = 1^q η_l ε_i, t-l: X_i, t = ∑_j = 1^p ( α_j X_i, t-j + ∑_r = 1^s_j∑_q ∈ N^(r)(i)β_j, r X_q, t-j) + ∑_l = 1^q η_l ε_i, t-l + ε_i, t . The relevance of vertices is acknowledged by vertex-specific weights <cit.>. The choice of weights strongly depends on the scenario and context the model is applied to <cit.>, X_i, t = ∑_j = 1^p ( α_j X_i, t-j + ∑_r = 1^s_j∑_q ∈ N^(r)(i)β_j, rω_i, q X_q, t-j) + ∑_l = 1^q η_l ε_i, t-l + ε_i, t . The gNARIMA model generalises (<ref>) by incorporating time dependent weights ω_i, q, t which allow excluding some of the vertices some of the time <cit.>. None of the above models incorporate exogenous variables, and they can only integrate one network <cit.>, <cit.>. The GNAR model (<ref>) used in this paper is an adaptation of the gNARIMA model (<ref>) with autoregressive coefficients α_i,j which are allowed to depend on the vertex itself, but without a moving average component.

§.§ The GNAR model in matrix form, and generalised least squares estimation We can rewrite the GNAR model (<ref>) in matrix form X = B Z + ε with X = [X_p+1, ..., X_T] and Z = [Z_p, ..., Z_T-1], where Z_t^T = [X_t, ..., X_t-p+1]. The matrix B = [ϕ_1, ..., ϕ_p] summarises the α- and β-coefficients in the form ϕ_j = diag(α_1, j, ..., α_N, j) + ∑_r = 1^s_jβ_j, r W^(r), where W^(r) denotes the weight matrix with [W^(r)]_l, m = ω_l, m·𝕀(m ∈ N^(r)(l)). The random error matrix is expressed by ε = [ε_p+1, ... , ε_T], where the vector ε_t = (ε_1, t, ..., ε_N, t) ∼ (0, Σ_ε) is i.i.d. with variance Σ_ε = σ^2 · I_N × N <cit.>. GNAR implies restrictions R ∈ℝ^pN^2 × M on the parametrisation, vec(B) = ϕ = R γ, where vec(B) describes the reformatting of matrix B into a vector by stacking its columns, the vector ϕ denotes the unrestricted coefficient vector for a VAR model and the vector γ consists of the M unrestricted parameters. For a vertex-specific GNAR model, M = N p + ∑_j = 1^p s_j and for a global-α model, M = p + ∑_j = 1^p s_j <cit.>. The Least Squares (LS) estimation finds ϕ̂ such that it minimises the sum of squares <cit.>, ϕ̂ = argmin_ϕ tr( (X - BZ)^T Σ_ε^-1 (X - BZ) ) = ( (Z Z^T)^-1 Z ⊗ I_N ) · vec(X) , where ⊗ denotes the Kronecker product. For the re-parametrisation imposing the constraints in (<ref>), the Generalised Least Squares (GLS) estimation computes γ̂ = ( R^T (ZZ^T ⊗Σ_ε^-1) R )^-1 R^T (Z ⊗Σ_ε^-1 ) · vec(X) <cit.>, <cit.>. The GLS estimator is consistent and asymptotically follows a normal distribution if {X_t}_t is stationary and ε_t a standard white noise process. It is identical to the Maximum Likelihood estimator if we assume ε_t to be Gaussian <cit.>.
However, the GLS estimator requires knowledge of the error covariance matrix Σ_ε which is usually unknown. The Estimated GLS estimator (EGLS) substitutes the true covariance matrix in the estimation (<ref>) with a consistent estimator Σ̂_ε which converges to Σ_ε in probability as n →∞ <cit.>, γ = ( R^T (ZZ^T ⊗Σ̂_ε^-1) R )^-1 R (Z ⊗Σ̂_ε^-1 ) vec(X) . The matrix R^T (ZZ^T ⊗Σ̂_ε^-1) R must be non-singular which holds with probability 1 for continuous X_t <cit.>. The estimate γ is consistent and asymptotically normal if ε is standard white noise. For a stationary time series with standard white noise, the EGLS estimate (<ref>) and GLS estimate (<ref>) are asymptotically equivalent <cit.>. Under stationarity and standard white noise error, a possible choice for a consistent estimator Σ̂_ε is Σ̂_ε = 1/T (X - B̂Z) (X - B̂Z)^T where B̂ follows from the unconstrained LS estimate (<ref>) and a corresponding transformation to obtain matrix B̂ <cit.>. We obtain B by inserting the EGLS estimate for γ into (<ref>) <cit.>, vec(B) = R γ . §.§ Choices for the order of the model We iterate through all possible combinations of α-order p ∈{1, ..., 5} and β-order: [0], [1], [1, 0], [1, 1], [2, 1], [2, 2], [1, 0, 0], [1, 1, 0], [1, 1, 1], [2, 1, 1], [1, 0, 0, 0], [1, 1, 0, 0], [1, 1, 1, 0], [1, 1, 1, 1], [2, 1, 1, 1], [2, 2, 1, 1], [2, 2, 2, 1], [1, 0, 0, 0, 0], [1, 1, 0, 0, 0], [1, 1, 1, 0, 0], [1, 1, 1, 1, 0], [1, 1, 1, 1, 1], [2, 1, 1, 1, 1], [2, 2, 1, 1, 1] and [2, 2, 2, 1, 1]. For the BIC selection of the best model for each data subsets, the range of β-order is expanded to include the following additional vectors: [2, 0, 0] - [5, 0, 0], [2, 1, 0] - [5, 1, 0], [2, 2, 0], [4, 1, 1], [5, 1, 1], [2, 2, 1], [2, 2, 2] - [5, 2, 2], [2, 0, 0, 0] - [5, 0, 0, 0], [2, 1, 0, 0] - [5, 1, 0, 0], [2, 2, 0, 0], [2, 1, 1, 0] - [5, 1, 1, 0], [2, 2, 1, 0], [2, 2, 2, 0] - [5, 2, 2, 0], [2, 1, 1, 1] - [5, 1, 1, 1], [3, 2, 2, 1] - [5, 2, 2, 1], [2, 2, 2, 2], [2, 0, 0, 0, 0] - [7, 0, 0, 0, 0], [2, 1, 0, 0, 0] - [7, 1, 0, 0, 0], [2, 2, 0, 0, 0], [2, 1, 1, 0, 0] - [7, 1, 1, 0, 0], [2, 2, 1, 0, 0], [2, 2, 2, 0, 0], [2, 1, 1, 1, 0] - [7, 1, 1, 1, 0], [2, 2, 1, 1, 0], [2, 2, 2, 1, 0], [2, 2, 2, 2, 0] - [7, 2, 2, 2, 0], [3, 1, 1, 1, 1] - [7, 1, 1, 1, 1], [3, 2, 2, 1, 1] - [7, 2, 2, 1, 1], [2, 2, 2, 2, 1], [2, 2, 2, 2, 2]. In addition to the BIC, a second model selection criterion is the Akaike Information Criterion (AIC); it works similarly to the BIC but penalises the number of parameters differently <cit.>, <cit.>; AIC(k, n) = 2 k - 2 ·log(L(X; θ)) . The AIC is not necessarily consistent (<cit.>, Corollary 4.2.1) and therefore the BIC is preferred for model selection <cit.>. We found the AIC values very similar to the BIC values in our data analysis and hence do not report them in the main text. § STATIONARITY IN COVID-19 SUBSETS A Box-Cox transformation of the data can often improve stationarity. For dataset 2, 3 and 4, the optimal λ values are close to 1. For dataset 1 and 5, the value lies around λ≈ 1.5. For the sake of interpretability and comparability between the subsets, we round λ = 1 for all datasets. § TRAINING GNAR MODELS §.§ Selecting optimal GNAR models across networks The models are fitted according to three weighting schemes: (1) shortest path length (SPL), (2) inverse distance weighting (IDW), (3) inverse distance and population density weighting (PB). Table <ref> shows the results for the combined data set. Overall, the best performing models have comparatively large α-order and at least a 1^st-stage neighbourhood. 
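The BIC values in Table <ref> are obtained from criterion (<ref>) under the Gaussian error assumption of the GNAR package. The following self-contained sketch illustrates the mechanics of such a criterion-based order search on a simple autoregression; it is an illustration of the criterion only, not the GNAR package computation, and the simulated series and candidate orders are placeholders.

import numpy as np

def gaussian_bic(resid, k):
    """BIC(k, n) = k log(n) - 2 log L for an i.i.d. Gaussian error model,
    with the error variance estimated from the residuals."""
    n = resid.size
    sigma2 = np.mean(resid ** 2)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return k * np.log(n) - 2.0 * loglik

def fit_ar(x, p):
    """Least-squares AR(p) fit; returns coefficients and residuals."""
    Z = np.column_stack([x[p - j - 1: len(x) - j - 1] for j in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return coef, y - Z @ coef

rng = np.random.default_rng(1)
x = np.zeros(300)
for t in range(1, 300):                      # simulate an AR(1) with coefficient 0.6
    x[t] = 0.6 * x[t - 1] + rng.normal()

for p in (1, 2, 5):
    coef, resid = fit_ar(x, p)
    print(p, round(gaussian_bic(resid, k=p), 1))   # the lowest BIC should favour p = 1

In the GNAR setting, the same comparison is carried out over the combinations of α-order, β-order, weighting scheme and network listed above.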
The KNN network (k = 21) with SPL weighting achieves the absolute lowest BIC (BIC = 194.75), followed by the DNN network (d = 325; BIC = 194.91) and the Complete network (BIC = 194.93). The best performing sparser network is the Delaunay triangulation network (BIC = 196.16). As a general trend, it is noticeable that models with larger β-order are preferred for sparser networks and models with smaller β-order for denser networks, indicating a certain trade-off between parameter "complexity" and network density. For the α-order, the denser networks (KNN, DNN, Complete and Economic hub network) have a larger α = 5 than the sparser networks. The SPL weighting outperforms both alternative weighting schemes for every network.

Table <ref> indicates that the GNAR models have an overall better fit for the restricted data set than for the unrestricted data set. The best performing GNAR models tend to require higher neighbourhood stages for time lag ρ = 1 for the unrestricted data set, compared to the restricted data set, suggesting a focus of spatial dependence in COVID-19 ID on the immediate past during unrestricted / less restricted pandemic periods. For the restricted data set, the spatial dependence reaches further into the past. The large α values for each model indicate a strong temporal dependence in COVID-19 ID. From Figure <ref> we observe that networks with high density obtain smaller BIC values than sparse networks for the unrestricted data set, with a minimum for the third densest network, the KNN network. For the restricted data set, the optimal BIC values decrease as the network density increases, reach their minimum for the Queen's contiguity network, and increase again for denser networks.

§.§ GNAR coefficient development for the optimal model: restricted versus unrestricted phase

The GNAR model GNAR(α = 5, β = (1, 1, 1, 1, 0)) for the KNN network k = 21, optimal on the entire data set, is fitted on the data for the restricted and unrestricted phases. The absolute values of the α and β coefficients increase for the unrestricted pandemic phase compared to the restricted phase.

§.§ Prediction with GNAR models

The predictive accuracy of the best performing model, fitted on the entire COVID-19 data set for each COVID-19 network, is measured by predicting the lag-1 COVID-19 ID for a time period of 10 weeks and computing the corresponding weekly mean absolute scaled error (MASE) <cit.>, <cit.>, <cit.>. Figures <ref> include the MASE for the county-specific ARIMA(2,1,0) models as a comparative benchmark. The ARIMA models are comparable to any GNAR model in predictive accuracy. No network performs visibly better than any other over space and time. The predictive performance for the Relative neighbourhood, SOI, Gabriel and Delaunay triangulation networks is almost identical, which may have been expected due to their similar network characteristics. While the general fit may not be unreasonable, we find that GNAR models across networks do not pick up on the many peaks and dips in the COVID-19 data, as evident from plotting the predicted COVID-19 ID for each network against the true COVID-19 ID in Figures <ref> and <ref>. The figures also include predictions based on the ARIMA model, fitted to each county separately. The predictive accuracy of the ARIMA models does not systematically surpass that of the GNAR models.

§.§ Gaussianity of residuals for GNAR models

The Gaussianity of the residuals may be statistically tested with a Kolmogorov-Smirnov test.
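As a pointer to how such a check might be run in practice, the sketch below applies a Kolmogorov-Smirnov test to a vector of GNAR residuals using scipy. Standardising with the sample mean and standard deviation is an assumption, and a Lilliefors-type correction would be needed for exact p-values when these parameters are estimated from the same residuals.

```python
import numpy as np
from scipy import stats

def residual_gaussianity(residuals, alpha=0.05):
    """KS test of residuals against a normal distribution fitted by moments.

    Returns the KS statistic, the p-value and a flag indicating whether
    Gaussianity is rejected at level `alpha`.
    """
    resid = np.asarray(residuals, dtype=float)
    mu, sigma = resid.mean(), resid.std(ddof=1)
    stat, p_value = stats.kstest(resid, "norm", args=(mu, sigma))
    return stat, p_value, p_value < alpha

# Hypothetical usage, one residual vector per county:
# stat, p, reject = residual_gaussianity(gnar_residuals["Dublin"])
```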
Table <ref> summarizes the fit of the GNAR model for the restricted and unrestricted data set for each county, including the p-value for the Kolmogorov-Smirnov test. The conclusions based on the Kolmogorov-Smirnov tests are verified by inspecting QQ-plots. The residual QQ-plots, which we show for the county Dublin as an illustration, indicate non-Gaussian residuals for the GNAR models across all networks, see Figure <ref>. This conclusion is corroborated when applying the Kolmogorov-Smirnov test across counties. The residual variance is low for time periods with low 1-lag COVID-19 ID and high for time periods with high 1-lag COVID-19 ID. Thus, there may be some heteroscedastic noise in the data. In contrast to the combined data set, when separating the data set into restricted and unrestricted phases of the pandemic, the QQ-plots for each GNAR model indicate Gaussian residuals for the restricted pandemic phases and residuals which deviate from the normal distribution for the unrestricted pandemic phases. The QQ-plots are shown in Figure <ref> for the counties Dublin, Carlow and Cavan; the remaining counties show similar residual plots.

§ SIMULATION

To establish how well the GNAR model can in general reconstruct the data-generating process, data is simulated according to the GNAR-5-21111 model on the Queen's contiguity network. We choose s = (2, 1, 1, 1, 1), an i.i.d. error term ε_i,t ∼ N(0, σ^2) with σ^2 = 0.01 and σ^2 = 0.001, and SPL weights, i.e. ω_i,q = 1/|N^(r)(i)|, where |N^(r)(i)| denotes the number of vertices in the r^th-stage neighbourhood of i. The first 5 time points are initialized as X_i,t ∼ N(10, σ^2) i.i.d., and the outcome X_i for county i and timesteps t = 6, …, 1000 follows from

X_i,t = ∑_j=1^5 ( α_i,j X_i,t-j + ∑_r=1^s_j ∑_q ∈ N^(r)(i) β_j,r ω_i,q X_q,t-j ) + ε_i,t .

The simulated data is leveraged to train a GNAR model on the Queen's contiguity network. Tables <ref> (σ^2 = 0.01) and <ref> (σ^2 = 0.001) compare the true and estimated GNAR model coefficients. The estimated confidence intervals contain the true coefficient value for none of the coefficients in the setting σ^2 = 0.01 and for only one coefficient, α_4, in the setting σ^2 = 0.001. Despite the small noise, the GNAR model is incapable of correctly detecting the temporal and spatial dependence in the simulated data.
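For reference, a minimal sketch of this data-generating process is given below. The containers for the coefficients and the neighbourhood structure are hypothetical (in the study itself they come from the Queen's contiguity network and the chosen GNAR-5-21111 coefficients); the recursion follows the equation above with SPL weights ω_i,q = 1/|N^(r)(i)|.

```python
import numpy as np

def simulate_gnar(neighbours, alpha, beta, s=(2, 1, 1, 1, 1),
                  sigma2=0.01, T=1000, seed=0):
    """Simulate the GNAR process used in the simulation study.

    Assumed layout: `neighbours[r][i]` lists the vertices in the r-th stage
    neighbourhood of vertex i, `alpha[i][j-1]` is alpha_{i,j}, and
    `beta[j-1][r-1]` is beta_{j,r}.
    """
    rng = np.random.default_rng(seed)
    n, p = len(alpha), len(s)
    X = np.empty((n, T))
    X[:, :p] = rng.normal(10.0, np.sqrt(sigma2), size=(n, p))  # initial values
    for t in range(p, T):
        eps = rng.normal(0.0, np.sqrt(sigma2), size=n)
        for i in range(n):
            val = 0.0
            for j in range(1, p + 1):                # lags 1..5
                val += alpha[i][j - 1] * X[i, t - j]
                for r in range(1, s[j - 1] + 1):     # neighbourhood stages
                    nbrs = neighbours[r][i]
                    if nbrs:                         # SPL weight 1/|N^(r)(i)|
                        w = 1.0 / len(nbrs)
                        val += beta[j - 1][r - 1] * w * sum(X[q, t - j] for q in nbrs)
            X[i, t] = val + eps[i]
    return X
```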
http://arxiv.org/abs/2307.04907v1
20230710211646
SimpleMTOD: A Simple Language Model for Multimodal Task-Oriented Dialogue with Symbolic Scene Representation
[ "Bhathiya Hemanthage", "Christian Dondrup", "Phil Bartie", "Oliver Lemon" ]
cs.CL
[ "cs.CL", "cs.LG" ]
Planar Curve Registration using Bayesian Inversion [ ================================================== SimpleMTOD is a simple language model which recasts several sub-tasks in multimodal task-oriented dialogues as sequence prediction tasks. SimpleMTOD is built on a large-scale transformer-based auto-regressive architecture, which has already proven to be successful in uni-modal task-oriented dialogues, and effectively leverages transfer learning from pre-trained GPT-2. In-order to capture the semantics of visual scenes, we introduce both local and de-localized tokens for objects within a scene. De-localized tokens represent the type of an object rather than the specific object itself and so possess a consistent meaning across the dataset. SimpleMTOD achieves a state-of-the-art BLEU score (0.327) in the Response Generation sub-task of the SIMMC 2.0 test-std dataset while performing on par in other multimodal sub-tasks: Disambiguation, Coreference Resolution, and Dialog State Tracking. This is despite taking a minimalist approach for extracting visual (and non-visual) information. In addition the model does not rely on task-specific architectural changes such as classification heads. § INTRODUCTION Multimodal conversational agents have witnessed a rapidly growing level of interest among the conversational AI community as well as within the computer vision community. Most multimodal conversational datasets to-date are an extension of visual question answering (VQA) <cit.>. Consequently building upon the success of other visio-linguistic tasks such as VQA, state-of-the-art multimodal conversational agents commonly depend on non-autoregressive models <cit.> most of which are based on BERT <cit.>. However, dialogues with such systems significantly differ from what the conversational AI community has typically viewed as a multi-turn dialogue. First, most of the current multimodal dialogue datasets are focused on querying the visual content whereas external knowledge bases have been an integral part of traditional unimodal dialogue datasets <cit.>. Second, in traditional unimodal dialogues, co-reference resolution (explicitly or implicitly) plays a major role within the dialogues. Additionally, state-of-the-art unimodal conversational agents predominantly rely on GPT-based auto-regressive models <cit.> due to their proven language generation capabilities <cit.>. The SIMMC 2.0 <cit.> task-oriented dialogue dataset bridges this gap between multimodality and the more traditional view of a multi-turn dialogue. Due to the simultaneous presence of signals from multiple modalities, which a user can refer to at any point in the conversation, the multimodal task-oriented dialogues proposed in the SIMMC 2.0 are challenging compared to both text-only counterparts and image querying dialogue datasets. In spite of the inherent complexity of multimodal dialogues, we propose SimpleMTOD, recasting all sub-tasks into a simple language model. SimpleMTOD combines the idea of 'de-localized visual object representations' with a GPT-like auto-regressive architecture. The idea of de-localized representations stems from the analogous process of de-lexicalization that has been extensively used in task-oriented dialogues. In de-lexicalization <cit.>, slot-values such as vegan are replaced by a more general abstracted token such as food-type. Likewise, when de-localized, objects are represented by the catalogue type of the object instance rather than the instance itself. 
These de-localized tokens then possess a consistent meaning throughout the dataset. Along with the dataset, <cit.> propose four benchmark tasks decomposing multi-modal task oriented dialogue into sub-tasks: Multimodal Disambiguation, Multimodal Co-reference Resolution, Multimodal Dialog State Tracking, and Response Generation. The first three tasks deal with the dialogue context understanding, analogous to NLU and DST in unimodal agents. The last task is similar to unimodal NLG, but expects the generated responses to be sensible within a multimodal context with visual signals and associated knowledge base. The main objective this work is to evaluate the effectiveness of de-localized object representations within SimpleMTOD. Despite the simplicity, SimpleMTOD achieves the state-of-the-art BLEU score of 0.327 for assistant response generation in the SIMMC2.0 test-std [The testing dataset (test-std) is not publicly available and was part of the SIMMC 2.0 challenge used for scoring the submitted systems.] dataset . Furthermore, the model achieves an accuracy of 93.6% in Multimodal Disambiguation (MM-Disambiguation), Object-F1 of 68.1% in Multimodal Co-reference Resolution (MM-Coref), and 87.7% (Slot-F1) and 95.8 (Intent-F1) in Multimodal Dialogue State Tracking (MM-DST). Other than the proposed benchmark settings, we also evaluate SimpleMTOD in an end-to-end setting. Major contributions of our work are as follows: * We formalise notion of multimodal task oriented dialogues as an end-to-end task. * We propose a GPT-based simple language model combined with visual object de-localization and token based spatial information representation, that addresses four sub-tasks in multimodal dialogue state tracking with a single architecture. * We analyse the behaviour of our model using salience scores from the Ecco <cit.> framework, which provide an intuition into which previous token mostly influence predicting the next token. § BACKGROUND Traditional task-oriented dialogue datasets consist of a dialogue corpus, a dialogue ontology with a pre-defined set of slot-value pairs, and annotations required for related sub-tasks in a set of domains <cit.>. The SIMMC 2.0 dataset follows a similar structure and contains dialogues in both the fashion and the furniture domains. However, in the SIMMC 2.0 multimodal dialogue corpus, each dialogue is also associated with an image representing the scene where each dialogue takes place. A scene is made by re-arranging a known set of items (objects) in different configurations. Along with the raw-image, the dataset provides a file (scene JSON) containing details of the images such as objects and relationships between objects. Furthermore, a meta-data file contains visual and non-visual attributes of objects that recur within a scene. §.§ Benchmark Tasks Multimodal Disambiguation: In real-world conversations, references made by humans related to objects or entities can be ambiguous. For example, consider A: Blue trousers are priced at $149.99. U: What about the red ones?, in a setting where there are multiple red trousers. In these situations, there is insufficient information available for co-reference resolution. This task is aimed at identifying such ambiguous scenarios, given the dialogue history. Multimodal Co-reference Resolution: The goal of this task is to resolve any reference in a user utterance to canonical object ids of the object as defined per each scene (see image in Figure <ref>). Users may refer to 1) dialogue context 2) visual context, or 3) both. 
Mutltimodal Dialogue State Tracking: Similar to unimodal DST, this tracks the belief states of users across multiple turns. The belief state consists of an intent, slot-value pairs, and user requested slots. Assistant Response Generation Given the user utterance, ground-truth APIs, and ground-truth cannonical object ids (with meta-data), the model needs to generate a natural language response describing objects as observed and understood by the user. § METHODS In the first part of this section, we model multimodal task oriented dialogues as a sequence generation task. We define the problem in a more general setup and discuss some empirical limitations applied to the model. §.§ Multimodal Task-Oriented Dialogues Similar to unimodal setting, we view dialogue state (belief-state) tracking, action prediction, and response generation to be the core components of multi-modal task-oriented dialogues. However, outputs of each of the sub-tasks should be conditioned not only on the dialogue history, but also on the associated scene. Multimodal dialogues consist of multiple turns. In a turn t, there exists an associated visual scene V_t, the user-provided input U_t and the system-generated response S_t. Theoretically, the dialogue context can be denoted as C_t = [V_0,U_0, S_0|V_0, . . . S_t-1|M_t-1,V_t, U_t]. Here S_t-1|M_t-1 denotes that the statement S_t-1 is associated with the representation of multimodal information such as objects viewed and mentioned to the user during that turn. Given the context, C_t, SimpleMTOD generates the belief-state B_t: B_t = SimpleMTOD(C_t) B_t is a concatenation of intent, slot-values, requested slots, and resolved object references MRef_t. However, it should be noted that, SimpleMTOD models the context as C_t = [V_t, U_t-n, S_t-n|M_t-n, . . . S_t-1|M_t-1,U_t, ] where the n is the context window. Major deviations from the theoretical representation of C_t are, 1) we ignore the history of visual signals and only consider the current visual scene; 2) we consider only n previous turns in contrast to the entire dialogue. Then, in a more generalized setting where the system have access to an external database, which can be queried,B_t would be used to retrieve database results D_t. These D_t along with context and belief states can be used to generate the system action A_t. A_t = SimpleMTOD(C_t, B_t, D_t) Action A_t is a triplet containing system intent, slot-value pairs, and details on requested slots. However, in our setup, no such database exists. Hence we model action A_t from B_t and C_t keeping D_t=∅. Finally, the concatenation of the context, belief state, (database results), and action is used to generate system responses S_t. S_t = SimpleMTOD(C_t, B_t, D_t, A_t) §.§ De-localized Visual Representation Here we discuss how visual information of a scene is represented within the SimpleMTOD as de-localized tokens and how V_t is derived from those tokens. In the SIMMC 2.0 dataset a scene is a spatial configuration of a set of object instances. From here on we will refer to these instances simply as objects. Probable types of these objects are pre-defined in two meta-data files, with one for each domain. We will refer to these files as catalogues and an entry of these catalogues as a catalogue-item. See Figure<ref> for an example catalogue-item with visual and non-visual attributes defined. For benchmark tasks, non-visual attributes can be used during inference while visual attributes are not allowed. 
However, we use neither of these attributes in the SimpleMTOD visual representation explained below. In our setup, we assign a unique token (eg: INV_278) to each catalogue-item. These catalogue-items are used as a de-localized version of objects within a scene. While these catalogue-item tokens are consistent across the entire dataset, spatial relationships associated with the objects will be lost. Therefore we encode spatial details of objects as follows: Each scene is divided into 9 regions as shown in Figure <ref>. Every object is assigned to a region based on the center-point of the object bounding box. Then concatenation of catalogue-item tokens and assigned region description (eg: INV_278@TOP:LEFT) tokens are used as object representations. A scene-description is obtained by concatenating all such tokens representing every object within a scene. This is our V_t in SimpleMTOD. §.§ SimpleMTOD Training and Inference For training, we follow routine causal language modeling with teacher forcing. A training sequence X_t in SimpleMTOD is obtained by concatenating all the components; context,user belief state, database results (which is null in our case), system actions and system utterance. X_t = [C_t, B_t, D_t, A_t, S_t] In terms of tokens, X_t can be denoted as X_t = (x^0_t,x^1_t,....x^n(t)_t) when n(t) represent the number of tokens in turn t. In general, the goal of the model is to learn ρ(X) given X = (x^0,x^1,..x^i..x^n) : ρ(X) = Π_i=1^nρ(x^i|x^<i) For this, we train the neural network with parameterization θ minimizing the negative log-likelihood over the multimodal dialogue corpus MD where MD={X_1,X_2....X_|MD} . However, in our setup the tokens related to scene-description V are ignored during the loss calculation. When n(V) is the number of tokens related to the scene description: L(D) = -∑_t=1^|MD|∑_i=n(V)^n(t)logρ_θ(x^i_t|x^<i_t) During inference, the learnt parameter θ is used to predict a token at a time. Unlike training time where ground-truth tokens are used every time, generated tokens become part of the left-context. For inference, we stick to a simple greedy prediction approach with top-k=1. That is we always generate the token with highest probability as the next token. § EXPERIMENTS In Section <ref> we defined an end-to-end setting for SimpleMTOD. However, some of the benchmark tasks allow more ground-truth information to be utilized during training and inference time. For the MM-Disambiguation task, we consider two setups. In the task-specific scenario, we train the model to predict YES or NO tokens directly from context C_t. In the end-to-end setup, we consider the label to be YES only if the system intent predicted is to Disambiguate. Two similar setups are considered for MM-Coref as well. It should be noted that end-to-end version of SimpleMTOD predicts de-localized tokens with spatial information and we obtain the canonical object id by reversing the de-localization process explained in Section <ref>. If multiple objects were found in the same region with same catalogue-item token, the area of the object bounding box is used as a tie-breaker. In the case of assistant response generation, the benchmark task defined in SIMMC 2.0 allows ground-truth system belief state to be used as an input. Therefore, we evaluate both from action response generation as well as end-to-end setting. §.§ Baselines We consider 2 baselines which were provided as part of the SIMMC2.0 challenege. 
GPT-2: This extends <cit.> to multimodal task-oriented dialogues, encoding objects in a scene using canonical object ids concatenated with the token OBJECT_ID. For the MM-Disambiguation task, a classification head is used, while other tasks are modeled in a generative manner.

Multimodal Transformer Networks (MTN): Adapts <cit.> (only) for the MM-DST and Response Generation sub-tasks [MTN-SIMMC2 implementation <https://github.com/henryhungle/MTN/tree/simmc2>]. In contrast to the auto-regressive modeling of SimpleMTOD, MTN uses an encoder-decoder architecture.

§.§ Training and Evaluation

We follow the experimental setup of the SIMMC 2.0 challenge with the same dataset splits, inference-time limitations, and performance metrics. See Appendix <ref> for details. It should be noted that the test-std split of the SIMMC2.0 dataset is not publicly available and is a held-out set for evaluating submissions to the SIMMC2.0 challenge. Therefore, the final version of our model could only be evaluated on the dev-test split. However, the prior version of the model, SimpleMTOD_Sub, which did not encode region information or scene information, was submitted to the SIMMC2.0 challenge.

§ RESULTS

MM-Disambiguation: As shown in Table <ref> and Column 2 of Table <ref>, SimpleMTOD_Sub achieves accuracy scores of 92.17% and 93.6% on devtest and test-std, respectively, when trained to predict YES/NO tokens. This is a 27% relative improvement over the GPT-2 based baseline with a classification head. Furthermore, we evaluate the model on the MM-Disambiguation task as part of the end-to-end model, based on the system intent predicted by the model. Here, we consider any INFORM:DISAMBIGUATE prediction as a YES. This approach demonstrates a very similar accuracy score of 92.12%. The best performing model on test-std (94.5%; Team-6) ensembles two models trained on RoBERTa and BART [This is based on the description provided at <https://github.com/NLPlab-skku/DSTC10_SIMMC2.0>].

MM-Coref: Table <ref> and the third column of Table <ref> show the MM-Coref Object-F1 scores on devtest and test-std, respectively. SimpleMTOD achieved 68.2% (a 54% relative gain over the baseline) on the test-std dataset and 67.6% (an 84% gain) on the devtest split. While there is no information available on Team-2's leading solution, the BART-based model of Team-4, which is trained end-to-end with task-specific heads, achieves 75.8% on this task.

MM-DST: Despite being a simple language model, both our Intent-F1 (95.8%) and Slot-F1 (87.7%) scores on the test-std split are comparable with those of complex visual-language models. Furthermore, as shown in Table <ref>, there is a significant improvement in the Joint Accuracy scores from 57.3% to 63.1% when positional information is used.

Response Generation: A prior version of the model, SimpleMTOD_Sub, achieves a state-of-the-art BLEU score of 0.327 on the test-std split of the SIMMC2.0 dataset. This is in comparison with models which rely on sophisticated feature extraction processes. In our view, the simplified representation of visual information preserves and complements the generative capabilities of pre-trained models. Furthermore, as shown in Table <ref>, SimpleMTOD achieves a BLEU score of 0.49 on devtest when the ground-truth actions are used. The end-to-end version of SimpleMTOD also achieves a BLEU score of 0.45. It should be noted that this is an improvement over the SimpleMTOD_Sub model score of 0.43. This indicates the importance of associating region-related information.
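To illustrate the region-augmented de-localized tokens whose contribution this comparison suggests, a possible construction of the scene description is sketched below. The token format follows the example quoted earlier (e.g. INV_278@TOP:LEFT); the bounding-box convention and the names of the intermediate regions (MIDDLE, CENTER) are assumptions, since the text only fixes the 3x3 grid and the centre-point rule.

```python
def region_label(bbox, scene_w, scene_h):
    """Map an object's bounding box to one of the 3x3 scene regions.

    `bbox` is assumed to be (x, y, w, h) in pixels; the region is chosen by
    the location of the box centre, as described in the text.
    """
    cx, cy = bbox[0] + bbox[2] / 2.0, bbox[1] + bbox[3] / 2.0
    col = ["LEFT", "CENTER", "RIGHT"][min(int(3 * cx / scene_w), 2)]
    row = ["TOP", "MIDDLE", "BOTTOM"][min(int(3 * cy / scene_h), 2)]
    return f"{row}:{col}"

def scene_description(objects, scene_w, scene_h):
    """Concatenate de-localized object tokens into a scene description.

    `objects` is assumed to be a list of (catalogue_token, bbox) pairs taken
    from the scene JSON and the catalogue (meta-data) file.
    """
    return " ".join(f"{tok}@{region_label(bbox, scene_w, scene_h)}"
                    for tok, bbox in objects)
```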
§ DISCUSSION In order to understand the behaviour of SimpleMToD, we use gradient-based salience <cit.> provided with the Ecco framework <cit.>. Using Ecco, we inspect salience scores for all the tokens in the left side of the token of interest. In the heat-maps presented in this section, darker colors mean a higher salience score. It should also be noted that the model assigns high salience scores on separator tokens (such as <USB>, [ , ] ) that define the structure of the generation. While proper attention to the structure is of paramount importance, our discussion focuses on salience scores assigned to the rest of the tokens, which represent the semantics of the multimodal conversations. Effect of De-localization and Scene Descriptions: The introduction of de-localized tokens significantly improves the Object-F1 of MM-coref and joint accuracy of MM-DST. Accordingly, we first analyse the behaviour of the model when predicting co-references. Figures <ref> and <ref> show example utterances with and without scene descriptions respectively. In the case where scene description is not provided, the model puts a high salience on tokens `yellow' and `shirt', and predicts the token INV_146 which represents a yellow color shirt as shown in Table <ref>. (It should be noted that none of the metadata shown in the diagram are provided to the model explicitly and the model figures this out from globally consistent use of tokens). However, in this case, a particular catalogue item INV_146 is not present in the scene. When we observe the confidence values of the prediction from the last layer (shown in Table <ref>), it can be seen that the model is not quite certain about the prediction with 13.75 for INV_146 and 13.04 for INV_247, both of which represent yellow shirts. This is to indicate that even though the model has learnt to associate object attributes necessary for co-reference resolution, it lacks information to be certain about the prediction. To this end, we provide the model with a scene description as described in <ref>. When the scene descriptions are provided, SimpleMTOD correctly predicts the token INV_247 with 92.63% confidence and high salience score over the same token from the scene description, as well as tokens `shirt' and `yellow'. Additionally from Figure <ref> it can be noted that INV_199 also shows a high salience score. From the metadata, we can see it is a pink color shirt. However, there is a significant salience score over the token `yellow' that results in generating the correct token INV_247 over INV_199 (which is the second ranked token with only had 7.17 confidence). Extending the analysis, we modified the original utterance to “I need a pink shirt" and generated the next token, and SimpleMToD accordingly predicted the token INV_199 (with high confidence of 99.79%) as observed in Figure <ref>. Effect on Intent prediction: Even though scene descriptions play a key role in overall belief tracking as described earlier, the Intent-F1 score drops from 95.8% to 94.0% when the scene descriptions are encoded. In order to understand the effect, we inspect salience scores when predicting the user intent. It can be observed that when the scene descriptions are omitted, higher salience scores are assigned to the user utterance suggesting more focus on that. However, when the scene information is included, salience scores assigned to the utterance decreased to an extent, resulting in wrong predictions in certain cases. 
This is to indicate that scene descriptions are either redundant or act as a distractor when we consider intent-detection, which explains reduction in score. Furthermore, this behaviour aligns with our intuition that the intent parts of the user utterances are predominantly language-driven. Figure <ref> shows an example where omitting the scene information produces the correct intent of REQUEST:COMPARE, whereas our final version of SimpleMTOD wrongly predicted the intent as ASK:GET § RELATED WORK <cit.> are closely related to our work as they all model task-oriented dialogues in an end-to-end manner with GPT-2-like large-scale transformer-based architectures. However, all those models focus on text-only task-oriented dialogues. The GPT-2 adaptation <cit.>, which is provided as a baseline along with the SIMMC2.0 dataset, is also closely related to our work. However, this baseline represents visual objects by canonical ids and demonstrates subpar results to our model in all four tasks. Generative encoder-decoder models <cit.> are a promising alternative to decoder-only (GPT-2 based) dialogue models that have been extensively investigated in unimodal task-oriented dialogues. The MTN-baseline <cit.>, which we compare to, is based on the encoder-decoder architecture. While being inferior with respect to performance in both the tasks considered, this model involves sophisticated feature extraction process. <cit.> coined the term `de-lexicalization' for abstraction in neural dialogue state tracking tasks. This idea has been extensively used in goal oriented dialogues. Our notion of de-localized object representation is influenced by this work. § CONCLUSION We explore a simple, single generative architecture (SimpleMTOD) for several sub-tasks in multimodal task-oriented dialogues. We build on large-scale auto-regressive transformer-based language modeling, which has been effectively utilized in task-oriented dialogues, and formalize the multimodal task-oriented dialogue as a sequence prediction task. Our model employs a `de-localization' mechanism for visual object representation that ensures the consistency of those tokens throughout the dataset. Furthermore, we encoded spatial information of object instances with a very small number of special (globally consistent) tokens. Despite the simplicity in representing visual information, our model demonstrates comparable or better performance with models that heavily rely on visual feature extraction, on four multimodal sub-tasks in the SIMMC2.0 challenge. § FUTURE DIRECTIONS Most current vision-language research relies on fusing pixel-level vision information with token-level language representations. However, their applicability for dialogues where the language is sophisticated remain sparsely studied. In contrast, we explore a symbolic approach for representing visual information and combining it with auto-regressive language models. While we rely on smaller scale models (with 17 million parameters), our work is readily extendable for large language models (LLMs). Unlike pixel level visual representations, special tokens representing visual information being more similar to the word tokens which the LLMs area trained on, symbolic visual representation would facilitate effective transfer learning. SimpleMTOD represents visual information using carefully designed input tokens. 
Capturing these information through semantic scene-graphs, which would provide richer representation, and fusing them with LLMs would be an interesting future direction of research for multimodal dialogues. Development in knowledge-graph based language grounding would complement this line of work. § ACKNOWLEDGEMENTS This work is partially supported by the European Commission under the Horizon 2020 framework programme for Research and Innovation (H2020-ICT-2019-2, GA no. 871245), SPRING project, https://spring-h2020.eu acl_natbib § SIMMC 2.0 DATASET The SIMMC 2.0 dataset ( released under CC-BY-NC-SA-4.0 licence) [https://github.com/facebookresearch/simmc2 ] consists of three major components: * Dialogue Data: Includes system and user utterance with relevant annotations. Figure <ref> provide first 4 turns of a sample dialogue. * Scene Data: Set of scenes representing environments in which dialogues take place. Figure <ref> provide the scene related to the dialogue segment shown in Figure <ref>. Other than raw-images , an json file associated with each image provides detail of objects, such as bounding boxes and spatial relationships (left of, right of, over, under) among objects. * Meta-data: acts as a catalogue of items related to the dialogue corpus. Scene images are made-up by positioning instances of catalogue items in different configurations. Entries contain both visual and non-visual attributes of each item. Visual attributes of items from the meta-data file are not allowed to be used during inference.Figure <ref> shows a single entry in meta-data file. §.§ Data Statistics § TRAINING AND EVALUATION We conduct our experiments with the SIMMC 2.0 <cit.> dataset. Further, we follow the experimental setup of the SIMMC 2.0 challenge with the same dataset splits, inference time limitations, and performance metrics. Implementation: We conduct our experiments using PyTorch Huggingface’s transformers <cit.>. All SimpleMTOD model variants were initialized with Open AI GPT-2 pretrained weights and exhibits computational speed identical to Open AI GPT-2. We use Adam optimizer <cit.> with default parameter of Huggingface's AdamW implementation (lr=1e-3, eps= 1e-6, weight_decay=0). We use the GPT-2 tokenizer for encoding user and system utterances. However, we noticed that the default tokenizer encoder mechanism chunks special tokens introduced for visual object representation. Therefore, we implemented an encoding mechanism which selectively skips the default byte-pair encoding for object tracking tokens. Evaluation: We use the same evaluation metrics and evaluation scripts provided with the SIMMC2.0 challenge. Table <ref> shows metrics that are used for evaluating each benchmark task. § SALIENCE SCORES For the discussion we use input X gradient (IG) method from <cit.> as suggested in <cit.>. In the IG method of input saliency, attribution values are calculated across the embedding dimensions. With the values from embeddings dimension, the L2 norm is used to obtain a score per each token Then resulting values are normalized by dividing by the sum of the attribution scores for all the tokens in the sequence. Here we provide actual salience scores for heat-maps provided in the discussion in Section: <ref>.
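A compact sketch of this per-token salience computation is given below; the attribution tensor is assumed to have one row per input token and one column per embedding dimension.

```python
import numpy as np

def token_salience(attributions):
    """Collapse input-x-gradient attributions into normalized token scores.

    `attributions` has shape (n_tokens, embed_dim); the L2 norm over the
    embedding dimension gives one score per token, which is then normalized
    by the sum over the sequence, matching the procedure described above.
    """
    per_token = np.linalg.norm(np.asarray(attributions, dtype=float), axis=-1)
    return per_token / per_token.sum()
```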
http://arxiv.org/abs/2307.03969v2
20230708125936
Impact of noise on inverse design: The case of NMR spectra matching
[ "Dominik Lemm", "Guido Falk von Rudorff", "O. Anatole von Lilienfeld" ]
physics.chem-ph
[ "physics.chem-ph" ]
University of Vienna, Faculty of Physics, Kolingasse 14-16, AT-1090 Vienna, Austria University of Vienna, Vienna Doctoral School in Physics, Boltzmanngasse 5, AT-1090 Vienna, Austria University Kassel, Department of Chemistry, Heinrich-Plett-Str.40, 34132 Kassel, Germany [email protected] Departments of Chemistry, Materials Science and Engineering, and Physics, University of Toronto, St. George Campus, Toronto, ON, Canada Vector Institute for Artificial Intelligence, Toronto, ON, M5S 1M1, Canada Machine Learning Group, Technische Universität Berlin and Institute for the Foundations of Learning and Data, 10587 Berlin, Germany Despite its fundamental importance and widespread use for assessing reaction success in organic chemistry, deducing chemical structures from nuclear magnetic resonance (NMR) measurements has remained largely manual and time consuming. To keep up with the accelerated pace of automated synthesis in self driving laboratory settings, robust computational algorithms are needed to rapidly perform structure elucidations. We analyse the effectiveness of solving the NMR spectra matching task encountered in this inverse structure elucidation problem by systematically constraining the chemical search space, and correspondingly reducing the ambiguity of the matching task. Numerical evidence collected for the twenty most common stoichiometries in the QM9-NMR data base indicate systematic trends of more permissible machine learning prediction errors in constrained search spaces. Results suggest that compounds with multiple heteroatoms are harder to characterize than others. Extending QM9 by ∼10 times more constitutional isomers with 3D structures generated by Surge, ETKDG and CREST, we used ML models of chemical shifts trained on the QM9-NMR data to test the spectra matching algorithms. Combining both and shifts in the matching process suggests twice as permissible machine learning prediction errors than for matching based on shifts alone. Performance curves demonstrate that reducing ambiguity and search space can decrease machine learning training data needs by orders of magnitude. Impact of noise on inverse design: The case of NMR spectra matching O. Anatole von Lilienfeld August 12, 2023 =================================================================== § INTRODUCTION Current development times of novel molecular materials can span several decades from discovery to commercialization. In order for humanity to react to global challenges, the digitization<cit.> of molecular and materials discovery aims to accelerate the process to a few years. Long experiment times severely limit the coverage of the vastness of chemical space, making the development of self driving laboratories for autonomous robotics experimentation crucial for high throughput synthesis of novel compounds (Fig.<ref> a))<cit.>. To keep the pace of automated synthesis, fast and reliable characterization of reaction products through spectroscopic methods is required, an often manual, time intense and possibly error prone task. One of the most common methods to elucidate the structure of reaction products are nuclear magnetic resonance (NMR) experiments.<cit.> Through relaxation of nuclear spins after alignment in a magnetic field, an NMR spectrum, characteristic of local atomic environments of a compound, i.e. functional groups, can be recorded. In particular, and NMR experiments are routinely used by experimental chemists to identify the chemical structure or relevant groups just from the spectrum. 
For larger compounds, however, the inverse problem of mapping spectrum to structure becomes increasingly difficult, ultimately requiring NMR of additional nuclei, stronger magnets, or more advanced two-dimensional NMR experiments<cit.>. Computer-assisted structure elucidation algorithms aim to iteratively automatize the structure identification process<cit.>. Current workflows include repeated predictions of chemical shifts for candidate structure inputs through empirical or ab initio methods<cit.>. Albeit accurate even in condensed phase through use of plane-waves <cit.> or QM/MM setup <cit.>, the cost of density functional theory (DFT) calculations severely limits the number of candidate structures that can be tested, leaving the identification of unknown reaction products out of reach for all but the smallest search spaces. Data driven machine learning models leveraging experimental or theoretical NMR databases<cit.> provide orders of magnitude of speedup over ab initio calculations, reaching 1-2 ppm mean-absolute-error (MAE) w.r.t. experiment or theory, respectively<cit.>. However, while the stoichiometry of the reaction product is usually known, e.g. through prior mass spectrometry experiments, the number of possible constitutional isomers exhibits NP hard scaling in number of atoms, quickly spanning millions of valid molecular graphs already for molecules of modest size (Fig.<ref> b)). As such, the inverse problem of inferring the molecular structure from an NMR spectrum still poses a major challenge even for rapid solvers. Recent machine learning approaches tackle the inverse problem using a combination of graph generation and subsequent chemical shift predictions for candidate ranking<cit.>. First explored by Jonas<cit.>, a Top-1 ranking with 57% reconstruction success-rate was achieved using deep imitation learning to predict bonds of molecular graphs. Sridharan et al.<cit.> used online Monte Carlo tree search to build molecular graphs resulting in a similar Top-1 ranking of 57.2%. Huang et al.<cit.> relied on substructure predictions from which complete graphs can be constructed, reaching 67.4% Top-1 accuracy by ranking substructure profiles instead of shifts. A commonality between all algorithms is the subsequent ranking of candidates using spectra matching or other heuristics. Consequently, even though the correct query compound could be detected early, similar candidates might be ranked higher, making the ranking process as critical as the candidate search itself. In this work, we analyse the effectiveness of the NMR spectra matching task encountered in the inverse structure elucidation problem. As stagnating improvements<cit.> in chemical shift predictions due to limited public NMR data aggravate candidate rankings, results suggest that both the prediction error of machine learning models and the number of possible candidates are crucial factors for elucidation success. By systematically controlling the size of chemical search space and accuracy of chemical shifts, we find that higher error levels become permissible in constrained search spaces. Moreover, results indicate that increasing the uniqueness through including both and shifts in the matching process, rather than relying on a single type of shift, significantly reduces ambiguity and enhances error tolerance. 
To evaluate the spectra matching task throughout chemical compound space, we systematically control the accuracy of 1D and chemical shifts of the 20 most common stoichiometries in QM9-NMR<cit.> by applying distinct levels of Gaussian white noise. Note that while we focus on DFT based 1D NMR in this work, future studies could include experimental data and 2D NMR information. Comparisons amongst stoichiometries suggest that chemical spaces with increasing amounts of heteroatoms and number of constitutional isomers are harder to characterize than others. To test the spectra matching method on a large search space, we extended QM9-NMR to 56k C_7O_2H_10 constitutional isomers. Controlling the chemical shift accuracy through machine learning models trained at increasing training set sizes, performance curves again indicate a trade-off between search space and accuracy. Hence, as less accurate shift predictions become useful, results show that machine learning training data needs can be reduced by multiple orders of magnitude. § THEORY & METHODS §.§ NMR Spectra Matching Consider a query or spectrum with a set of N possible candidate constitutional isomer spectra. We chose the squared euclidean distance as a metric to rank candidate spectra against the query spectrum (see SI Fig.3 for comparison against other metrics): d(δ_q, δ_i) = ∑_j=1^n (δ_q,j - δ_i,j)^2, with δ being a sorted spectrum of n chemical shifts (or ), q being the query, i being the i-th of N candidates, and j being the j-th chemical shift in a spectrum, respectively. To use both and shifts simultaneously for spectra matching, a total distance can be calculated as follows: d_combined = d(δ^13C_q, δ^13C_i) + γ· d(δ^1H_q, δ^1H_i), with γ=64 being a scaling factor determined via cross-validation (see SI Fig.1) to ensure similar weighting. Final rankings are obtained by sorting all candidates by distance. The Top-1 accuracy is calculated as the proportion of queries correctly ranked as the closest spectrum, respectively. §.§ Elucidation performance curves To analyse the spectra matching elucidation accuracy, we systematically control the number of possible candidates N and the accuracy of chemical shifts, respectively. For each constitutional isomer set, we choose 10% as queries and 90% as search pool, respectively. Next, we randomly sample N spectra from the search pool, including the query spectrum. Each sample size is drawn ten times and the Top-1 accuracy averaged across all runs. To control the accuracy of chemical shifts, we apply Gaussian white noise (up to 1 or 10 σ for and , respectively) or use the machine learning error as a function of training set size (c.f. SI Fig.5 for learning curves). For each N and chemical shift accuracy, results are presented as elucidation performance curves (c.f. Fig.<ref> a-b)), showing the elucidation success as a function of chemical shift accuracy in terms of mean absolute error (MAE). §.§ Chemical Shift Prediction We relied on kernel ridge regression (KRR) for machine learning and chemical shifts as presented in Ref.<cit.>. We use a Laplacian kernel and the local atomic Faber-Christensen-Huang-Lilienfeld (FCHL19<cit.>) representation with a radial cutoff<cit.> of 4 . The kernel width and regularization coefficient have been determined through 10-fold cross-validation on a subset of 10'000 chemical shifts of the training set. 
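Before turning to the data, the ranking induced by the two distance measures defined above can be sketched as follows. The sorting inside the helper is a convenience (the text assumes the shift lists are already sorted), and the candidate container is hypothetical.

```python
import numpy as np

def spectrum_distance(query, candidate):
    """Squared Euclidean distance between two sorted shift lists.

    Both lists must have the same length, which holds for constitutional
    isomers of a fixed stoichiometry.
    """
    q = np.sort(np.asarray(query, dtype=float))
    c = np.sort(np.asarray(candidate, dtype=float))
    return float(np.sum((q - c) ** 2))

def combined_distance(query_c13, cand_c13, query_h1, cand_h1, gamma=64.0):
    """Combined carbon + proton distance with the cross-validated scaling gamma."""
    return (spectrum_distance(query_c13, cand_c13)
            + gamma * spectrum_distance(query_h1, cand_h1))

def rank_candidates(query_c13, query_h1, candidates):
    """Rank candidates given as (id, carbon shifts, proton shifts) tuples."""
    scored = [(combined_distance(query_c13, c13, query_h1, h1), cid)
              for cid, c13, h1 in candidates]
    scored.sort(key=lambda pair: pair[0])
    return [cid for _, cid in scored]
```

The Top-1 accuracy reported below is then simply the fraction of queries whose true structure appears first in this ranking.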
§.§ Data The QM9-NMR<cit.> dataset was used in this work, containing 130'831 small molecules up to nine heavy atoms (CONF) with chemical shieldings at the mPW1PW91/6-311+G(2d,p)-level of theory. We used the 20 most common stoichiometries (Fig.<ref> b)), having a minimum of 1.7k constitutional isomers available in the dataset. To extend the QM9-NMR C_7O_2H_10 constitutional isomers space, we generated 54'641 SMILES using Surge<cit.>. 3D structures have been generated using ETKDG<cit.> and CREST<cit.> using GFN2-xTB/GFN-FF. Adding the structures to QM9, a total pool size of 56.95k C_7O_2H_10 isomers was obtained. For the training of chemical shift machine learning models, we selected C_8OH_12, C_8OH_10, C_8OH_14, C_7O_2H_8 and C_7O_2H_12 constitutional isomers, yielding a total of 143k and 214k training points, respectively. § RESULTS & DISCUSSION §.§ Spectra matching accuracy with synthetic noise To analyse the influence of noise and number of candidates on the elucidation success, we applied Gaussian noise to and shifts of C_7O_2H_10, C_5N_3OH_7 and C_8OH_14 constitutional isomers, respectively. Fig.<ref> a-b) depicts a sigmoidal shaped trend of Top-1 elucidation accuracies at increasing candidate pool sizes N_QM9 as a function of mean absolute error (MAE). Note that increasing the maximum candidate pool size leads to an offset of the trend towards less permissible errors. A possible explanation is the correlation of the density of chemical space with increasing numbers of candidate spectra N<cit.>. As shift predictions need to become more accurate, limiting N through prior knowledge of the chemical space could be beneficial. Similar findings have been reported by Sridharan et al.<cit.>, noting that brute force enumerations of chemical space lead to worse rankings than constrained graph generation. Note that while the trends in and elucidation are similar, less error is permissible when using shifts. To further reduce the ambiguity, we include both and shifts into the matching problem as per Eq.<ref>. Results suggest 50% and ∼150% more permissible and errors when both spectra are considered in the matching process (Fig.<ref> c)). Similar to how chemists solve the elucidation problem, the inclusion of more distinct properties increases the uniqueness and can improve the elucidation success. §.§ Extrapolating the search space Due to the limited amount of constitutional isomers in databases compared to the number of possible graphs faced during inverse design (Fig.<ref> b)), assessing the chemical shift accuracy for successful elucidation is severely limited. As such, we extrapolate elucidation performance curves to obtain estimates about chemical shift accuracies in candidate pool sizes larger than QM9. We fit each elucidation performance curve (Fig.<ref> a-b)), respectively, using a smoothly broken power law function: f(x) = (1+ (x/x_b)^d)^α with x_b controlling the upper bend and offset, d changing the curvature and α changing the tilt of the function (see SI Fig.2), respectively. The parameters of Eq.<ref> as a function of N can again be fitted using a power law function (see SI Fig.2) and extrapolated to the total number of graphs N_Surge, respectively. Results of the extrapolation (Fig.<ref> a-b) dashed) indicate significant differences in elucidation efficiency among stoichiometries. For instance, C_8OH_14 queries are potentially easier to elucidate than C_5N_3OH_7 structures. Possible reasons are the limited number of C_8OH_14 graphs compared to millions of C_5N_3OH_7 isomers. 
Moreover, the number of heteroatoms of the C_5N_3OH_7 stoichiometry might hamper the characterization when only relying on or , respectively. Hence, to solve the inverse structure elucidation problem using experimental data of compounds larger than QM9, reducing ambiguities through including both and shifts as well as to reduce the candidate space is critical for elucidation success. §.§ Trends in chemical space To analyse the elucidation efficiency throughout chemical space, we applied the Gaussian noise and extrapolation procedure to the 20 most common stoichiometries in QM9 (Fig.<ref> b)). Fig.<ref> a) shows the MAE required for 95% elucidation success as a function of N_Surge. Results suggest that less error is permissible for stoichiometries with large N_Surge and fewer carbon atoms. As such, using only shifts might not be sufficient to fully characterize the compound. Again, similar to how chemists use multiple NMR spectra to deduct chemical structures, additional information such as shifts are beneficial to extend the information content. In Fig. <ref> b), the error permissiveness of spectra matching using only (see SI Fig.4 for ) versus combining both and is being compared, revealing a linear trend between both. Note that the C_7NOH_7 stoichiometry shows the smallest benefit from adding additional information. Interestingly, a hierarchy for C_7NOH_X stoichiometries of different degrees of unsaturation is visible, indicating an inverse correlation between number of hydrogens and MAE (Fig. <ref> b) green). Similar hierarchies are also observed for other stoichiometries such as C_7O_2H_X and C_8OH_X (Fig. <ref> b) blue and orange). On average, the combination of and for spectra matching increases the error permissiveness of and by 85% and 261% (see SI Fig.4), respectively. §.§ Comparison to machine learned shift predictions To test the elucidation performance using machine learning predictions, we trained and KRR models at increasing training set sizes (see SI Fig.5 for learning curves) and predicted chemical shifts of 56k C_7O_2H_10 constitutional isomers. Results again show similar trends as observed with Gaussian noise (Fig.<ref> a-b)), however, indicate more permissive accuracy thresholds. For instance, KRR predictions at 2 ppm MAE can identify 64% of queries rather than only 17% suggested by the Gaussian noise experiment. The difference could be explained due the systematic, non uniform nature of the QM9<cit.> chemical space, influencing the shape and extrapolation of elucidation performance curves in Fig.<ref>. Moreover, Gaussian noise is applied to all shifts at random compared to possibly more systematic machine learning predictions. Note that the trade-off between error and N is consistent and that the exact parameters will depend on the machine learning model and the finite sampling of constitutional isomer space. To model possible experimental noise on query spectra, we apply Gaussian noise to query spectra and evaluate the elucidation performance of the best performing machine learning model (see insets in Fig.<ref> a-b)). Results indicate a halving of elucidation accuracy when the query spectrum contains up to 2 ppm MAE_Q in and 0.15 ppm MAE in error, respectively. Thus, in the presence of experimental measurement noise even higher prediction accuracies might be necessary. Combining both and spectra for matching improves the elucidation performance up to 90% (Fig.<ref> e)). 
Again, the combination of spectra for elucidation highlights the effectiveness of reducing the ambiguity of the matching problem by including additional properties. Investigating potential strategies to reduce the constitutional isomer search space, we constrained N based on functional groups (see SI Table 1). Randomly selecting functional groups present in each query, N can be reduced by 50% and 62% on average (see Fig.<ref> d) inset for distributions), respectively. Results in Fig.<ref> c-d) indicate an increase of the elucidation accuracy by 5% in and up to 10% for , respectively, in agreement with the elucidation performance in Fig.<ref> a-b). Note that the knowledge of two functional groups only led to marginal improvements. However, fragmentation could be more beneficial for larger compounds than present in QM9<cit.>, as reported by Yao et al.<cit.>. Using both and shifts on the reduced search space only lead to marginal improvements of 0.5% over the results of the full search space. §.§ Balancing search space and accuracy We use performance curves to analyse the relationship between the elucidation performance of C_7O_2H_10 queries, machine learning prediction errors and candidate pool sizes N. The systematic decay of performance curves (Fig.<ref> red and blue) again demonstrates that constraining N with prior knowledge allows for less accurate shift predictions to be applicable. Extrapolating the performance curves indicates a machine learning MAE of 0.93 ppm to correctly rank 90% of queries out of 56k possible candidates (Fig.<ref> red), 0.02 ppm lower than suggested by Gaussian noise. To reach an MAE of 0.93 ppm, four million training instances are required (Fig.<ref> orange). Using both and shifts requires two orders of magnitude less training data (Fig.<ref> blue). As such, facing expensive experimental measurements and ab initio calculations, more effective inverse structure elucidation could be achieved by balancing machine learning data needs through reduced search spaces and incorporation of additional properties. § CONCLUSION We have presented an analysis of the effectiveness of the NMR spectra matching task encountered in the inverse structure elucidation problem. By systematically controlling the predictive accuracy of and chemical shifts, we found consistent trends throughout chemical compound space, suggesting that higher errors become permissible as the number of possible candidates decreases. Note that while we relied on 1D ab initio NMR data, similar analysis could be performed using 1D or 2D experimental spectra. Applications to the most common constitutional isomers in QM9 highlight that chemical spaces with many heteroatoms are harder to characterize when only relying on a single type of chemical shift. Using both and chemical shifts increases the error permissiveness by 85% and 261% on average, respectively. Machine learning predictions for 56k C_7O_2H_10 compounds showed that using both or shifts increased elucidation success to 90% compared to only 64% and 36% when used alone, respectively. The usefulness of the analysis is expressed via performance curves, showing that training demands can be reduced by orders of magnitude compared to relying on specific shifts alone. We believe that as the accuracy of machine learning models to distinguish spectra is limited, constrained search spaces or inclusion of more distinct properties are necessary to improve candidate rankings. 
Rather than solely relying on more accurate models, future approaches could include explicit knowledge of chemical reactions, functional groups or data from mass spectrometry, infrared- or Raman spectroscopy<cit.>, respectively. Finally, explicitly accounting for atomic similarities and chemical shift uncertainties via the DP5 probability might further increase the confidence in structure assignments<cit.>. § ACKNOWLEDGEMENT O.A.v.L. has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 772834). O.A.v.L. has received support as the Ed Clark Chair of Advanced Materials and as a Canada CIFAR AI Chair. Icons in Fig.<ref> from DBCLS, Openclipart and Simon Dürr from bioicons.com under CC-BY 4.0 and CC0, respectively. § DATA & CODE AVAILABILITY The QM9-NMR dataset is openly available at <https://moldis.tifrh.res.in/data/QM9NMR>. The code and additional data used in this study is available at <https://doi.org/10.5281/zenodo.8126380>. § CONFLICT OF INTEREST The authors have no conflict of interest. § REFERENCES ieeetr
http://arxiv.org/abs/2307.04004v1
20230708161850
MAP-NBV: Multi-agent Prediction-guided Next-Best-View Planning for Active 3D Object Reconstruction
[ "Harnaik Dhami", "Vishnu D. Sharma", "Pratap Tokekar" ]
cs.RO
[ "cs.RO", "cs.MA" ]
MAP-NBV: Multi-agent Prediction-guided Next-Best-View Planning for Active 3D Object Reconstruction Harnaik Dhami* Vishnu D. Sharma* Pratap Tokekar *Equal contribution. Names are listed alphabetically. Authors are with the Department of Computer Science, University of Maryland, U.S.A. .This work is supported by the ONR under grant number N00014-18-1-2829. August 12, 2023 ===================================================================================================================================================================================================================================================================== We propose MAP-NBV, a prediction-guided active algorithm for 3D reconstruction with multi-agent systems. Prediction-based approaches have shown great improvement in active perception tasks by learning the cues about structures in the environment from data. But these methods primarily focus on single-agent systems. We design a next-best-view approach that utilizes geometric measures over the predictions and jointly optimizes the information gain and control effort for efficient collaborative 3D reconstruction of the object. Our method achieves 22.75% improvement over the prediction-based single-agent approach and 15.63% improvement over the non-predictive multi-agent approach. We make our code publicly available through our project website: <http://raaslab.org/projects/MAPNBV/> § INTRODUCTION Visual surveying and inspection with robots have been studied for a long time for a wide range of applications such as inspection of civil infrastructure <cit.> and large vehicles <cit.>, precision agriculture <cit.>, and digital mapping for real estate <cit.>. The utilization of robots in these applications is highly advantageous as they can access hard-to-reach areas with greater ease and safety compared to situations with direct human involvement. Recent work on making robots autonomous for these tasks make their use more appealing. This work focuses on one such long-studied problem of 3D object reconstruction <cit.>, where the objective is to digitally reconstruct the object of interest by combining observations from multiple vantage points. While it could be easier to achieve this in an indoor environment by carefully placing sensors around the object, the same can't be achieved for the outdoors and open areas. For the latter, the sensor(s), must be moved around the object to capture information from different viewpoints. This can be realized with sensors such as cameras and LiDARs mounted on unmanned aerial vehicles (UAVs). A UAV with unlimited power supply capacity could capture infinite observations for an almost perfect reconstruction of the object, but the real-world limitation of battery capacity adds another dimension to the problem: achieving an accurate 3D reconstruction as fast as possible. The trade-off between reconstruction accuracy and task duration in unknown environments is commonly addressed through Next-Best-View (NBV) planning, wherein a robot determines the optimal location for the next observation to maximize information gain. Numerous solutions have been proposed by the research community to tackle this problem, with a majority of them catering to single-agent systems <cit.>. However, deploying a team of robots instead of a single agent can enhance task efficiency multi-fold, while also offering additional benefits such as fault tolerance through redundancy. 
But the direct application of single-agent NBV methods to multi-agent systems does not translate well in terms of performance. This issue stems from the potential overlap between the individual observations. An efficient multi-agent NBV formulation requires coordination among robots to build a joint representation and minimize the overlap. In this work, we extend our previous work on prediction-driven single-agent NBV, Pred-NBV <cit.>, to a team of robots for 3D reconstruction to bring the advantages of the prediction-guided approach to a multi-agent system. We call this multi-agent prediction-based next-best-view method MAP-NBV. Pred-NBV <cit.> uses a 3D point cloud prediction network along with a geometric NBV approach while also considering the control effort required for object reconstruction. An important feature of Pred-NBV is that it doesn't require the partially observed point cloud to be centered at the full object center, an implicit assumption in many 3D reconstruction networks. Naively extending Pred-NBV to a team of robots would result in significant overlap as all the agents would move in the same direction to maximize individual information gain. This is inefficient as it would be more advantageous for the robots to move in different directions. MAP-NBV solves this issue by defining NBV measures over joint observation. We accomplish this by removing duplicate points in observations from multiple robots when calculating the information gain. Along with this, we account for the total control effort in our NBV objective, which results in efficient planning for the whole team. We make the following contributions in this work: * We propose a multi-agent, prediction-based NBV planning approach for active 3D reconstruction of various objects with a novel objective combining visual information gain and control effort. * We modify a single-agent baseline NBV algorithm based on <cit.> that uses frontier-based information gain, and extend its functionality to effectively operate in multi-agent settings. * We show that our method outperforms Pred-NBV <cit.>, a single-agent prediction-based algorithm, by 22.75% and the multi-agent version of a traditional NBV baseline <cit.> by 15.63%. We share the qualitative results and release the project code from our method on our project website[<http://raaslab.org/projects/MAPNBV/>]. § RELATED WORK The use of robots for data acquisition purposes is an extensively studied topic for various domains. Their usage range from infrastructure inspection <cit.> and environment monitoring <cit.> for real-world application to the real-world digitization for research datasets and simulations <cit.>. When the environment is unknown, active methods such as next-best-view (NBV) are used to construct an object model on the fly by capturing additional observations. A majority of the works on NBV planning use information-theoretic measures <cit.> for selection to account for uncertainty in observations <cit.>. The widely used frontier and tree-based exploration approaches also utilize uncertainty about the environment for guiding the robot motion <cit.>. Some works devise geometric methods which make inferences about the exact shape of the object of interest and try to align the observations with the inferred model <cit.>. Prediction-based NBV approaches have emerged as another alternative in recent years, where a neural network takes the robot and/or the environment state as the input and NBV pose or velocity as the output <cit.>. 
A majority of the existing work on NBV is focused on single robot systems. The task performance can be enhanced by adding more robots to the systems, but directly extending single-robot NBV approaches to multi-robot systems may result in sub-optimal performance due to significant overlap in observations. This issue led to the development of exploration algorithms specifically for multi-robot systems <cit.> with information-theoretic measures for determining NBV. Some recent works on multi-robot systems have explored the utilization of predictions for improvement in task efficiency. Almadhoun et al. <cit.> designed a hybrid planner that switches between a classical NBV approach and a learning-based predictor for NBV selection but uses a partial model obtained by robot observations only. Wu et al. <cit.> use a point cloud prediction model for plants to use the predicted point cloud as an oracle leading to better results than the traditional approaches. This method uses entropy-based information gain measures for NBV and is designed for plant phenotyping with robotic arms. These methods do not consider the control effort required which is important for UAVs with energy constraints when deployed for observing large objects such as airplanes and ships. Also, these works employ information theoretic NBV approaches. We aim to explore a prediction-based approach for geometric NBV selection. In this work, we extend Pred-NBV <cit.> which also uses point cloud prediction and build a multi-robot NBV planner. The prediction on the point cloud makes the pipeline modular and interpretable and can be improved by improving individual modules. We select NBV based on information gain, as well as control effort, making our approach more grounded in real-world limitations. § PROBLEM FORMULATION We are given a team of n robots, each equipped with a 3D sensor. The team flies around a closed object of volume 𝒱∈ℝ^3 and observes the point on its surface 𝒮⊂𝒱. The surface points s_i observed by the robot r_j from the view-point ϕ_k ∈Φ are represented as a voxel-filtered point cloud and the relationship between them is defined as s_i = f(r_j, ϕ_k). The robot r_j follows a trajectory ξ_r_j, consisting of multiple viewpoints, and keeps track of the points observed so far. The distance traveled by a robot between two poses ϕ_i and ϕ_j is represented by d(ϕ_i, ϕ_j). The point cloud observed by the team of robots is the union of the surface points observed by the individual robots over their respective trajectories, i.e., s_ξ = ⋃_i=1^n ⋃_ϕ∈ξ_r_i f(r_i, ϕ) and ξ represents the set of trajectories for each robot, i.e., ξ = {ξ_r_1, ξ_r_2,..., ξ_r_n}. The objective is to find a set of feasible trajectories ξ^* = {ξ_r_1^*, ξ_r_2^*, ..., ξ_r_n^*}, such that the team observes the whole voxel-filtered surface, while also minimizing the total distance traveled by the robots on their respective trajectories. ξ^* = _ξ∑_i=1^n ∑_j=1^| ξ_r_j| - 1 d(ϕ_j, ϕ_j+1) such that  ⋃_i=1^n ⋃_ϕ∈ξ_r_i f(r_i, ϕ) = 𝒮 Given a finite set of trajectories, if 𝒮, the object model is known, the optimal set of trajectories can be found with an exhaustive search. As the object model is not known apriori in an unknown environment, the optimal solution can not be found beforehand. Thus, each robot needs to move based on the partial observations of the team to determine the NBV to reconstruct the object's surface. 
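As a concrete, hedged illustration of this objective (not the authors' implementation), the Python sketch below scores a candidate set of team trajectories by its total control effort and by the fraction of the voxel-filtered surface it covers. The sensor model observe(robot, pose) standing in for f(r_j, ϕ_k), the pose metric d, and the voxel resolution are assumptions introduced only for the example.

```python
# Hypothetical sketch of the joint objective: total control effort plus the
# coverage constraint, expressed as a fraction of surface voxels observed.
import numpy as np

VOXEL = 0.1  # assumed voxel-filter resolution (metres)

def voxel_keys(points, voxel=VOXEL):
    """Quantize an (N, 3) point cloud into voxel indices; duplicates collapse."""
    return set(map(tuple, np.floor(np.asarray(points) / voxel).astype(int)))

def control_effort(trajectory, d):
    """Sum of d(phi_j, phi_{j+1}) along one robot's trajectory."""
    return sum(d(a, b) for a, b in zip(trajectory[:-1], trajectory[1:]))

def evaluate_team(trajectories, observe, d, surface_keys):
    """trajectories: list (one per robot) of pose sequences.
    observe(robot, pose): stand-in for f(r_j, phi_k), returns an (N, 3) cloud.
    Returns (total control effort, fraction of surface voxels covered)."""
    covered, effort = set(), 0.0
    for robot, traj in enumerate(trajectories):
        effort += control_effort(traj, d)
        for pose in traj:
            covered |= voxel_keys(observe(robot, pose))  # union over robots and poses
    coverage = len(covered & surface_keys) / max(len(surface_keys), 1)
    return effort, coverage

# A feasible solution must reach coverage == 1.0; among feasible sets, the one
# with the smallest total effort approximates xi*.
```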
Here we assume that each robot can observe the object at the start of the mission, which can be accomplished by moving the robots till they see the object. In this work, we define this problem in a centralized form; all the robots share their observations with a central entity that prescribes the NBV for each by solving the aforementioned objective. § PROPOSED APPROACH In this paper, we present Multi-Agent Pred-NBV (MAP-NBV), a model prediction-guided NBV approach for a team of robots. Figure <ref> shows the overview of our process, which consists of two parts: (1)3D Model Prediction, where we combine the observations from all the robots to build a partial model of the object and use PoinTr-C <cit.>, a 3D point cloud completion network, to predict the full shape of the objects, and (2) Multi-Agent NBV Algorithm, which uses the partial model and the predicted model to determine the NBV for the team, while trying to minimize the distance traveled. Our NBV solution performs a greedy selection over the candidate points to generate the trajectory, which also reduces the computation complexity. The following subsections provide further details of our approach. §.§ 3D Model Prediction To start, the target object is segmented out from the rest of the environment in the captured RGB images for each UAV. This allows the algorithm to focus on only the target infrastructure as opposed to also including other obstacles. Then, each of these segmented images is aligned with the captured depth image per UAV to segment the target object out. Point clouds are then generated per each segmented depth image. This gives us a point cloud per each UAV that contains points belonging only to the target object. Assuming a centralized system, each segmented point cloud per UAV is transformed into a central reference frame and concatenated together into a singular point cloud. This point cloud represents the entire multi-agent system's observations of the target object at the current timestamp. The point cloud concatenation can be replaced with a registration algorithm <cit.>, but we use concatenation due to its ease of use. Lastly, this current timestamp's point cloud is then concatenated with previous observations to get an up-to-date observation point cloud. This process is shown in Figure <ref>. In order to get an approximation of the 𝒱̂ of the full model 𝒱, we use PoinTr-C <cit.> a 3D point cloud completion network, developed by fine-tuning PoinTr <cit.> using curriculum learning over ShapeNet dataset <cit.>. Unlike PoinTr and similar point cloud completion networks, PoinTr-C doesn't make implicit assumptions about the knowledge of the center of the full model by fine-tuning over rotationally and translationally perturbed point clouds. Relaxing this assumption makes PoinTr-C more suitable for inputs from an unknown environment than PoinTr. The 3D point cloud of the object obtained as the union of the observed surface points goes as input to PoinTr-C and it predicts the full object point cloud 𝒱̂. PoinTr-C was trained over isolated point clouds and therefore requires object point clouds to be isolated from the scene. This can be realized with the help of distance-based filters and state-of-the-art segmentation networks<cit.> without any fine-tuning. An example of an input point cloud and a predicted point cloud is shown in Figure <ref>. §.§ Next-Best View Planner We use the predicted point cloud as an approximation of the ground truth point cloud for NBV planning. 
For this, we first generate a set of candidate poses around the partially observed object. From these, we select a set of n poses, corresponding to each robot, based on information gain and control effort. The information gain for the set of n viewpoints is defined as the number of new, unique points expected to be observed after the robots move to these viewpoints. The control effort is defined as the total distance covered by the robots in moving to the viewpoints. The number of new points varies in each iteration since the robots observe more of the surface of the object as they move to new locations. While PoinTr-C predicts the point cloud for the whole object, the robots can observe only the surface points. Hence, before counting the number of new points, we apply hidden point removal <cit.> to the predicted point cloud. We represent this relationship between the number of points observed and the trajectories traversed till time t by I({ξ_t), where ξ_t = {ξ_r_1, ξ_r_2, ..., ξ_r_n}_t represents the set of trajectories for all the robots till time t. To balance the information gain and control effort, we use a hyperparameter τ which is kept fixed throughout an episode. The robots select the candidate to pose set which results in at least τ% of the total possible information gain over all candidate poses. Thus, we formulate our multi-agent NBV objective as follows. {ϕ_r_1, ϕ_r_2, ..., ϕ_r_n}_t+1 = _ϕ∈𝒞∑_i=1^n d(ϕ_r_i, ϕ_r_it)  such that  ⋃_i =1^n I(ξ_r_it∪ϕ)/max_ϕ∈𝒞⋃_i =1^n I(ξ_r_it∪ϕ)≥τ In our experiments, we implement the information gain by first isolating the predicted points that can be observed from a given set of viewpoints and then taking a union of such points from each agent to identify the unique points in the joint observation. The number of the points thus obtained is used as the information gain. For finding the control effort, we use RRT-Connect <cit.> to find the path between a robot's current location to each candidate pose. The candidate poses are generated similar to Pred-NBV <cit.>, i.e. on circles at different heights around the center of the predicted object point cloud. One circle is at the same height as the predicted object center with radius 1.5 × d_max, where d_max is the maximum distance of a point from the center of the predicted point cloud. The other two circles are located above and below this circle 0.25 ×z-range away, with a radius of 1.2 × d_max. The viewpoints are located at steps of 30^∘ on each circle. We set τ = 0.95 for all our experiments. § EXPERIMENTS AND EVALUATION In order to gauge our method's effectiveness, we compare it with a non-predictive multi-agent baseline and a prediction-driven NBV approach which was developed for a single agent. While the first highlights the benefits of including predictions in the NBV pipeline, the latter supports the argument for using a team of robots. §.§ Setup We extend the setup in Pred-NBV <cit.> to work in a multi-agent setting. Similarly, we use Robot Operating System (ROS) Melodic and AirSim <cit.> on Ubuntu 18.04 for our simulation experiments. Multiple UAVs are spawned into the AirSim environment. We equipped each of the UAVs with a depth camera and an RGB camera. Each UAV published a segmented image using AirSim's built-in segmentation. We adapted the depth segmentation package from Pred-NBV to work with multiple UAVs. We then converted these segmented depth images into 3D point clouds. 
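To make the planner described in the Next-Best View Planner subsection above concrete, the following sketch mirrors its three ingredients: candidate viewpoints on three circles around the predicted object center, joint information gain counted as the number of unique predicted surface points visible to the team, and selection of the cheapest pose set attaining at least a fraction τ of the best achievable gain. This is only an illustrative sketch, not the released MAP-NBV code: the visibility function (which would wrap hidden point removal), the path-length function (which would wrap RRT-Connect), and the helper names are assumptions.

```python
# Illustrative sketch with assumed helpers; visible(pose) is taken to return a
# set of hashable predicted-point identifiers after hidden point removal.
import itertools
import numpy as np

def candidate_poses(center, d_max, z_range, step_deg=30):
    """Viewpoints on three circles around the predicted object center."""
    rings = [(center[2], 1.5 * d_max),
             (center[2] + 0.25 * z_range, 1.2 * d_max),
             (center[2] - 0.25 * z_range, 1.2 * d_max)]
    poses = []
    for z, r in rings:
        for a in np.deg2rad(np.arange(0, 360, step_deg)):
            poses.append(np.array([center[0] + r * np.cos(a),
                                   center[1] + r * np.sin(a), z]))
    return poses

def joint_gain(pose_set, visible, seen):
    """Unique new predicted points visible from the team's poses."""
    pts = set().union(*(visible(p) for p in pose_set))  # duplicate removal
    return len(pts - seen)

def select_nbv(cands, n_robots, visible, seen, path_len, current, tau=0.95):
    """Greedy NBV: among pose sets reaching tau of the best gain, pick the one
    with the lowest total path length (robot-to-pose pairing taken in order)."""
    combos = list(itertools.combinations(cands, n_robots))
    best_gain = max(joint_gain(c, visible, seen) for c in combos)
    feasible = [c for c in combos
                if joint_gain(c, visible, seen) >= tau * best_gain]
    return min(feasible,
               key=lambda c: sum(path_len(q, p) for q, p in zip(current, c)))
```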
For our collision-free point-to-point planner, we use the MoveIt <cit.> software package implementing the work done by Köse <cit.>. §.§ Qualitative Example We evaluate MAP-NBV on the same 20 objects that were used in Pred-NBV to allow a direct comparison. The 20 objects consist of 5 different ShapeNet classes: airplane, rocket, tower, train, and watercraft. Examples of each class are shown in Figure <ref>. These classes represent diverse shapes and infrastructures that are regularly inspected. Figure <ref> shows the path followed by 2 UAVs as given by MAP-NBV in the C-17 airplane simulation. This environment includes other obstacles that are not of interest but still need to be accounted for in collision-free path planning. MAP-NBV finds a collision-free path for both UAVs while targeting the maximum coverage of the C-17 airplane. §.§ Comparison with Single-agent Baseline We compared the performance of MAP-NBV with a single-agent prediction-based NBV planner called Pred-NBV <cit.>. MAP-NBV is an extension of Pred-NBV designed for multi-agent scenarios. However, in single-agent cases, both algorithms function identically. In MAP-NBV, UAVs are spawned close together, ensuring that the initial environment information is virtually the same as in the single-agent Pred-NBV case. Consequently, the initial points observed and the initial shape completion predictions for both algorithms are highly similar. This means that MAP-NBV and Pred-NBV select their initial NBVs using nearly identical information. To demonstrate the immediate information gain of MAP-NBV over Pred-NBV, we compare the number of points observed after navigating to the first NBVs selected by the algorithms. Our findings, presented in Table <ref>, reveal that, on average, MAP-NBV observes 22.75% more points after the first iteration compared to Pred-NBV in the context of object reconstruction. These results are based on evaluations across 20 objects and 5 object classes. Furthermore, on average, each UAV in MAP-NBV flew a similar distance to the UAV in Pred-NBV. This similarity arises from both algorithms generating candidate viewpoints in the same manner and employing the same point-to-point planner. §.§ Comparison with Multi-agent Baseline We also compared the performance of MAP-NBV with a modified baseline NBV method <cit.> designed for multi-agent use. The baseline method employs frontiers to select the next-best views. Frontiers are points located at the edge of the observed space near unknown areas. We utilized the same modifications described in Pred-NBV <cit.>. Specifically, we used our segmented point cloud to choose frontiers near the target object. To ensure that the UAVs always face the target object, the orientation of all poses selected by the baseline aligns with the center of the observed target object point clouds. We further adapted this baseline method to function in a multi-agent setting. The pose for the first UAV is selected in the exact same manner as in the single-agent baseline. For each subsequent UAV, the remaining best pose is chosen, as long as it does not fall within a certain distance threshold compared to the previously selected poses in the current iteration of the algorithm. Both MAP-NBV and the baseline algorithm employ the same stopping criteria. The algorithm terminates if the total points observed in the previous step exceed 95% of the total points observed in the current step. 
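A minimal sketch of the two ingredients just described is given below: the spacing constraint used when assigning frontier poses to successive UAVs in the multi-agent baseline, and the stopping rule shared by both methods. The upstream pose ranking is assumed to be available, and the numerical separation threshold is a placeholder rather than the value used in the experiments.

```python
# Hypothetical sketch of the multi-agent baseline pose assignment and the
# shared stopping criterion; ranked_poses are numpy arrays sorted best-first.
import numpy as np

def assign_baseline_poses(ranked_poses, n_robots, min_sep=5.0):
    """Each subsequent UAV takes the best remaining pose at least `min_sep`
    (assumed threshold) away from the poses already chosen this iteration."""
    chosen = []
    for pose in ranked_poses:
        if all(np.linalg.norm(pose - c) >= min_sep for c in chosen):
            chosen.append(pose)
        if len(chosen) == n_robots:
            break
    return chosen

def should_stop(points_prev, points_curr, ratio=0.95):
    """Terminate when the previous step already covered >= 95% of the points
    observed after the current step."""
    return points_prev >= ratio * points_curr
```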
Our evaluation, presented in Table <ref>, demonstrates that MAP-NBV observes, on average, 15.63% more points than the multi-agent baseline for object reconstruction across all 20 objects from the 5 different model classes. In our simulations, we utilized 2 UAVs for both algorithms. Furthermore, the MAP-NBV algorithm can be readily extended to accommodate more than just 2 robots. By incorporating additional UAVs, the algorithm can effectively leverage the collaborative efforts of a larger multi-agent system to improve object reconstruction performance and exploration efficiency. However, in our current evaluation, we utilized 2 UAVs for both algorithms due to limited computational resources. The simulations were computationally intensive, and our computer experienced significant slowdowns with just 2 robots in the simulation. Despite this limitation, the promising results obtained with 2 UAVs suggest that scaling up the algorithm to include more robots has the potential to yield even more significant improvements in performance. Additionally, Figure <ref> illustrates that MAP-NBV observes more points per step than the multi-agent baseline while also covering a shorter flight distance. § CONCLUSION We present a multi-agent, prediction-guided NBV planning approach for active 3D reconstruction. This method can be helpful in a variety of applications including civil infrastructure inspection. We show that our method is able to faithfully reconstruct the object point clouds efficiently compared to non-predictive multi-agent methods and single-agent prediction-based methods. Our NBV planning objective considers both information gain and control effort, making it more suitable for real-world deployment given the flight time limit imposed on UAVs by their battery capacity. In this work, we focus solely on geometric measures for information gain. Many existing works on NBV have developed sophisticated information theoretic measures. We will explore combining both types of measures in our future work. Also, we consider all possible viewpoint pairs for finding the NBV for the team, which hinders the scalability of MAP-NBV. We will look into methods to make this process more computationally efficient search over a larger candidate viewpoint set. IEEEtran
http://arxiv.org/abs/2307.09336v1
20230714121309
Correlated Short-Timescale Hard-Soft X-ray Variability of the Blazars Mrk 421 and 1ES 1959+650 using $\textit{AstroSat}$
[ "Susmita Das", "Ritaban Chatterjee" ]
astro-ph.HE
[ "astro-ph.HE" ]
firstpage–lastpage Similarity-based Memory Enhanced Joint Entity and Relation Extraction Witold Kościukiewicz1,20009-0001-0192-8850 Mateusz Wójcik 1,20009-0008-0547-9467 Tomasz Kajdanowicz20000-0002-8417-1012 Adam Gonczarek1 August 12, 2023 =========================================================================================================================================== We study simultaneous soft (0.7-7 keV) and hard (7-20 keV) X-ray light curves at a total of eight epochs during 2016-2019 of two TeV blazars Mrk 421 and 1ES 1959+650 observed by the SXT and LAXPC instruments onboard AstroSat. The light curves are 45-450 ks long and may be sampled with time bins as short as 600-800 sec with high signal to noise ratio. The blazars show a harder when brighter trend at all epochs. Discrete cross-correlation functions indicate that the hard and soft X-ray variability are strongly correlated. The time lag is consistent with zero in some epochs, and indicates hard or soft lag of a few hours in the rest. In the leptonic model of blazar emission, soft lag may be due to slower radiative cooling of lower energy electrons while hard lag may be caused by gradual acceleration of the high energy electrons emitting at the hard X-ray band. Assuming the above scenario and the value of the Doppler factor (δ) to be 10-20, the hard and soft lags may be used to estimate the magnetic field to be ∼ 0.1 Gauss and the acceleration parameter to be ∼ 10^4 in the emission region. Due to the availability of the high time resolution (∼ minutes to hours) light curves from AstroSat, the value of the illusive acceleration parameter could be estimated, which provides a stringent constraint on the theories of particle acceleration in blazar jets. galaxies: active - galaxies: jets - X-rays: galaxies - BL Lacertae objects: individual: Mrk 421 - BL Lacertae objects: individual: 1ES 1959+650 § INTRODUCTION Jets are ubiquitous in the Universe. They are present in active galactic nuclei (AGN), X-ray binaries, supernova remnants and young stellar objects. Blazars are a class of AGN, which contains a bright relativistic jet, the axis of which is aligned to the observer's line of sight within an angle ∼ 10^∘ <cit.>. Due to its orientation, jet emission is relativistically beamed in the observer's frame <cit.> causing the spectral energy distribution (SED) of blazars to be dominated by the jet emission. Jets emit non-thermal radiation over a wide range of the electromagnetic spectrum often from radio to very high energy γ-rays. SED of blazars consist of two broad peaks <cit.>: one at the IR to UV/X-ray wave bands and another at the keV to GeV energy range sometimes extending to TeV. The lower energy peak is due to synchrotron radiation by the relativistic electrons moving in the magnetic field present in the jet <cit.>. According to the “leptonic” model of jet emission, the higher energy peak is due to the up-scattering of the lower energy “seed” photons by the same electrons through the inverse-Compton (IC) process. The source of seed photons may be synchrotron radiation in the jet itself termed synchrotron self-Compton <cit.> or may be external to the jet, i.e., from the dusty torus or the broad line region which is called the external Compton (EC) process <cit.>. 
Alternatively, in the “hadronic scenario,” protons in the jet may be accelerated to ultra-relativistic energies and contribute to the higher energy emission via synchrotron process, proton-initiated cascades and the interaction of secondary particles with photons <cit.>, which are able to give rise to minutes-to-days timescale variability. Hadronic models have been used to satisfactorily model the higher energy (GeV-TeV) part of the SED of HSP blazars, such as Mrk 421 <cit.>. They find that in the hadronic scenario the size of the emission region is small (few gravitational radius of the central black hole), the magnetic field is large (∼50 Gauss) and required proton energy is up to 10^18 eV. We note that hadronic models are relatively less explored due to the above stringent requirements, complexity of the processes involved including several secondary particles, and leptonic models with less stringent constraints being able to satisfactorily fit the observed SEDs of blazars of different classes at different brightness states. However, if detection of neutrinos from the direction of flaring blazars, such as in the case of the blazar TXS 0506+056 <cit.>, becomes more commonplace and significant then the usage of hadronic models to fit blazar SEDs will become much more prominent. According to the position of the synchrotron peak (ν_sync) in the SED, blazars are divided into three classes <cit.>: LSP (ν_sync<10^14 Hz), ISP (10^14<ν_sync<10^15 Hz) and HSP (ν_sync>10^15 Hz), i.e., low, intermediate and high synchrotron peaked, respectively. The flux of blazars varies at minutes to years timescale. The nature of the variability is similar to red noise, i.e., longer timescale variability has larger amplitude <cit.>. The variability at weeks to months timescale is believed to be caused by the crossing of shock waves through the emission region which energizes the quiescent particles <cit.>. However, sometimes there is significant variability (factor of a few to even an order of magnitude) at minutes to days timescale, particularly at the X-ray and γ-ray energies. Sub-day timescale variability of blazars is caused by the acceleration and radiative cooling of the emitting particles <cit.>. While emission variability of blazars at multiple wave bands has often been probed to understand the physics of jets <cit.> the sub-day timescale fluctuations are not commonly explored because light curves with the required time resolution are often not available. That is because either the blazar is not bright enough to obtain high time resolution (∼ minutes-hours) monitoring with sufficient signal to noise ratio or the telescope time commitment for such continuous stare is too high. In this work, we have analyzed ∼ minutes to ∼ days timescale X-ray variability of two blazars, namely, Mrk 421 and 1ES 1959+650, obtained by AstroSat <cit.> to probe the acceleration and cooling timescale of the emitting high energy particles. §.§ Mrk 421 Mrk 421 is one of the closest blazars at redshift z=0.031 <cit.>. It is an HSP blazar with the synchrotron peak of its SED at X-ray frequencies and the high energy peak at ∼ 50 GeV <cit.>. It was the first extragalactic source to be detected at TeV energies <cit.>. Being one of the brightest few blazars in the X-ray wave band, Mrk 421 has been studied by many authors using a number of X-ray telescopes as well as other multi-wavelength observations <cit.>. 
Several authors have found a strong correlation between the hard and soft X-ray variability with time lag consistent with zero <cit.>. While the hard-soft time lag was consistent with zero in the 7-day-long ASCA observation of Mrk 421 in 1998, significant hard/soft lag (∼ hours) was found in shorter-timescale flares within those light curves <cit.>. <cit.> analyzed multiple XMM-Newton light curves from different epochs to find strong correlation with non-zero time lag in some of those cases. Using long-term X-ray observations during 2000-2015 by Chandra X-ray telescope, <cit.> found strong correlation with no significant time lag between the hard and soft X-ray variability. On the other hand, <cit.> found a hard lag (hard X-ray variability lagging those at the soft X-rays) in Mrk 421. Hard lag was also found with BeppoSAX observations in 1998 <cit.> and the hard lag had a decreasing trend with a decrease of the energy difference between the hard-soft energy band pairs used for correlation. §.§ 1ES 1959+650 1ES 1959+650 is a relatively nearby (z=0.048) HSP blazar <cit.>. It was one of the first few blazars to be detected at the TeV energies <cit.>. The most recent TeV observation has been by the MAGIC telescope <cit.>. It was first discovered at the radio frequencies with the Green Bank Telescope <cit.>. Later, it was observed at the optical wavelengths, in which it exhibited large-amplitude and fast variability <cit.>. The first detection at X-ray energies was by the Einstein-IPC <cit.>. Subsequently, it was observed by ROSAT in 1996, by BeppoSAX in 1997, and by Rossi X-Ray Timing Explorer (RXTE) in 2000 <cit.>. More detailed studies of its X-ray and multi-wavelength spectra have been carried out by many authors <cit.>. Recently, the X-ray spectra obtained from AstroSat's SXT and LAXPC instruments during a flaring event in 2017 have been analyzed and modeled in details <cit.>. In 1ES 1959+650, hard and soft X-ray cross-correlation has not been explored too often. <cit.> did not find any significant intra-day variability in two NuSTAR light curves of 1ES 1959+650 having a duration of ∼30 ks during 2014. Using those data, <cit.> found strong correlation between the soft and hard X-rays with time lag consistent with zero. In both these HSP blazars, the synchrotron peak is at the 0.1-10 keV range, which is observed by SXT and LAXPC. Therefore, in this work we study the hard and soft X-ray variability of these two blazars using AstroSat data to explore the fluctuations due to the highest energy electrons. Our goal is to study the cross-correlation precisely using well sampled short-term data. In section 2, we describe the reduction of the AstroSat data and present the hard and soft X-ray light curves of Mrk 421 and 1ES 1959+650. Then we calculate the hardness ratio from the hard and soft X-ray light curves and plot its time evolution on the hardness-intensity plane at different epochs. We carry out discrete cross-correlation of the hard and soft X-ray light curves along with rigorous and precise estimates of the time lag and its uncertainties, and the significance of the cross-correlation in section 3. Finally, in section 4, we discuss the results and their implications regarding the emission mechanism and physical properties of the jet. § ASTROSAT DATA AstroSat, India's first space telescope dedicated to Astronomy, was launched on 2015 September 28 <cit.>. It has five payloads, which are able to observe simultaneously at a range of wave bands from UV to very high energy X-rays (150 keV). 
All AstroSat data are publicly available in its online data repository[<https://astrobrowse.issdc.gov.in/astro_archive/archive/Home.jsp>] after the 1-year proprietary period. §.§ Soft X-Ray Telescope (SXT) We have obtained the soft X-ray (0.7-7 keV) light curves from the Soft X-ray Telescope (SXT), which is an imaging telescope onboard AstroSat <cit.>. The Level-1 data from SXT are stored in FITS format in the AstroSat data archive. We generate Level-2 data from Level-1 by running tool provided by SXT data analysis package (AS1SXTLevel2-1.3). The Level-2 data consist of cleaned event files of each orbit. The South Atlantic Anomaly (SAA) and Earth occultation effects are avoided by using the data from the `good time intervals.' The cleaned event files of individual orbits are then merged together by . We use the tool of HEASoft to extract the image, light curve and spectrum from the merged cleaned event file. We select the source region as a circle of radius 16' centered at the point source. To obtain the background light curve, we use SXT background spectra SkyBkg_comb_EL3p5_Cl_Rd16p0_v01.pha to get constant background counts in the desired energy range. The final light curves are obtained using the tool , which is also used to bin the data. §.§ Large Area X-ray Proportional Counter (LAXPC) We have obtained the hard X-ray (7-20 keV) light curves from the Large Area X-ray Proportional Counter (LAXPC) onboard AstroSat. In LAXPC, three co-aligned proportional counters detect X-ray photons between 3-80  keV with a large effective area of ∼ 6000 cm^2 <cit.>. The three detectors, LAXPC10, LAXPC20 and LAXPC30, have a field of view ∼1^∘× 1^∘. They record the signal in Event analysis (EA) and Broad Band Counting (BB) mode which are stored as Level-1 data. Currently, only LAXPC20 is working nominally <cit.>. Therefore, we only use the data from LAXPC20 and make light curve and spectrum of the source and the background using the LAXPCsoftware: format(A) package available in the AstroSat Science Support Cell (ASSC) website[<http://astrosat-ssc.iucaa.in>/]. To extract the background effects we use the “Faint source background estimation" scheme <cit.> because blazars are considered to be faint source for AstroSat unless they are in a particularly bright state. This scheme is applicable for sources, in which the count rate at 50-80 keV is less than 0.25 counts s^-1. We use the and tools to obtain the final background subtracted light curves with different lengths of time bins. §.§ Light Curves We show the soft (0.7-7 keV from SXT) and hard (7-20 keV from LAXPC) X-ray light curves of the blazars Mrk 421 and 1ES 1959+650 at a total of eight epochs during 2016-2019 obtained by the above process in Figures <ref> and <ref>. Our primary goal is to study the precise cross-correlation of the hard-soft X-ray variability. Therefore, we limit the lower end of the hard X-ray energy range to 7 keV for the LAXPC light curve to avoid any overlap with that of the SXT light curve in order to keep the soft and hard energy bands completely separate. We set the upper limit of the same to 20 keV because the LAXPC background noise starts to dominate above that energy <cit.>. Furthermore, we use a lower limit of 0.7 keV for our SXT light curves to avoid certain instrumental features <cit.> that are present at energies lower than that. The length of the light curves are 45-450 ks with time bins as short as 600 s to 800 s. We use the shortest bins for which the average relative uncertainty is less than 10%. 
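The binning rule quoted above (the shortest bins for which the average relative uncertainty stays below 10%) can be expressed compactly. The sketch below is one hedged way to do so; it is not part of the SXT/LAXPC software, and the light curve is assumed to be available as plain arrays of times, rates, and rate errors, with rebinning done by simple averaging in uniform bins.

```python
# Illustrative bin-width selection; array conventions and the candidate widths
# are assumptions, not the SXT/LAXPC tool output format.
import numpy as np

def rebin(t, rate, err, width):
    """Average rates (and propagate errors) in uniform bins of `width` seconds."""
    edges = np.arange(t.min(), t.max() + width, width)
    idx = np.digitize(t, edges) - 1
    out = []
    for b in range(len(edges) - 1):
        m = idx == b
        if m.sum() == 0:
            continue
        out.append((rate[m].mean(), np.sqrt((err[m] ** 2).sum()) / m.sum()))
    return np.array(out)

def shortest_acceptable_bin(t, rate, err, candidates=(600, 800, 1200, 1800)):
    """Return the shortest candidate bin width whose mean relative error < 10%."""
    for width in sorted(candidates):
        binned = rebin(t, rate, err, width)
        if np.mean(binned[:, 1] / binned[:, 0]) < 0.10:
            return width
    return max(candidates)
```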
§.§ Hardness Intensity Diagrams Hardness ratio is quantified here by the ratio of the count rate at the hard X-ray energy band (7-20 keV) to that at the soft X-ray energy band (0.7-7 keV). Hardness ratio is related to the spectral nature of the source. Figure <ref> shows the hardness-intensity diagram (HID), i.e., hardness ratio versus the observed photon flux at 0.7-7 keV of Mrk 421 and 1ES 1959+650 for two long exposures of each. We rebin the light curves to make any existing pattern in the figure clearly visible, and normalize the hardness and intensity values by their mean to facilitate comparison between epochs. We can see that the hardness ratio increases with increasing flux in all cases, which implies a harder when brighter trend. In some epochs, the evolution in the HID exhibits a loop pattern, e.g., a clockwise and counter clockwise loop structure present in the 2018 epoch of Mrk 421 and 2017 epoch of 1ES 1959+650, respectively while in the HID plot of 1ES 1959+650 in the 2016 epoch no clear pattern is present. § CROSS-CORRELATION ANALYSIS We cross-correlate the hard and soft X-ray light curves of both blazars for each of the epochs using the discrete cross-correlation function <cit.> method. The cross-correlation function between the hard and soft X-ray variability may indicate whether the fluctuations at those two bands are causally related. Absence or presence of non-zero time lag between the variability may provide information about the geometric and physical parameters of the emission region, e.g., magnetic field and energy of the emitting electrons. §.§ Uncertainty of Time Lag A unique aspect of the light curves we are using here is that they provide X-ray variability information at a very high time resolution, e.g., sub-hour in some cases. Therefore, if there is a sub-hr time delay between the hard and soft X-ray variability that may be detected with high significance from the cross-correlation analysis of those light curves. However, the uncertainty of such time delay need to be computed carefully in order to ensure an accurate interpretation of the results. For that purpose, we use the model independent Monte-Carlo method, namely, flux randomization and random subset selection or FR-RSS <cit.> to determine the time delay and its uncertainties. We modulate the flux values by adding noise to each data point drawn from a normal distribution with a mean equal to the mean observed uncertainty. We select a random subset of each of the light curves consisting of ∼ 63.3% of the data points. We make many realizations in both bands using the above flux randomization and random subset selection, and compute the cross-correlation function of each such pair. It generates a distribution of cross correlation results. We find the peak value of the correlation function and the corresponding time lag, termed, “peak lag” from the median of the distribution. Furthermore, we generate a distribution of “centroid lags” by calculating the mean time lag weighted by the corresponding correlation coefficients of the points, which are above 80% of the peak value. We find the centroid lag from the median of that distribution. §.§ Significance of Correlation The cross-correlation results of a pair of red noise light curves may be artificially high or low due to gaps in the data, irregular sampling or just by random chance associated with the stochastic nature of the variability. 
In order to test the significance of the cross-correlation results we simulate 100 each of hard and soft X-ray light curves from their power-spectral density (PSD) using the algorithm by <cit.>. We use the soft and hard X-ray power-spectral shape and slope from <cit.> for both sources. We cross-correlate the light curves simulated in the above method and find the 99 and 95 percentile levels of the distribution of cross-correlation values for each step of the time delays. We determine the significance of the DCCF values we obtain for the observed light curves by its comparison to the 99 and 95 percentile values obtained above. This provides an estimate of the probability that the value of the DCCF we obtain is high by chance and in turn provides a quantitative measure of its significance. §.§ Results The discrete cross-correlation function of each pair of hard and soft X-ray light curves of Mrk 421 and 1ES 1959+650 for all of the epochs are shown in Figure <ref>. The solid and dotted lines denote the 99% and 95% confidence levels calculated in the method discussed in the previous section. We find that the hard and soft X-ray variability are strongly correlated with at least 95% confidence level in all cases. If the cross-correlation function is symmetric in shape around its maximum then the peak and centroid lag values are approximately the same. However, in some cases the CCFs are not symmetric leading to a significant difference between the peak and centroid lag. Therefore, we have reported both in Tables <ref> and <ref>. In a few cases, e.g., in the epochs 2016 November 16 of 1ES 1959+650 and 2018 January 19-20 of Mrk 421, the time lags (centroid) are consistent with zero within their uncertainties. Some other light curves exhibit a significant lag of ∼hr, i.e., variability at the hard band lagging those at the soft X-rays or vice versa, termed as hard lag and soft lag, respectively. In the convention we use, a positive value of the time delay indicates hard lag and a negative time delay denotes a soft lag. For example, 1ES 1959+650 in the 2017 epoch and Mrk 421 in the 2017 January 3-8 epoch exhibit significant hard lag (centroid) while Mrk 421 shows a significant soft lag (centroid) during the 2017 January 24 epoch, beyond the corresponding uncertainty values. § DISCUSSION & CONCLUSION Here we have presented the hard and soft X-ray light curves of two HSP blazars, namely, Mrk 421 and 1ES 1959+650 at different epochs during 2016-2019 observed by the SXT and LAXPC instruments onboard AstroSat. In the hardness-intensity diagrams the sources exhibit a harder when brighter trend. It has been previously reported in several observations of Mrk 421 and 1ES 1959+650 <cit.>. The evolution in the hardness-intensity plane follows a clockwise or anti-clockwise path in some epochs. Such pattern indicates, for example, the higher energy variations propagate to lower energy or vice versa causing soft or hard lag, respectively <cit.>. In the leptonic model of emission from the blazar jets, the X-rays from the HSP blazars are produced by synchrotron radiation from the relativistic electrons in the jet <cit.>. Here we use the SXT and LAXPC energy ranges, i.e., 0.7-7 keV and 7-20 keV, respectively which include the lower energy peak of the SED as well as its decaying part beyond the peak energy. We have shown the spectra of both blazars at all the epochs discussed here along with the best-fit broken power-law model and spectral parameters in Appendix A. 
It shows that the 0.7-20 keV spectra at all the epochs are consistent with a single component supposedly due to synchrotron radiation and no significant additional component is present in any of the epochs. A more detailed analyses of the multi-epoch spectra will be carried out in a future paper. Several other authors, using various samples and observations, have concluded that in the HSP blazars, including the two discussed here, the decaying part of the synchrotron peak is located at the hard X-ray band (e.g., ∼10-20 keV) and the IC component does not contribute below 50 keV or even higher <cit.>. Therefore, the X-ray emission analyzed here is considered to be due to synchrotron emission only, and is, in fact, located at or near the lower energy peak of the SED. Hence, the electrons responsible for generating the X-rays are at the peak of their energy distribution. The cooling timescales of such high energy electrons are short (∼hr). Therefore, it is assumed that they are physically located in a compact region and undergo acceleration and cooling at approximately the same time. That is consistent with the strong correlation with zero or very short time delay between the hard and soft X-ray variability as we have found here. There may be short but non-zero time delay caused by cooling and acceleration processes <cit.>. For example, higher energy electrons cool faster in the synchrotron process because the synchrotron cooling timescale is inversely proportional to the energy of the emitting particles. Therefore, electrons producing hard X-rays will cool faster than those producing the soft X-rays causing the latter lagging the former, namely, a soft lag. On the other hand, if the acceleration timescale is comparable or longer than that for cooling it may take a significant amount of time to energize the electrons enough so that they can emit synchrotron radiation at the hard X-ray energies, which will cause the hard X-ray variability to lag those at the soft X-rays, namely, a hard lag. It is often assumed that in blazar jets the particles are accelerated by moving shocks through, e.g., a first order Fermi acceleration mechanism <cit.>. The acceleration timescale (t_acc) in that case is proportional to the final energy achieved by the particles while synchrotron cooling timescale (t_cool) is inversely proportional to the energy of the particle. The particles will reach the highest energy when t_acc≃ t_cool, which indicates that below the highest energy, t_acc << t_cool <cit.>. For HSP blazars, the electrons that emit X-rays are at the peak of their energy distribution and we may assume t_acc≃ t_cool. That is consistent with the observed result that we see zero time lag in some cases between the hard and soft X-ray variability while in some cases we do find a small amount of soft or hard lag. In particular, hard lag indicates the acceleration timescale is longer than our time resolution and we are able to probe it through our light curves and inter-band time delays. The soft/hard lag may be expressed as below: τ_soft= t_cool(E_soft) - t_cool(E_hard) τ_hard= t_acc(E_hard) - t_acc(E_soft) The energy dependent acceleration and cooling timescales may be given as the following <cit.>: t_acc (E) = 9.65 × 10^-2 (1+z)^3/2ξ B^-3/2δ^-3/2 E^1/2  s t_cool (E) = 3.04 × 10^3 (1+z)^1/2 B^-3/2δ^-1/2 E^-1/2  s Here, E, z, B and δ are the energy of the emitted photon, redshift of the blazar, magnetic field at the emission region and the Doppler factor, respectively. 
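As a hedged numerical illustration of the two timescales above, the sketch below evaluates t_acc and t_cool and inverts the soft-lag relation τ_soft = t_cool(E_s) - t_cool(E_h) for B δ^{1/3} (the closed form is quoted in the next paragraph); ξ, defined just below as the acceleration parameter, enters only through t_acc. The representative numbers (logarithmic mean band energies of roughly 2.2 and 11.8 keV and a lag of about two hours) are assumptions for illustration, not the measured values.

```python
# Hedged numerical example of the t_acc, t_cool and soft-lag relations.
import numpy as np

def t_acc(E_keV, z, B, delta, xi):
    """Acceleration timescale in seconds (E in keV)."""
    return 9.65e-2 * (1 + z) ** 1.5 * xi * B ** -1.5 * delta ** -1.5 * E_keV ** 0.5

def t_cool(E_keV, z, B, delta):
    """Synchrotron cooling timescale in seconds (E in keV)."""
    return 3.04e3 * (1 + z) ** 0.5 * B ** -1.5 * delta ** -0.5 * E_keV ** -0.5

def B_delta13_from_soft_lag(tau_soft_s, z, E_s, E_h):
    """Solve tau_soft = t_cool(E_s) - t_cool(E_h) for B * delta^(1/3) in Gauss."""
    return 209.91 * ((1 + z) / E_s) ** (1 / 3) * \
           ((1 - np.sqrt(E_s / E_h)) / tau_soft_s) ** (2 / 3)

# Assumed illustrative inputs: z = 0.031, E_s ~ sqrt(0.7*7) keV, E_h ~ sqrt(7*20) keV,
# and a soft lag of ~2 hours. These are placeholders, not the fitted lags.
z, E_s, E_h = 0.031, 2.2, 11.8
print(B_delta13_from_soft_lag(2 * 3600.0, z, E_s, E_h))  # B * delta^(1/3) in Gauss
print(t_cool(E_s, z, B=0.1, delta=10) / 3600.0)          # soft-band cooling time in hours
```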
ξ is the so-called `acceleration parameter' or `acceleration efficiency'. Substituting Equations (<ref>) and (<ref>) into Equations (<ref>) and (<ref>), one gets <cit.>: B δ^1/3 = 209.91 ×(1+z/E_s)^1/3[1-(E_s/E_h)^1/2/τ_soft]^2/3  G B δξ^-2/3 = 0.21× (1+z)E_h^1/3[1-(E_s/E_h)^1/2/τ_hard]^2/3  G Here, E_s and E_h are the logarithmic mean energy of the corresponding soft and hard energy bands, respectively. Assuming a range of δ∼ 10-20 we calculate the magnetic field in the emission region. In the case of Mrk 421, considering the soft lag (centroid lag) and the hard lag, we obtain the value of B to be in the range 0.05-0.12 G. Consequently, the acceleration parameter is ξ∼ 10^4. This value is consistent with that obtained by <cit.> using the observations of Mrk 421 in 1998. In the case of 1ES 1959+650 there is a non-zero time lag in 2017 which is a hard lag. From this time lag, we obtain B δξ^-2/3 = 1.0 × 10^-3. If B ∼ 0.1 G and δ = 10, the value of ξ is ∼ 10^4. From theoretical modeling of the time-resolved X-ray spectra during a flare in 2017, <cit.> estimated the acceleration timescale in this source to be 2 × 10^5 s. The acceleration parameter ξ is the ratio of the mean free path to the gyroradius (λ/r_g) in the acceleration region <cit.>. We obtain ξ≫1 although in other astrophysical plasma, such as, the interstellar medium or supernova remnants ξ≃ 1 <cit.>. A large value of the acceleration parameter implies that the mean free path of scattering is large compared to the gyroradius (r_g). For relativistic electrons r_g = γ m c^2/e B. Assuming B=0.1 G, r_g ∼ 10^4 γ cm. That implies the mean free path of the electrons with γ∼ 10^2 will be 10^10 cm for ξ = 10^4. If we assume the electrons are moving at a speed close to that of light, they will traverse ∼ 10^5 λ in an interval equal to the acceleration timescale ∼ 10^5 s. Therefore, they will undergo ∼ 10^5 scatterings in the acceleration region to be accelerated to the maximum energy (γ∼ 10^5) starting from γ∼ 10^2. That indicates the energy amplification in each scattering, assuming it is uniform at all values of initial energy, is 7 × 10^-3%. If we make the same assumptions as above the electrons will undergo 10^3 scatterings in order to reach γ = 10^5 from γ = 10^4. In that case, the energy amplification is 0.23% in each scattering. Due to the high time resolution light curves used in this work, we can put a meaningful constraint on the mechanism and efficiency of the acceleration of particles in the blazar jets. § DATA AVAILABILITY The data utilized in this work are available in ISRO's online data repository. Weblink:<https://astrobrowse.issdc.gov.in/astro_archive/archive/Home.jsp> The software for AstroSat data reduction are available in the website of the AstroSat Science Support Cell (ASSC). Weblink:<http://astrosat-ssc.iucaa.in> § ACKNOWLEDGEMENTS We thank the anonymous referee for suggestions that improved the manuscript. This work has used data from the Indian Space Science Data Centre (ISSDC) under the AstroSat mission of the Indian Space Research Organisation (ISRO). We acknowledge the POC teams of the SXT and LAXPC instruments for archiving data and providing the necessary software tools. SD and RC thank ISRO for support under the AstroSat archival data utilization program, and IUCAA for their hospitality and usage of their facilities during their stay at different times as part of the university associateship program. 
RC thanks Presidency University for support under the Faculty Research and Professional Development (FRPDF) Grant. RC also acknowledges financial support from BRNS through a project grant (sanction no: 57/14/10/2019-BRNS) and thanks the project coordinator Pratik Majumdar for support regarding the BRNS project. We thank Jayashree Roy, Sunil Chandra, Ritesh Ghosh, Savithri Ezhikode, Gulab Dewangan and Ranjeev Misra for useful discussions regarding AstroSat data analysis. § MULTI-EPOCH SPECTRA We carry out the spectral modeling of 1ES 1959+650 and Mrk 421 at all the epochs discussed in this work using (version 12.10), a tool of HEASoft. The command helps to group the source spectrum with the response and background files. We group all channels by a minimum of 50 counts per channel. For SXT we use the background from blank-sky observations, i.e., SkyBkg_comb_EL3p5_Cl_Rd16p0_v01.pha, and the response sxt_pc_mat_g0to12.rmf, which are provided by the SXT POC. We correct the given ARF sxt_pc_excl00_v04_20190608.arf with the corresponding selected source region of radius 16' by using SXTARFModule. In LAXPC we group with the response of LAXPC20 and the background extracted using the faint source background scheme. We fit the SXT and LAXPC spectra jointly in the energy range 0.7-20 keV using the broken power-law model with Galactic absorption correction by . A constant factor representing the cross-normalization between the two instruments is fixed to 1 for SXT and kept as a free parameter for LAXPC. We consider a 3% systematic error for the combined fitting <cit.> as suggested by the instrument team. The best-fit parameters are listed in Table <ref> and the spectra with the best-fit models are shown in Figures <ref> and <ref>.
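Returning to the cross-correlation analysis described in the main text, the sketch below outlines the discrete correlation function and the FR-RSS resampling used to build the lag distributions. It is schematic rather than the exact analysis code: the lag grid, the bin width, the number of realizations, the 16th/84th percentile error convention, and the restriction to the peak lag (the centroid variant additionally averages lags whose correlation exceeds 80% of the peak) are assumptions or simplifications.

```python
# Schematic DCF plus flux randomization / random subset selection (FR-RSS).
import numpy as np

def dcf(t1, f1, e1, t2, f2, e2, lags, dlag):
    """Discrete correlation function on an (assumed) uniform lag grid."""
    # assumes the intrinsic variance exceeds the mean measurement noise
    u1 = (f1 - f1.mean()) / np.sqrt(f1.var() - e1.mean() ** 2)
    u2 = (f2 - f2.mean()) / np.sqrt(f2.var() - e2.mean() ** 2)
    dt = t2[None, :] - t1[:, None]          # pairwise time differences
    udcf = u1[:, None] * u2[None, :]
    return np.array([udcf[np.abs(dt - L) < dlag / 2].mean() for L in lags])

def frrss_lags(t1, f1, e1, t2, f2, e2, lags, dlag, n_mc=1000, frac=0.633):
    """Peak-lag distribution from flux randomization + random subset selection."""
    rng = np.random.default_rng()
    peaks = []
    for _ in range(n_mc):
        i = np.sort(rng.choice(len(t1), int(frac * len(t1)), replace=False))
        j = np.sort(rng.choice(len(t2), int(frac * len(t2)), replace=False))
        g1 = f1[i] + rng.normal(0, e1[i].mean(), i.size)  # flux randomization
        g2 = f2[j] + rng.normal(0, e2[j].mean(), j.size)
        c = dcf(t1[i], g1, e1[i], t2[j], g2, e2[j], lags, dlag)
        peaks.append(lags[np.nanargmax(c)])
    return np.median(peaks), np.percentile(peaks, [16, 84])
```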
http://arxiv.org/abs/2307.05235v1
20230711130911
Quantitative Comparison of Nearest Neighbor Search Algorithms
[ "Hanitriniala Malalatiana Rakotondrasoa", "Martin Bucher", "Ilya Sinayskiy" ]
cs.DS
[ "cs.DS" ]
We compare the performance of three nearest neighbor search algorithms: the Orchard, ball tree, and VP-tree algorithms. These algorithms are commonly used for nearest-neighbor searches and are known for their efficiency in large datasets. We analyze the fraction of distances computed in relation to the size of the dataset and its dimension. For each algorithm we derive a fitting function for the efficiency as a function of set size and dimension. The article aims to provide a comprehensive analysis of the performance of these algorithms and help researchers and practitioners choose the best algorithm for their specific application. § INTRODUCTION Nearest neighbor searches arise in a wide variety of applications such as image retrieval, recommender systems, anomaly detection, and genomics (<cit.>, <cit.>, <cit.>, <cit.>). In a nearest neighbor search problem, one is given a finite set of points S that is a subset of a larger, usually infinite, set of points X on which a metric d( · ,  · ) is defined obeying the usual axioms for a metric space. Given a query point q∈ X, the nearest neighbor search recovers a point s∈ S (not necessarily unique) such that d(q,s) is minimized <cit.>. Let us call the result NN(q, S). In the case where the result is not unique, this may not be a function as we can allow NN to return any of the minima rather than an exhaustive list. Special techniques exist for the case where X is Euclidean and of low dimension. For X=E^1 the points can be ordered and well-known binary search techniques can be applied <cit.>. For E^2, special techniques exist that divide the plane into zones, and similar techniques can be generalized to E^k where k is small (<cit.>, <cit.>). However, in many cases, X cannot be embedded into a Euclidean space, and even when that may be possible, the structure of the set S is such that its dimension is fractional or much lower than that of the space into which it can be embedded, so that algorithms applicable to a general metric space with no special structure beyond the usual axioms are most useful. A brute force solution to the nearest neighbor problem is trivial.
If N=| S|, calculating d(q,s) for each s∈ S requires O(N) effort and is guaranteed to return the correct solution with probability one. The objective is to do better: to find the solution by making as few comparisons as possible. This often involves arranging the elements of S into a suitable pre-computed data structure, which may be exploited to find NN(q,S) with a number of comparisons significantly smaller than N. The property of the metric space that allows unnecessary comparisons to be avoided is the triangle inequality. Considering the trivial case when X=E^1 sets the lower bound on the possible efficiency of the algorithm to be O(log _2(N)), which results from a balanced binary tree. Here we explore by numerical experiment the efficiency of three nearest neighbor search algorithms applicable to the case where X is a metric space with no special structure: the Orchard algorithm, the ball tree algorithm, and the VP (vantage point)-tree algorithm. We find novel fitting functions for the efficiency of each algorithm compared to the brute force algorithm described above. § ALGORITHMS §.§ Orchard Algorithm The Orchard algorithm, invented by Michael T. Orchard <cit.>, exploits the triangle inequality to reduce the number of distances calculated in searching for the nearest neighbor. The Orchard algorithm is simple and fast, even in fairly high dimensions (dim X< 64, for example) <cit.>. The Orchard algorithm relies on a pre-computed array of size 𝒪(N^2) where S_pointer[i,j] (where i=0 … (N-1) and j=0 … (N-2)) points to the j-th closest point to S_i. In other words, S_pointer[i,:  ] includes all the (N-1) points not equal to S_i arranged according to increasing distance from S_i. Setting up this table requires 𝒪(N^2) operations and 𝒪(N^2) storage space. <ref> illustrates the idea behind the algorithm. The idea is to eliminate all candidates outside the circle centered on the initial candidate. Given a query (shown in red), the search begins by selecting a point at random as the initial candidate for the nearest neighbor. Knowing the distance r between the query and the current nearest candidate allows one to skip computing the distance between the query and the other points outside the circle with radius 2r centered on that candidate. Suppose we have access to the distance d(q, S_i) between a query point and a selected candidate, as well as the distances d(S_j, S_i) between the selected candidate and other candidates. In the search process, a pruning opportunity arises when d(S_j, S_i) exceeds twice the distance d(q, S_i). In such cases, there is no need to compute the distance d(q, S_j) between the query point and the other candidate S_j. This property follows from the triangle inequality. The search algorithm begins by randomly selecting a point S_i from a given set of points S. Then it calculates the distance d(S_i, q) between this selected point and a query point q ∈ X. Next, the algorithm proceeds to iterate through each point S_j in S_pointer[i,:]. If d(S_j, S_i) < 2d(q, S_i), the algorithm calculates the distance between the current point S_j and the query point q. If d(S_j, q) < d(S_i, q), the algorithm updates the index from i to j, indicating that the current point S_j is the better candidate. The algorithm continues this iterative process within the updated list S_pointer[i,:] until all remaining candidates have either been examined or excluded.
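The search loop just described can be written compactly; the sketch below is a hedged rendering, assuming the pre-computed neighbor lists and their distances are available as S_pointer-style arrays, and it counts distance evaluations only because that count is the efficiency metric used later.

```python
# Hedged sketch of the Orchard search. `points` holds S, `neighbors[i]` lists
# indices j sorted by increasing d(S_i, S_j), and `neighbor_dist[i]` holds the
# matching pre-computed distances d(S_i, S_j).
def orchard_search(q, points, neighbors, neighbor_dist, dist, start=0):
    i = start                          # initial candidate (random in the paper)
    d_best = dist(q, points[i])
    n_dist = 1                         # count of distance evaluations
    improved = True
    while improved:
        improved = False
        for rank, j in enumerate(neighbors[i]):
            # Triangle inequality: if d(S_j, S_i) >= 2 d(q, S_i), S_j cannot be
            # closer; since the list is sorted, no later entry can be either.
            if neighbor_dist[i][rank] >= 2.0 * d_best:
                break
            d_j = dist(q, points[j])
            n_dist += 1
            if d_j < d_best:
                i, d_best = j, d_j     # jump to the new candidate's list
                improved = True
                break
    return i, d_best, n_dist
```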
§.§ Ball Tree Algorithm The ball tree algorithm organizes the points of S into a balanced binary tree, so that each element s of S is assigned to a unique node of the tree <cit.>. Each node 𝒩 additionally includes a real number μ≥ 0 (i.e., the ball radius) such that μ =max_l∈ℒd(l,s) and min_r∈ℛd(r,s)≥μ as well as pointers (possibly NULL) to the left and right child nodes. Here ℒ is the set of all descendant points of the left branch of 𝒩, and ℛ is the set of all the descendant points of the right branch of 𝒩, and the cut is chosen such that |ℒ| =|ℛ| or |ℒ| =|ℛ| +1. Leaf nodes have no children, and node having only a left child are allowed. The process of setting up the ball tree data structure may be described recursively. A node 𝒩 is generated from a set of remaining points S' by selecting and removing a random point s. Define S̅'=S'-{ s } . S̅' is partitioned into two disjoint subsets ℒ and ℛ as described above. The left child node 𝒩_ℒ is generated from ℒ by applying the same procedure described above, and similarly the right child node 𝒩_ℛ is generated from ℛ in the same way. The storage space of the ball tree structure is O(N), and the computation required to set up this structure is O(Nlog N). §.§ Vantage Point Tree In the ball tree algorithm, in the process of setting up the ball tree data structure, after S' was partitioned into two disjoint subsets ℒ and ℛ, the points l and r where randomly drawn from ℒ and ℛ to continue the tree downward. While these random choices provide a fast way to set up the ball tree structure, these choices are not necessarily the most efficient for carrying out a subsequent query search using the ball tree structure. The vantage point algorithm rather than making a random choice attempts to optimize the choice of l and r to improve the efficiency of the subsequent query searches. <cit.> has shown with an example in a [0,1] bounded metric space that points near the space's corners can provide the best vantage point. In the ball tree algorithm, a vantage point v∈ S' was chosen at random. But some vantage point choices are better than others. A good vantage point has few points near the boundary at r=μ (v), which is the middle value of distances from v to each point in the random subsample S'-{v}, dividing the set into equal halves. The vantage point is optimized so that the second moment ⟨( d(p,v)-μ (v) ) ^2 ⟩ _p∈ S'-{ v } is maximized. This choice minimizes the probability of encountering the situation illustrated in Fig. <ref>(c) where the search path bifurcates, leading to more distance evaluations. In practice only a random subset of candidate vantage points are explored, and the second moment above is approximated using only a random subsample of S'-{ v } . A vantage points needs to be good but not necessarily the very best. The vantage point method leads to a tree in which less branches need to be explored, leading to behavior more closely ressembling O(Nlog (N)). §.§ Nearest Neighbor Query Using Ball Tree Structure As the query algorithm is common to both the ball tree and VP algorithms, we discuss here the query search based on this pre-computed tree data structure. Both these algorithms attempt to apply divide-and-conquer in analogy to a binary search tree for totally ordered data. This analogy however is imperfect. In the binary search, one descends the tree only once without ever having to backtrack, whereas here in general one must backtrack at least sometimes, and in the worst-case scenario the entire tree must be traversed. 
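Before continuing with the query procedure, the construction just described can be sketched as follows; replacing select_vantage with a uniformly random choice recovers the plain ball tree, while the second-moment heuristic gives the VP-tree. The candidate and subsample sizes are assumptions.

```python
# Hedged sketch of ball-tree / VP-tree construction.
import random
import statistics

class Node:
    def __init__(self, point, mu, left=None, right=None):
        self.point, self.mu, self.left, self.right = point, mu, left, right

def select_vantage(points, dist, n_cand=8, n_sample=32):
    """Pick the candidate whose distances to a random subsample have the
    largest second moment about their median (the VP heuristic)."""
    def spread(v):
        sample = random.sample(points, min(n_sample, len(points)))
        ds = [dist(v, p) for p in sample if p is not v]
        med = statistics.median(ds)
        return sum((d - med) ** 2 for d in ds) / len(ds)
    cands = random.sample(points, min(n_cand, len(points)))
    return max(cands, key=spread)

def build(points, dist, vantage=select_vantage):
    if not points:
        return None
    if len(points) == 1:
        return Node(points[0], 0.0)
    v = vantage(points, dist)
    rest = sorted((p for p in points if p is not v), key=lambda p: dist(v, p))
    mid = (len(rest) + 1) // 2          # |L| = |R| or |L| = |R| + 1
    mu = dist(v, rest[mid - 1])         # ball radius: max distance within L
    return Node(v, mu, build(rest[:mid], dist, vantage),
                build(rest[mid:], dist, vantage))
```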
The ball tree algorithm provides the capability to find not only the single nearest neighbor of a given query point q but also all neighbors within a specified radius ϵ of q defined at the start of the search. If we only require the single nearest neighbor, we can instead update the value of ϵ with the smallest distance encountered so far during the search. This strategy continually shrinks the search space and focuses the search on the closest point to q. When searching for the closest point to a query, we start at the root node and work our way down the tree, comparing the distance between the query and the ball of each node. If the distance between the query and a ball is greater than the distance between the query and the nearest point already found, we can ignore that branch of the tree altogether, as it cannot contain the nearest point. Given a query q and a query radius ϵ, one wishes to find all objects o with distance less than ϵ, i.e. d(q,o)<ϵ. When visiting a node S_i with a ball radius r, one must decide whether to visit the left child, the right child, or both. If d(S_i, q) > r + ϵ, the search in the left child may be excluded, and if d(S_i, q) < r - ϵ, the search in the right child may be excluded. However, we cannot avoid searching both the left and the right child if r-ϵ<d(S_i,q)<r+ϵ <cit.>. Figure <ref> shows the three cases. §.§ Worst Case Scenario The triangle inequality alone offers a powerful tool to avoid unnecessary comparisons, but it is not in all cases possible to improve on the brute force algorithm. Suppose S is such that d(S_i,S_j)=D for all i≠ j with i,j=0 … (N-1), and moreover the query point q is such that d(S_i,q)=D for all i as well. Such a set can be embedded in N-dimensional Euclidean space. In this case, none of the algorithms above improves on the brute force algorithm. Perturbing each of the points by a small amount, so that the distances fall in the interval [D-ϵ, D+ϵ] with ϵ sufficiently small, is less contrived but does not change the situation. The worst case scenario just described is not as contrived or exceptional as might seem at first sight. Let us take N points distributed according to a Gaussian in D-dimensional Euclidean space, p(𝐱)=(D/2π)^D/2exp[-1/2D 𝐱^T𝐱]. As D →∞, all the points very nearly lie on the unit sphere. The fluctuation in the radius is of order 𝒪(1/√(D)), and the distances between pairs of points are likewise nearly all equal. This is known as the “curse of dimensionality.” § BENCHMARK DESCRIPTION In order to compare and benchmark nearest neighbor algorithms, we define a two-parameter family of test cases, which we hope is representative of these algorithms in real world applications. The two parameters are the number of points N and the dimension D. The N points are independently drawn from a Gaussian distribution p(𝐱) ∼exp[-1/2𝐱^T𝐱] where 𝐱∈ℝ^D. Distances are computed using the Euclidean metric. One may consider generalizing by replacing 𝐱^T𝐱 with 𝐱^T𝐂^-1𝐱 where the covariance matrix 𝐂 is positive definite. The overall scale of 𝐂 is irrelevant: 𝐂↦λ𝐂, λ >0 does not produce a set of points with different behavior. However, a large condition number of 𝐂, K=λ_max/λ_min≫ 1, can produce different behavior, for example when the effective dimension depends on N. The Gaussian form, however, is probably not so material, whereas the assumption of lack of clustering, i.e., Poissonian statistics, could be important.
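The pruning rules described at the start of this section can be sketched as a short recursive search. This is an illustrative sketch rather than reference code: the namedtuple mirrors the node structure from the construction sketch above, and the effective query radius ϵ is taken to be the best distance found so far (the nearest-neighbor mode described in the text).

```python
from collections import namedtuple

Node = namedtuple("Node", "point mu left right")   # point, ball radius, children

def nearest(q, node, dist, best=None):
    """Branch-and-bound nearest-neighbor search; best holds (point, distance)."""
    if node is None:
        return best
    d = dist(q, node.point)
    if best is None or d < best[1]:
        best = (node.point, d)
    # visit the more promising child first, then decide whether the other
    # child can be pruned using d > mu + eps (left) or d < mu - eps (right)
    children = (node.left, node.right) if d < node.mu else (node.right, node.left)
    best = nearest(q, children[0], dist, best)
    eps = best[1]                                  # radius may have shrunk
    other = children[1]
    if other is node.left and d <= node.mu + eps:
        best = nearest(q, other, dist, best)
    elif other is node.right and d >= node.mu - eps:
        best = nearest(q, other, dist, best)
    return best
```

In the worst case described next, the pruning tests never succeed and the recursion visits every node, reproducing the brute force cost.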
One can imagine distributions of points with hierarchical clustering, where the behavior might differ substantially. We did not investigate these issues because we could not find a family of test cases that would be representative of all the types of behavior that could be encountered. § QUANTITATIVE COMPARISONS The metric used to compare the three algorithms among themselves and with the brute force algorithm is f, the ratio of the number of distance comparisons made to the number of comparisons required by the brute force algorithm, which is equal to N. This fraction is a random variable, as the test problem defined above is the outcome of a random process. Consequently, we compute an average over many realizations. <ref> shows f as a function of the number of points for d=3. With this metric the Orchard algorithm outperforms the ball tree, although the increased storage requirement and pre-computation for the Orchard algorithm should be considered as well. The VP-tree algorithm, however, outperforms both the ball tree and Orchard algorithms. In all cases, as the number of points increases, the mean fraction of distances computed decreases. Additionally, vantage point selection proves to enhance search performance compared to random selection. On the log-log plot one observes linear behavior, indicating that a power law for f gives a good fit. <ref> explores datasets of various sizes and dimensions, with | S| = 1000, 3000, and 9000 and dimension d from 2 to 32. These results show that the Orchard algorithm computes fewer distances than both the ball tree and VP-tree algorithms. However, despite this advantage, it is the VP-tree algorithm that emerges as the more favorable choice, primarily due to its minimal memory requirements. The fraction of distances evaluated is accurately approximated by the sigmoid function f(D,N)= 1/(1+ exp [-β (D-αlog(N))]). For each algorithm the parameters α and β are fitted to the data using generalized least squares. The best fit parameters are: α=1.75 and β=0.3 for the Orchard algorithm; α=0.96 and β=0.6 for the ball tree algorithm; and α=1.25 and β=0.65 for the VP-tree algorithm, as shown in <ref>. § SUMMARY The empirical fitting function found here provides a good predictor of the performance of the three algorithms studied. The Orchard algorithm performs well for smaller data sets whereas the VP-tree algorithm is more efficient for larger data sets. The ball tree algorithm is the least efficient of the three algorithms. For a real application, it may not always be obvious what value to use for the dimension D, as the effective dimension may be less than the dimension of the ambient space or it may even be fractional. Nevertheless, we believe that the two-parameter benchmark used here is representative of a broad range of applications. § ACKNOWLEDGEMENTS HMR thanks NITheCS for support during her Master's studies at UKZN. The three authors thank the NITheCS program on Bioinformatics, Genomics, and Advanced Medicine for its support.
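As a small usage note on the sigmoid fit from the Quantitative Comparisons section above, the helper below evaluates the fitted f(D,N) with the best-fit parameters quoted in the text. The base of the logarithm is not stated in the text, so natural log is assumed here; the dictionary keys and function name are illustrative.

```python
import math

# best-fit (alpha, beta) values quoted in the text
FIT_PARAMS = {
    "orchard":   (1.75, 0.30),
    "ball_tree": (0.96, 0.60),
    "vp_tree":   (1.25, 0.65),
}

def fraction_of_distances(D, N, algorithm):
    """Predicted fraction f(D, N) of the N brute-force distance
    evaluations performed by the given algorithm (natural log assumed)."""
    alpha, beta = FIT_PARAMS[algorithm]
    return 1.0 / (1.0 + math.exp(-beta * (D - alpha * math.log(N))))

# example: predicted cost of a VP-tree query on 9000 points in dimension 8
print(fraction_of_distances(8, 9000, "vp_tree"))
```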
http://arxiv.org/abs/2307.04974v1
20230711022924
Determination of matter radius and neutron-skin thickness of $^{60,62,64}$Ni from reaction cross section of proton scattering on $^{60,62,64}$Ni targets
[ "Shingo Tagami", "Tomotsugu Wakasa", "Masanobu Yahiro" ]
nucl-th
[ "nucl-th", "nucl-ex" ]
http://arxiv.org/abs/2307.06040v1
20230712093516
Rhythm Modeling for Voice Conversion
[ "Benjamin van Niekerk", "Marc-André Carbonneau", "Herman Kamper" ]
eess.AS
[ "eess.AS", "cs.LG", "cs.SD" ]
Rhythm Modeling for Voice Conversion Benjamin van Niekerk, Marc-André Carbonneau, Herman Kamper B. van Niekerk and H. Kamper are with the Department of Electrical and Electronic Engineering, Stellenbosch University, South Africa (e-mails: [email protected] and [email protected]). M.-A. Carbonneau is with Ubisoft La Forge, Montréal (e-mail: [email protected]). August 12, 2023 Voice conversion aims to transform source speech into a different target voice. However, typical voice conversion systems do not account for rhythm, which is an important factor in the perception of speaker identity. To bridge this gap, we introduce Urhythmic—an unsupervised method for rhythm conversion that does not require parallel data or text transcriptions. Using self-supervised representations, we first divide source audio into segments approximating sonorants, obstruents, and silences. Then we model rhythm by estimating speaking rate or the duration distribution of each segment type. Finally, we match the target speaking rate or rhythm by time-stretching the speech segments. Experiments show that Urhythmic outperforms existing unsupervised methods in terms of quality and prosody. voice conversion, rhythm conversion, speaking rate estimation § INTRODUCTION From a slow, purposeful oration to rapid, excitable chatter, rhythm conveys emotion and intent in speech. Rhythm and speaking rate are also important cues for identifying different speakers <cit.>. Despite its role in communication, typical voice conversion systems do not model the target speaker’s rhythm, reproducing the prosody of the source speech instead. Consider the pair of utterances shown in Figure <ref>. Both contain the word but differ in rhythm. Most noticeable is the speaking rate—the source utterance is spoken more slowly than the target. Besides speaking rate, features like prolonged vowels or brief pauses characterize individual rhythm and are influenced by accent <cit.>, gender <cit.>, or even historical period <cit.>. Our goal is to better convert speaker identity by modeling the natural rhythm of the target speaker. Some recent work explores rhythm conversion using sequence-to-sequence models <cit.> or forced alignment <cit.>. However, training these systems requires parallel speech or text transcriptions, which are costly and time-consuming to collect. Unsupervised methods such as AutoPST <cit.>, UnsupSeg <cit.>, and DISSC <cit.> lift this restriction by modeling rhythm without annotations or parallel data. However, there is still a gap in quality and prosody compared to natural speech. For example, AutoPST and DISSC discard some content information, resulting in poor intelligibility. UnsupSeg only models rhythm globally—ignoring fine-grained details. To tackle these problems, we propose Urhythmic[Code and checkpoints: <https://github.com/bshall/urhythmic>][Audio samples: <https://ubisoft-laforge.github.io/speech/urhythmic/>], an unsupervised approach to rhythm conversion and control. Building on self-supervised speech representations <cit.>, Urhythmic models both global and fine-grained characteristics of rhythm.
As the foundation of Urhythmic, we divide source audio into discovered, variable-duration segments approximating sonorants (vowels, approximants, and nasals), obstruents (fricatives and stops), and silences. Based on this segmentation, we estimate speaking rate by counting the number of sonorants per second of speech. For fine-grained modeling, we approximate the duration distribution of each segment type. Finally, we time-stretch the entire utterance or individual segments to match the target speaking rate or rhythm. Our main contributions are: * We propose Urhythmic, a voice and rhythm conversion system that does not require text or parallel data. * We develop global and fine-grained methods based on speaking rate estimation or segment duration modeling. * We show that both methods outperform existing approaches. However, fine-grained modeling more effectively matches the target speaker's pattern of pauses and silences. § PROPOSED METHOD A simple method for rhythm modeling is to use time-aligned transcriptions to estimate speaking rate. Alternatively, we can capture finer-grained characteristics by modeling the duration distribution of individual phones or syllables. Then, we can alter rhythm by time-stretching a source utterance to match the target speaking rate or duration distributions. However, this approach requires text transcriptions and forced alignment, which are not available in many voice conversion applications. To remove the need for transcriptions, we segment speech into sonorants, obstruents, and silences without supervision. We model the duration of these segments to characterize rhythm. Fig. <ref> shows an overview of our approach. First, the content encoder translates input audio into speech units (Sec. <ref>). Next, the segmentation and clustering block groups similar units into short segments. The segments are then combined into coarser groups corresponding to sonorants, obstruents, and silences (Sec. <ref>). The rhythm modeling block estimates speaking rate or models the duration distribution of each group (Sec. <ref>). The time-stretching block down/up-samples the speech units to match the target rhythm or speaking rate (Sec. <ref>). Finally, the vocoder converts the speech units into an audio waveform. §.§ Content Encoding The content encoder aims to extract speech representations that capture linguistic content but discard speaker-specific details. By replacing the speaker information, we can convert source speech to a target voice. To achieve this, we encode source audio into a sequence of soft speech units ⟨𝐬_1, …, 𝐬_T ⟩. Soft units were proposed as an alternative to discrete speech representations for voice conversion <cit.>. While discretization acts as a bottleneck to remove speaker information <cit.>, it also discards some linguistic content—increasing mispronunciations in converted speech. To avoid this problem, <cit.> trains a soft content encoder to predict a distribution over discrete units. By modeling uncertainty, the soft units retain more content information and improve voice conversion. Concretely, each soft unit parameterizes a distribution over a dictionary of discrete speech units: p(i |𝐬_t) = exp(sim(𝐬_t, 𝐞_i) / τ)/∑_k=1^K exp(sim(𝐬_t, 𝐞_k) / τ), where i is the index of the i^th discrete unit, 𝐞_i is a corresponding embedding vector, and sim(·, ·) computes the cosine similarity between the soft and discrete units. §.§ Segmentation and Clustering The segmentation and clustering block groups speech into variable-duration segments. 
First, we partition the soft units into short segments based on <cit.>. We frame segmentation as an optimization problem, maximizing the similarity between the units within a segment. Next, using hierarchical clustering, we merge the segments into larger groups approximating sonorants (vowels, approximants, and nasals), obstruents (fricatives and stops), and silences. In the first step, we partition the soft units into a sequence of contiguous segments ⟨ g_1, …, g_N ⟩. Each segment g_n = (a_n, b_n, i_n) is defined by a start index a_n, an end index b_n, and a single representative discrete unit i_n. We assess the quality of a given segmentation by scoring it according to the soft unit predictor: ℰ(𝐬_1:T, g_1:N) = ∑_g_n ∈ g_1:N∑_t = a_n^b_nlog p(i_n |𝐬_t) + γ (b_n - a_n). Here, a higher score corresponds to a better segmentation. The last term in the summation is a regularizer encouraging longer segments, with γ controlling the importance of the term. Note that without the regularizer, the optimal segmentation places each soft unit in its own segment. We can find the best segmentation: g^⋆_T = _g_1:Nℰ(𝐬_1:T, g_1:N), by applying dynamic programming to the recurrence: ℰ(𝐬_1:T, g^⋆_T) = min_t<T( ℰ(𝐬_1:t, g^⋆_t) + min_gℰ(𝐬_t+1:T, ⟨ g ⟩) ). Fig. <ref> row (d) shows an example segmentation partitioning the utterance into short sub-phone units. In preliminary experiments, we found that these shorter segments are not ideal for rhythm modeling. So to combine segments into larger groups, we hierarchically cluster the dictionary of discrete units and merge adjacent segments belonging to the same cluster. Fig. <ref> visualizes the dendrogram constructed through agglomerative clustering. We label each segment according to the three main branches of the dendrogram. Then we join adjacent segments with the same label. Fig. <ref> row (c) shows the merged segments. Referring to the phonetic transcription in row (b), we see that the larger segments approximate sonorants, obstruents, and silences. To validate this observation, we color each discrete unit in Fig. <ref> by the most frequently overlapping sound type (vowel, approximant, nasal, fricative, stop, or silence). The dendrogram clearly clusters the units by sound type, with the three main branches representing sonorants, obstruents, and silences. Note that phonetic transcriptions are only used for analysis and are not required by our approach. §.§ Rhythm Modeling Building on the segmentation block, we propose two methods for rhythm modeling. The first method estimates the global speaking rate. The second models the duration of individual segments, providing a finer-grained characterization of rhythm. Speaking rate is typically measured in syllables per second <cit.>. With time-aligned transcriptions, we can calculate the syllable rate by simply counting the number of syllables and dividing by the total duration. Without transcriptions, we count sonorant segments as an approximation. For example, row (c) of Fig. <ref> has four sonorant segments (labeled 1) over a duration of 0.94 seconds, giving an estimated rate of 4.26 segments per second. Since sonorants generally correspond to syllable nuclei <cit.>, this approximation should correlate with the syllable rate. We measure the correlation in Sec. <ref>. To estimate the speaking rate, we need to identify which clusters correspond to sonorants, obstruents, and silences. We classify the clusters based on energy and voicing features. 
First, we mark silent intervals using an energy-based voice activity detector. Then for each cluster, we calculate the percentage overlap with the marked intervals. The cluster with the highest overlap is labeled as silence. To distinguish between sonorants and obstruents, we apply a similar method to voicing flags extracted by a pitch detector. Specifically, we label the cluster that most frequently overlaps with voiced speech as sonorant. Our second approach aims to model finer-grained rhythm information. Instead of estimating the global speaking rate, we model the duration distribution of each cluster to capture variations in pauses, vowel length, etc. Historically, duration modeling has been an important component in text-to-speech <cit.> and speech recognition systems <cit.>. In particular, <cit.> and <cit.> apply parametric models to phone durations, showing that the gamma distribution provides a good fit. Following this work, we model the duration of each cluster as an independent gamma distribution. We use maximum likelihood estimation to fit the shape and rate parameters. §.§ Time-Stretching To adjust rhythm, we up/down-sample the extracted soft units using linear interpolation. By stretching the entire utterance, we can modify the overall speaking rate. Alternatively, we can stretch individual segments for finer-grained control. Based on global or fine-grained time-stretching, we propose two methods for rhythm conversion. For the global method, we first estimate the speaking rate of the source and target speakers. Then we stretch the source utterance according to the ratio between the rates. In the fine-grained approach, we stretch individual segments to match the target duration distributions. Specifically, we use inverse transform sampling to map between the source and target distributions: * For each cluster c ∈{0,1,2}, we find the cumulative distribution functions for the source and target speakers: F_src, c and F_tgt, c. * Given a segment of source speech with duration x>0 belonging to cluster c, we compute u = F_src, c(x). * We find the corresponding target duration y = F_tgt, c^-1(u) and stretch the segment according to the ratio y/x. § EXPERIMENTAL SETUP Focusing on any-to-one conversion, we conduct three experiments to evaluate Urhythmic. The first experiment investigates the correlation between the syllable rate and estimated speaking rates. The second experiment compares the rhythm of the converted and target speech. The last experiment assesses naturalness, intelligibility and speaker similarity. We compare Urhythmic to three state-of-the-art unsupervised rhythm conversion systems: AutoPST <cit.>, UnsupSeg <cit.>, and DISSC <cit.>. We use the official pretrained models for each baseline. Since UnsupSeg did not release the voice conversion component of their model, we apply their rhythm conversion method to unmodified outputs from Urhythmic. We evaluate speaking rate estimation on LibriSpeech <cit.>. For the rhythm conversion experiment, we pick the three fastest and three slowest speakers from VCTK <cit.>. To avoid conflating accent and speaker identity, we limit the selection to a single region (Southern England). For the subjective evaluations, we use the first 24 parallel utterances from each speaker. For segmentation, we set the regularizer weight to γ=2. We use HiFi-GAN <cit.> as the vocoder and adapt the generator to produce 16 kHz audio directly from soft speech units. We pretrain the vocoder on LJSpeech <cit.> for 3M steps. 
For each target speaker, we finetune a separate model for 50k steps. §.§ Speaking Rate Estimation In the first experiment, we measure the correlation between the true syllable rate and estimated speaking rates. Using forced alignments <cit.>, we calculate the average syllable rate for each speaker in the LibriSpeech dev and test split. We remove utterances containing out-of-vocabulary words and filter silences from the alignments. For DISSC, we estimate speaking rate by counting the number of deduplicated discrete units per second. For AutoPST, we use the self-expressive autoencoder to define segment boundaries at points where the similarity between neighboring frames drops below a threshold. We count the number of segments per second to estimate speaking rate. Table <ref> reports the Pearson correlation coefficients r with 95% confidence intervals. Urhythmic outperforms the baselines, showing a stronger correlation with the syllable rate. This indicates that the speech segments discovered by Urhythmic allow for more accurate modeling of speaking rate. §.§ Rhythm Conversion The second experiment compares the rhythm of the converted and target speech. We align text transcriptions to the set of parallel utterances from VCTK and compute three metrics: phone length error (PLE), word length error (WLE), and total length error (TLE) <cit.>. These metrics measure duration differences at distinct scales, with lower error rates indicating a closer match to the target rhythm. To avoid requiring parallel data for evaluation, we propose additional metrics comparing phone duration distributions between the converted and target speech. We use forced alignments to measure phone durations, group the data by sound type, and calculate the Wasserstein distance between the empirical distributions. Smaller distances represent more similar duration distributions, implying better rhythm conversion. Table <ref> reports TLE, WLE, and PLE for Urhythmic and the baseline systems. As a reference, we also include voice converted speech from Urhythmic without rhythm modification. Urhythmic improves all three metrics. Our global and fine-grained methods give comparable results at word and phone scales; however, fine-grained conversion substantially reduces TLE. The improvement is explained by better silence modeling since TLE includes silences (unlike the other metrics). We can clearly see this distinction in Table <ref>, which reports Wasserstein distances broken down by sound type. While our global and fine-grained methods perform similarly across the different sound types, fine-grained modeling substantially improves the conversion of pauses and silences. §.§ Naturalness, Intelligibility, and Speaker Similarity Next, we evaluate the naturalness, intelligibility, and speaker similarity of the converted speech. To assess intelligibility, we measure word error rate (WER) using the Whisper-Small speech recognizer <cit.>. Lower WER indicates better intelligibility since the content of the source speech remains recognizable after conversion. Following <cit.>, we evaluate speaker similarity using a trained speaker-verification system. An equal error rate (EER) of 50% indicates high speaker similarity since the verification system cannot distinguish between converted speech and genuine examples from the target speaker. Finally, we conduct subjective evaluations to assess naturalness and speaker similarity. Using Prolific <cit.>, we recruited 84 English-speaking raters for each evaluation. 
We followed the P.808 <cit.> recommendation and recorded 576 ratings (96 per method). For naturalness, we report a mean opinion score (MOS). For speaker similarity, we follow the same/different protocol from the Voice Conversion Challenges <cit.>. We pair each converted example with a reference from the target speaker and ask evaluators to rate their similarity on a four-point scale. We aggregate the ratings into a mean similarity score (SIM). Table <ref> reports results for intelligibility (WER), naturalness (MOS), and speaker similarity (EER and SIM). Urhythmic outperforms the baselines across all four metrics. Our global and fine-grained methods perform similarly with both improving subjective similarity scores compared to the no-modification reference. We suspect this is because the evaluators account for differences in rhythm and prosody in their assessments of speaker similarity. Finally, Urhythmic achieves comparable WER and MOS to the no-modification reference, demonstrating that our approach to rhythm conversion has minimal impact on intelligibility and naturalness. § CONCLUSION We proposed Urhythmic, an unsupervised approach to rhythm and voice conversion. We presented methods for modeling global and fine-grained characteristics of rhythm. The global method estimates overall speaking rate, while the fine-grained method models the duration distribution of discovered speech units. Results show that the estimated speaking rate correlates well with the syllable rate, and that fine-grained conversion accurately models the target speaker's rhythm. Finally, Urhythmic outperforms other unsupervised rhythm conversion systems in subjective and objective evaluations. IEEEtran
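To make the fine-grained rhythm modeling and time-stretching described in the Proposed Method section concrete, here is a minimal Python sketch of the gamma duration fits and the inverse transform mapping between source and target distributions. This is an illustration under stated assumptions, not the released Urhythmic code: the use of scipy, the function names, and the toy durations are all assumptions, and scipy parametrizes the gamma distribution by shape and scale (the reciprocal of the rate mentioned in the text).

```python
import numpy as np
from scipy.stats import gamma

def fit_duration_model(durations_by_cluster):
    """Fit an independent gamma distribution to the segment durations of
    each cluster (sonorant / obstruent / silence)."""
    model = {}
    for cluster, durations in durations_by_cluster.items():
        shape, _, scale = gamma.fit(durations, floc=0)  # ML fit, location fixed at 0
        model[cluster] = (shape, scale)
    return model

def stretch_ratio(duration, cluster, src_model, tgt_model):
    """Map a source segment duration to the target speaker's distribution by
    inverse transform sampling and return the time-stretch ratio y/x."""
    s_shape, s_scale = src_model[cluster]
    t_shape, t_scale = tgt_model[cluster]
    u = gamma.cdf(duration, s_shape, loc=0, scale=s_scale)
    y = gamma.ppf(u, t_shape, loc=0, scale=t_scale)
    return y / duration

# toy usage with made-up segment durations (seconds) for clusters 0, 1, 2
rng = np.random.default_rng(0)
src = {c: rng.gamma(2.0, 0.05, size=200) for c in (0, 1, 2)}
tgt = {c: rng.gamma(2.0, 0.03, size=200) for c in (0, 1, 2)}
src_model, tgt_model = fit_duration_model(src), fit_duration_model(tgt)
print(stretch_ratio(0.12, 0, src_model, tgt_model))  # ratio applied to a 120 ms segment
```

In the global variant, a single ratio of estimated speaking rates would be applied to the whole utterance instead of a per-segment ratio.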
http://arxiv.org/abs/2307.04588v1
20230710143024
Extremal numbers and Sidorenko's conjecture
[ "David Conlon", "Joonkyung Lee", "Alexander Sidorenko" ]
math.CO
[ "math.CO" ]
Sidorenko's conjecture states that, for all bipartite graphs H, quasirandom graphs contain asymptotically the minimum number of copies of H taken over all graphs with the same order and edge density. While still open for graphs, the analogous statement is known to be false for hypergraphs. We show that there is some advantage in this, in that if Sidorenko's conjecture does not hold for a particular r-partite r-uniform hypergraph H, then it is possible to improve the standard lower bound, coming from the probabilistic deletion method, for its extremal number ex(n,H), the maximum number of edges in an n-vertex H-free r-uniform hypergraph. With this application in mind, we find a range of new counterexamples to the conjecture for hypergraphs, including all linear hypergraphs containing a loose triangle and all 3-partite 3-uniform tight cycles. § INTRODUCTION An r-graphon W:[0,1]^r → [0,1] is an r-variable symmetric measurable function.[We note that this is different from the usual definition of hypergraphons (see, for instance, <cit.>), where 2^r-2 variables are used to model limits of r-uniform hypergraphs. Such an approach is required to make the space complete, which is not necessary for our purposes.] Given an r-uniform hypergraph (or r-graph) H, the homomorphism density t_H(W) of H in W is t_H(W) := ∫∏_u_1⋯ u_r∈ E(H)W(x_u_1,x_u_2,…,x_u_r) dμ^ v(H). An r-graph H is said to be Sidorenko if t_H(W) ≥ t_K_r(W)^ e(H) = (∫ W dμ^r)^ e(H) for all r-graphons W:[0,1]^r → [0,1], where K_r denotes the r-graph with one edge. In graph-theoretic terms, an r-graph H is Sidorenko if quasirandom r-graphs contain asymptotically the minimum number of copies of H taken over all r-graphs with the same order and edge density. A celebrated conjecture of Sidorenko <cit.> (see also the closely related conjecture of Erdős and Simonovits <cit.>) says that a graph H is Sidorenko if and only if it is bipartite.
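As a purely illustrative aside (not from the paper), the homomorphism density just defined can be explored numerically by Monte Carlo integration. In the sketch below, an r-graph H is given as a list of r-edges on vertices 0,…,v−1 and an r-graphon W as a symmetric function on [0,1]^r; the function name, the example graphon, and the sample size are all assumptions.

```python
import numpy as np

def t_H(W, edges, num_vertices, samples=200_000, seed=0):
    """Monte Carlo estimate of t_H(W) = E[ prod_{e in E(H)} W(x_e) ]
    with x_1,...,x_v drawn independently and uniformly from [0,1]."""
    rng = np.random.default_rng(seed)
    x = rng.random((samples, num_vertices))
    prod = np.ones(samples)
    for e in edges:
        prod *= W(*(x[:, v] for v in e))
    return prod.mean()

# example: the 3-uniform loose triangle and a simple symmetric 3-graphon
loose_triangle = [(0, 1, 2), (2, 3, 4), (4, 5, 0)]
W = lambda a, b, c: 0.5 + 0.1 * np.cos(2 * np.pi * (a + b + c))  # symmetric, values in [0.4, 0.6]
t_edge = t_H(W, [(0, 1, 2)], 3)      # t_{K_3}(W), i.e. the integral of W
t_tri = t_H(W, loose_triangle, 6)
print(t_tri, t_edge ** 3)            # compare t_H(W) with t_{K_r}(W)^{e(H)}
```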
The necessity of the bipartiteness condition is straightforward to verify, but its sufficiency remains wide open despite significant attention in recent years <cit.>. It is also tempting to make an analogous conjecture for r-uniform hypergraphs H, namely, that H is Sidorenko if and only if it is r-partite. Unfortunately, as already observed in <cit.>, this is false, with the 3-uniform loose triangle with vertex set {1, 2, …, 6} and edges {1, 2, 3}, {3, 4, 5}, {5, 6, 1} being a counterexample. However, as we will show, there is still something to be gained if the conjecture fails to hold, in that we can improve the lower bound for the extremal number of any r-uniform hypergraph H for which Sidorenko's conjecture is false. Given a natural number n and an r-graph H, the extremal number (n,H) is the maximum number of edges in an n-vertex H-free r-graph. It is known that for any fixed r-graph H, there exists a non-negative number π(H) such that (n,H) = (π(H) + o(1)) nr and that π(H) = 0 if and only if H is r-partite. With very few exceptions (see, for example, <cit.> for classical results and <cit.> and its references for more recent developments), the problem of estimating (n,H) more accurately in the degenerate case where H is r-partite is wide open. In general, the best known lower bound comes from a simple application of the probabilistic deletion method and says that for any fixed r-partite r-graph H there exists some constant γ > 0 such that (n,H) ≥γ n^r- v(H)-r/ e(H)-1. Our first result improves this estimate for non-Sidorenko r-graphs. For any non-Sidorenko r-graph H, there exist constants c, γ >0 such that (n,H) ≥γ n^r- v(H)-r/ e(H)-1+c. One reason this result is interesting is that, by a result of Ferber, McKinley and Samotij <cit.>, any polynomial gain over the deletion bound for the extremal number of an r-graph H implies an optimal counting result for the number of H-free r-graphs on n vertices. Thus, we have the following corollary of <ref>. For any non-Sidorenko r-graph H, there exists C > 0 and an infinite sequence of positive integers n such that |ℱ_n(H)| ≤ 2^C ·(n, H), where ℱ_n(H) is the set of all labelled H-free r-graphs with vertex set {1, 2, …, n}. We note in passing that results similar to <ref> and <ref> were obtained recently by Conlon, Pohoata and Zakharov <cit.> for H = K_2,2,…,2, the complete r-partite r-graph with two vertices in each part. However, since Sidorenko's conjecture does hold for these graphs through some standard applications of the Cauchy–Schwarz inequality, their proof proceeds along very different lines, making use of a multilinear variant of Bukh's random algebraic method <cit.>. Motivated by <ref> and its application <ref>, much of this paper is devoted to finding examples of r-partite r-graphs for which Sidorenko's conjecture is false. For instance, if we define the r-uniform loose triangle to be the r-graph with vertex set {1, 2, 3, …, 3r-3} and edges {1, 2, …, r}, {r, r+1, …, 2r-1}, {2r-1, …, 3r-3, 1}, then we have the following result. Note that here a linear r-graph is an r-graph where every pair of edges intersect in at most one vertex. Any linear r-graph that contains a loose triangle is not Sidorenko. By the celebrated (6,3)-theorem of Ruzsa and Szemerédi <cit.>, which states that dense linear r-graphs contain loose triangles, we have the following corollary. For any integer r ≥ 3 and any c > 0, there exists k_0 such that any linear r-graph with k ≥ k_0 vertices and at least c k^2 edges is not Sidorenko. 
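The smallest case of the theorem above can be checked by direct enumeration. The sketch below (illustrative, not from the paper) evaluates both sides of the Sidorenko inequality for the 3-uniform loose triangle under a ±1-valued weighting of the same form as the one used in the proof in Section 3; the value c = 1/3 keeps the weight non-negative.

```python
from itertools import product

# 3-uniform loose triangle on vertices 0..5
edges = [(0, 1, 2), (2, 3, 4), (4, 5, 0)]
c = 1.0 / 3.0  # ensures 1 - 3c >= 0, so f is non-negative

def f(x1, x2, x3):
    return 1.0 - c * (x1 * x2 + x1 * x3 + x2 * x3)

# t_{K_3}(f): average of f over {-1,+1}^3
t_edge = sum(f(*x) for x in product((-1, 1), repeat=3)) / 2 ** 3

# t_H(f): average of the product of edge weights over {-1,+1}^6
t_H = sum(f(x[0], x[1], x[2]) * f(x[2], x[3], x[4]) * f(x[4], x[5], x[0])
          for x in product((-1, 1), repeat=6)) / 2 ** 6

print(t_edge)            # 1.0
print(t_H, t_edge ** 3)  # 1 - c^3 ~ 0.963 < 1, so the Sidorenko inequality fails
```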
While the extremal number is known exactly for some sparse linear r-graphs such as loose paths and cycles <cit.>, these results, applied in conjunction with <ref>, give the first polynomial improvement on the lower bound for the extremal number of a broad range of linear r-graphs. In a somewhat different direction, we look at the tight cycles C_ℓ^(r) with vertex set {1, 2, …, ℓ} and edges {i, i+1, …, i+r-1} for all i = 1, 2, …, ℓ, where addition is taken mod ℓ. From an extremal viewpoint, these are some of the most closely studied hypergraphs (see, for example, <cit.>). We will show that, at least for certain choices of ℓ and r, they are again not Sidorenko. In the statement below, we also consider the r-graphs C_ℓ^(r) - e obtained by deleting a single edge e from C_ℓ^(r). C_k^(3) is not Sidorenko for any k ≥ 4, C_k^(3) - e is not Sidorenko for any k ≥ 7 and C_2r^(r) is not Sidorenko for any odd r ≥ 3. There are some recent results <cit.> that determine the Turán densities of C_k^(3) and C_k^(3)-e when k is sufficiently large and not divisible by 3. <ref> gives the first non-trivial improvement on the lower bounds for the extremal numbers of C_k^(3) and C_k^(3)-e when k is divisible by 3. We also give some examples of r-graphs with the stronger property that they are not common. By saying that an r-graph H is common, we mean that t_H(W) + t_H(1-W) ≥ 2^1- e(H) for every r-graphon W:[0,1]^r → [0,1]. In graph-theoretic terms, an r-graph H is common if the number of monochromatic copies of H in a two-colouring of the edges of K_n^(r) is asymptotically minimised by a quasirandom colouring. The study of such graphs is a central topic in Ramsey theory and we refer the interested reader to <cit.> for further context and additional references. For us, the important point is that if an r-graph is Sidorenko, it is automatically common, so non-common r-graphs are automatically not Sidorenko. As it involves some further notation, we will hold off on giving a full description of our main result in this direction until Section <ref> and instead give an illustrative example. For r odd, the grid r-graph G_r whose vertices are the points of the r × r grid and whose edges are the 2r horizontal and vertical lines of the grid is not common. Unlike our previous results, this does not allow us to give an improved bound for (n, G_r), since, by considering all of the edges containing a fixed vertex, we get the simple lower bound (n, G_r) ≥n-1r-1, which is considerably better than the deletion bound. However, the grid graphs are an interesting and well-studied family (see, for example, <cit.>), so we believe the fact that its odd members are not common is an interesting result in its own right. § LOWER BOUNDS FOR THE EXTREMAL NUMBER In this short section, we will use the tensor power trick to prove Theorem <ref>, the statement that the deletion bound may be improved for counterexamples to Sidorenko's conjecture. We will need the following standard result from the theory of graph limits (see, for example, <cit.>), obtained by sampling n vertices v_1, v_2, …, v_n independently and uniformly at random from [0,1] and placing an edge on each v_i_1, v_i_2, …, v_i_r with i_1 < i_2 < … < i_r independently with probability W(v_i_1, v_i_2, …, v_i_r). Let W be an r-graphon. Then there exists a sequence (G_n)_n=1^∞ of r-graphs such that |V(G_n)|=n and t_F(G_n) converges to t_F(W) for every fixed r-graph F. 
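The sampling construction described just above the lemma can be sketched as follows; this is a generic illustration, with the vertex count, the uniformity r=3, and the example graphon chosen arbitrarily rather than taken from the paper.

```python
from itertools import combinations
import numpy as np

def sample_r_graph(W, n, r, seed=0):
    """Sample an n-vertex r-graph from the r-graphon W: draw v_1,...,v_n
    uniformly from [0,1] and include each r-set {i_1 < ... < i_r} as an
    edge independently with probability W(v_{i_1},...,v_{i_r})."""
    rng = np.random.default_rng(seed)
    v = rng.random(n)
    return [idx for idx in combinations(range(n), r)
            if rng.random() < W(*(v[i] for i in idx))]

W = lambda a, b, c: (a + b + c) / 3.0   # a simple symmetric 3-graphon
G = sample_r_graph(W, n=12, r=3)
print(len(G), "edges sampled out of", len(list(combinations(range(12), 3))))
```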
The tensor product G⊗ H of two r-graphs G and H is the graph with vertex set V(G) × V(H) where ((x_1, y_1), (x_2, y_2), …, (x_r, y_r)) ∈ E(G⊗ H) if and only if (x_1, x_2, …, x_r) ∈ E(G) and (y_1, y_2, …, y_r) ∈ E(H). For N a positive integer, we may then define G^⊗ N inductively by G^⊗ 1 = G and G^⊗ N = G ⊗ G^⊗ N-1. A key property of these tensor powers, which we will need below, is that t_H(G^⊗ N) = t_H(G)^N for any graphs G and H and any positive integer N. Let W be an r-graphon for which t_H(W) < t_K_r(W)^ e(H). If (G_m)_m=1^∞ is a sequence of r-graphs with |V(G_m)| = m given by <Ref>, then, provided m is sufficiently large, t_H(G_m) < t_K_r(G_m)^ e(H). Let G be an r-graph from this sequence for which this is the case and let α_0:=t_K_r(G)=r! e(G)/ v(G)^r and β_0:=t_H(G), so that β_0 < α_0^ e(H). We will assume that G is taken sufficiently large that β_0/α_0 ≥ v(G)^r - v(H). Set n:= v(G)^N, α := α_0^N and β:=β_0^N. Then G^⊗ N is an n-vertex graph with α n^r/r! edges and at most β n^ v(H) labelled copies of H. Let c' := e(H)logα_0 - logβ_0/log v(G), so that c' > 0 and β = α^ e(H) n^-c'. Crucially, the number of copies of H in G^⊗ N is significantly smaller than the random count of roughly α^ e(H)n^ v(H), allowing us to apply the deletion method more efficiently. Indeed, if we take a random subgraph (G^⊗ N)_p of G^⊗ N where every edge appears independently with probability p, the expected number of edges X in this subgraph is p α n^r/r! and the expected number of copies Y of H is at most (p α)^ e(H) n^ v(H) -c'. Note that the condition β_0/α_0 ≥ v(G)^r - v(H) is equivalent to α_0^ e(H)-1≥ v(G)^r+c'- v(H), which in turn implies that α^ e(H)-1≥ n^r+c'- v(H). Therefore, there is some p < 1 such that (p α)^ e(H)-1 = n^r+c'- v(H)/(2 r!). But then 𝔼[Y] ≤ (p α)^ e(H) n^ v(H) -c' = p α n^r/2 r! = 1/2𝔼[X], so, by linearity of expectation, 𝔼[X-Y] ≥1/2𝔼[X] = p α n^r/2 r!. Therefore, there must exist a graph for which we can delete an edge from every copy of H and still leave at least p α n^r/2 r!≥γ n^r- v(H)-r/ e(H)-1+c edges, where γ>0 is an absolute constant and c = c'/( e(H)-1). This yields the required conclusion when n is a power of v(G), but, at the possible expense of replacing γ with a smaller number, we can easily interpolate between these values. § NON-SIDORENKO HYPERGRAPHS §.§ Linear hypergraphs Recall that an r-graph is linear if every pair of edges shares at most one vertex. The girth of a linear r-graph is the length of the shortest (loose) cycle in the graph. We shall prove the following statement that slightly generalises <Ref>. In the proof, we will also need to know that, for s≤ r, the (s-1)-skeleton of an r-graph H is the s-graph obtained by replacing each r-edge of H by a copy of K_r^s, the complete s-graph on r vertices, and simplifying multiedges. If H is a linear r-graph of odd girth, then H is not Sidorenko. Consider the weighted r-graph on {-1, 1} where the edge (x_1, x_2, …, x_r) ∈{± 1}^r receives the weight f(x_1,…,x_r) = 1-c∑_i<jx_ix_j. For c ≤ 1/k2, f is a non-negative symmetric function with t_K_r(f) = 1.[For convenience, we phrase our example in terms of weighted r-graphs f rather than r-graphons W, but it is easy to convert between the two settings. We also allow f to take values larger than 1, but, since the inequality (<ref>) is homogeneous, it is sufficient for f to be bounded and non-negative.] Observe that a monomial in the expansion of ∏_v_1… v_r ∈ E(H)f(x_v_1,…,x_v_r) has zero average whenever it contains a variable of odd degree. 
Thus, the non-vanishing terms in the expansion of t_H(f) correspond to `Berge' even subgraphs F of the 1-skeleton of H, those subgraphs F where every vertex has even degree and every (2-)edge e∈ E(F) extends to a unique r-edge in H. Moreover, every such F receives the weight (-c)^ e(F). Therefore, if g is the girth of H and g is odd, t_H(f) = 1 -Kc^g + O(c^g+1), where K denotes the number of shortest loose cycles. By choosing c>0 small enough, we see t_H(f)<1 = t_K_r(f)^ e(H), so H is not Sidorenko. It is also possible to generalise <ref>, although we did not find any concrete applications of this more general result. Indeed, by replacing f with f(x_1,…,x_r) = 1-c∑_i_1< … < i_s x_i_1⋯ x_i_s for any s ≤ r, one can show that if the smallest subgraph F of the (s-1)-skeleton of H where every vertex has even degree and every s-edge e ∈ E(F) extends to a unique r-edge in H has an odd number of edges, then H is not Sidorenko. Since, in an s-uniform hypergraph F, ∑_v ∈ V(F) d(v) = s · e(F), such a subgraph F can only exist when s is even. §.§ Tight cycles Recall that C_ℓ^(r) denotes an r-uniform tight cycle of length ℓ. Since C_ℓ^(r) and C_ℓ^(r) - e can only be r-partite when ℓ is a multiple of r, in order to prove Theorem <ref>, it will suffice to study tight cycles of the form C_kr^(r). Given an r-graph H, let κ_m(H) denote the number of subgraphs of H with m edges and no degree-one vertices. As captured by the following proposition, the polynomial P_H(x) := ∑_i=1^ e(H)κ_i(H) x^i will play an important role in the proof of <Ref>. Let r be odd and H be a subgraph of C_kr^(r). If H is Sidorenko, then P_H(x) ≥ 0 for all x ∈ [-1,0]. Suppose that P_H takes a negative value on [-1,0]. Then there exists c ∈ (0,1) such that P_H(-c) < 0. For ε∈(0,1), let f_ε be the function on [0,1] defined by f_ε(x) = ε     if x≤1/1+ε -1    otherwise. Then ∫_0^1 f_ε dμ=0 and, for any fixed integer d>1 and ε sufficiently small, ∫_0^1 (f_ε)^d dμ = (-1)^d ε + O(ε^2). Let g_ε(x_1,…,x_r) := ∏_i=1^r f_ε(x_i), so that t_G(g_ε)=0 whenever G has a vertex of degree one. Moreover, for every n-vertex r-graph G with degree sequence d_1,…,d_n≥ 2, t_G(g_ε) = (-1)^∑_i=1^nd_iε^ v(G) +O(ε^ v(G)+1) = (-1)^ e(G)ε^ v(G) +O(ε^ v(G)+1), since r· e(G)=∑_i=1^n d_i and e(G) have the same parity. Let h_ε,c := 1 + cg_ε, noting that this function is non-negative. Then t_H(h_ε,c) = 1 + ∑_G ⊆ H c^ e(G) t_G(g_ε), where the sum is taken over all non-empty edge subsets of H, which can be seen as subgraphs G of H. In any subgraph of the tight cycle C_kr^(r), degrees of consecutive vertices of the cycle differ by at most one. Thus, in a non-empty subgraph G with no vertices of degree one, no isolated vertices exist and, hence, all but those G with minimum degree at least two vanish in (<ref>). Therefore, κ_m(H) counts the number of m-edge subgraphs of H on kr vertices with minimum degree at least two. It then follows that t_H(h_ε,c) = 1 + ∑_G ⊆ H, δ(G)≥ 2 c^ e(G) t_G(g_ε) =1 + ε^kr∑_G ⊆ H, δ(G)≥ 2 (-c)^ e(G) + O(ε^kr+1) = 1+ε^krP_H(-c) +O(ε^kr+1). Therefore, for sufficiently small ε>0, t_H(h_ε,c) < 1. But ∫ h_ε,c dμ^r = 1, so this contradicts our assumption that H is Sidorenko. We will now use <ref> to prove the following three results, which together make up <ref>. C_3k^(3) is not Sidorenko for k ≥ 2. C_3k^(3)-e is not Sidorenko for k ≥ 3. C_2r^(r) is not Sidorenko for any odd r ≥ 3. For the proofs, we will need to better understand the functions κ_m(H) for the r-graphs H under consideration. 
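As a check on the counting formulas derived below, κ_m can be computed by brute force for small cases. The following sketch (illustrative helper names, feasible only for small tight cycles) enumerates the edge subsets of C_6^(3) with no degree-one vertices and evaluates P_H at a point of [-1,0].

```python
from itertools import combinations
from collections import Counter

def tight_cycle_edges(length, r):
    """Edges of the r-uniform tight cycle on vertex set Z_length."""
    return [tuple((i + j) % length for j in range(r)) for i in range(length)]

def kappa(edges, m):
    """Number of m-edge subgraphs with no vertex of degree one."""
    count = 0
    for sub in combinations(edges, m):
        deg = Counter(v for e in sub for v in e)
        if all(d != 1 for d in deg.values()):
            count += 1
    return count

edges = tight_cycle_edges(6, 3)                      # C_6^(3)
kappas = {m: kappa(edges, m) for m in range(1, len(edges) + 1)}
print(kappas)                                        # expect kappa_4 = 3, kappa_5 = 6, kappa_6 = 1

x = -2.0 / 3.0
P = sum(k * x ** m for m, k in kappas.items())
print(P)                                             # negative, matching P(x) = x^4 (3 + 6x + x^2)
```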
κ_i(C_3k^(3))=0 for i<2k and κ_2k+i(C_3k^(3)) = 3k/k+2ik+2i3i for 0 ≤ i ≤ k. A subgraph G of C_3k^(3) with i edges such that each vertex has degree 2 or 3 must be obtained from C_3k^(3) by removing 3k-i disjoint edges. But the number of disjoint edges cannot exceed k, and the number of ways to select 1 ≤ j ≤ k independent edges in C_3k^(3) is 3k/j3k-1-2jj-1. Thus, κ_i(C_3k^(3))=0 for i<2k and κ_2k+i(C_3k^(3)) = 3k/k-i3k-1-2(k-i)k-i-1 = 3k/k-ik+2i-1k-i-1 = 3k/k+2ik+2ik-i = 3k/k+2ik+2i3i for 0 ≤ i ≤ k-1. Moreover, κ_3k(C_3k^(3))=1. κ_i(C_3k^(3)-e)=0 for i<2k and κ_2k+i(C_3k^(3)-e) = k+2i-13i for 0 ≤ i ≤ k-1. The statement follows from <ref> and the fact that κ_i(C_kr^(r)-e) = kr-i/krκ_i(C_kr^(r)). We also need to verify some elementary inequalities. For integers k ≥ 2 and i≥ 1, (k+2i+1)(k+2i)(k-i)/(3i+3)(3i+2)(3i+1)≤k^3+k^2/60. It is easy to check that each of the ratios k+2i+1/3i+3, k+2i/3i+2, k-i/3i+1 decreases with i. Hence, (k+2i+1)(k+2i)(k-i)/(3i+3)(3i+2)(3i+1)≤(k+3)(k+2)(k-1)/120 = k^3 + 4k^2 + k - 6/120. We therefore need to show that k^3 + 4k^2 + k - 6 ≤ 2(k^3+k^2), which is equivalent to F(k) := k^3 - 2k^2 - k + 6 ≥ 0. But this follows since F(2)=4 and F'(k) = 3 k^2 - 4k - 1 = 3k(k-2) + 2k - 1 > 0 for k ≥ 2. For integers k ≥ 3 and i≥ 1, (k+2i+1)(k+2i)(k-i-1)/(3i+3)(3i+2)(3i+1)≤7/600 (k^3-k). It is easy to check that each of the ratios k+2i+1/3i+3, k+2i/3i+2, k-i-1/3i+1 decreases with i. Hence, (k+2i+1)(k+2i)(k-i-1)/(3i+3)(3i+2)(3i+1)≤(k+3)(k+2)(k-2)/120 = k^3 + 3k^2 - 4k - 12/120. We therefore need to show that k^3 + 3k^2 - 4k - 12 ≤7/5 (k^3-k), which is equivalent to F(k) := 2k^3 - 15k^2 + 13k + 60 ≥ 0. But this follows since F(3)=18, F(4)=F(5)=0 and F'(k) = 6k^2 - 30k + 13 = 6k(k-5) + 13 > 0 for k ≥ 5. We are already in a position to prove Theorems <ref> and <ref>. By <ref>, it will be sufficient to find some x ∈ [-1,0] such that P_C_3k^(3)(x) < 0. The coefficients of P_C_3k^(3) are given by <ref>. It is easy to check that P_C_6^(3)(x) = x^4 (3+6x+x^2) is negative at x=-2/3. Thus, we may assume that k ≥ 3. For a fixed k and 1 ≤ i ≤ k, set A_i := 3k/k+2ik+2i3i (30/k^3+k^2)^i . By <ref>, for 1 ≤ i ≤ k-1, A_i+1/A_i = (k+2i+1)(k+2i)(k-i)/(3i+3)(3i+2)(3i+1)·30/k^3+k^2 ≤ 1/2. Set x=30/k^3+k^2. As A_2j≤1/2 A_2j-1≤ A_2j-1, we get x^-2k P_C_3k^(3)(-x) = 3 + ∑_i=1^k (-1)^i A_i ≤ 3 - A_1 + A_2 ≤ 3 - 1/2 A_1 = 3 - 1/2 3k/k+2 k+23 30/k^3+k^2 = 3 - 15/2 < 0, as required. By <ref>, it will be sufficient to find some x ∈ [-1,0] such that P_C_3k^(3)-e(x) < 0. The coefficients of P_C_3k^(3)-e are given by <ref>. It is easy to check that P_C_9^(3)-e(x) = x^6 (1+4x+x^2) is negative at x=-2/3. Thus, we may assume k ≥ 4. For a fixed k and 1 ≤ i ≤ k-1, set B_i := k+2i-13i (300/7(k^3-k))^i. By <ref>, for 1 ≤ i ≤ k-2, B_i+1/B_i = (k+2i+1)(k+2i)(k-i-1)/(3i+3)(3i+2)(3i+1)·300/7(k^3-k) ≤ 1/2. Set x = 300/7(k^3-k). As B_2j≤1/2 B_2j-1≤ B_2j-1, we get x^-2k P_C_3k^(3)-e(-x) = 1 + ∑_i=1^k-1 (-1)^i B_i ≤ 1 - B_1 + B_2 ≤ 1 - 1/2 B_1 = 1 - 1/2k+13 300/7(k^3-k) = 1 - 25/7 < 0, as required. For the proof of <ref>, we need to do a little more work. Consider an m-element subset A⊆_2r and assume that its elements are cyclically ordered as A=(x_0,x_1,…,x_m=x_0). We say that x_i is good if x_i+1-x_i-1∈{2,…,r} and bad otherwise. The number of m-element subsets A⊆_2r that have at least one bad element is 2r(m-2)rm-1 for m ≥ 4. Suppose A=(x_0, x_1, …, x_m=x_0) is such a subset. Notice that if x_i and x_j are two distinct bad points, then i-j = ± 1. 
Hence, there is either just one bad point or there are two consecutive bad points. Thus, there exists a unique index j such that x_j is good and x_j-1 is bad. Without loss of generality, we may assume that j=1. x_1 can have any of the 2r possible values. We will assume that x_1=0 and show that there are then exactly (m-2) rm-1 choices for x_0,x_2,…,x_m-1. As x_1 is good, x_2-x_0 ∈{2,…,r}. As x_0 is bad, x_1-x_m-1∈{r+1,…,2r-1}, so x_m-1∈{1,…,r-1}. If x_2=i, there are r-i choices for x_0 and (r-1)-im-3 choices for x_3,…,x_m-1. Thus, the total number of choices is ∑_i=1^r-1 (r-i) (r-1)-im-3 = ∑_i=1^r-1 (m-2) r-im-2 = (m-2) rm-1, as required. Note that a subgraph H of the tight cycle C_2r^(r) has no degree-one vertices if and only if the set A of initial vertices of edges in H contains no bad elements. Thus, we have the following immediate corollary of <ref>. κ_i(C_2r^(r)) = 0 if i ≤ 3, 2ri - 2r(i-2)ri-1 if 4 ≤ i ≤ 2r. By <ref>, since 2r(m-2)rm-1 = 0 for r+2 ≤ m ≤ 2r, P_C_2r^(r)(x) = ∑_i=4^2r2ri x^i - ∑_i=4^r+1 2r(i-2)ri-1 x^i = ∑_i=0^2r2ri x^i - (1 + 2rx + r(2r-1) x^2 + 2/3 r(r-1)(2r-1) x^3) - 2r ∑_i=1^r+1 (i-2)ri-1 x^i + 2r (-x + 1/2 r(r-1) x^3) = (1+x)^2r + 2r ∑_i=1^r+1ri-1 x^i - 2r ∑_i=1^r+1 (i-1)ri-1 x^i - (1 + 4rx + r(2r-1) x^2 + 1/3 r(r-1)(r-2) x^3) = (1+x)^2r + 2rx(1+x)^r - 2r^2 x^2 (1+x)^r-1 - (1 + 4rx + r(2r-1) x^2 + 1/3 r(r-1)(r-2) x^3). If r ≥ 16, set x = -1/r. Then (1+x)^2r < e^-2 and (1+x)^r + (1+x)^r-1≥ 2e^-1, so that P_C_2r^(r)(x) < e^-2 - 4e^-1 + 4/3 + 2/3r^2 < -0.002849 + 2/3r^2 < 0. If r ≤ 16, set x = -2/r. Then (1+x)^2r < e^-4 and 4(1+x)^r + 8(1+x)^r-1 = 4((1+x)^r + (1+x)^r-1) + 4(1+x)^r-1 > 8e^-2 + 4e^-2 = 12e^-2, so that P_C_2r^(r)(x) < e^-4 - 12e^-2 + 5/3 - 4/r + 16/3r^2 < 0.060959 - 4/r + 16/3r^2 < 0. Therefore, by <ref>, C_2r^(r) is not Sidorenko for any odd r ≥ 3. To conclude this section, we note that we have also used <ref> to show that C_kr^(r) is not Sidorenko for all values of k and r with r ≥ 5 odd and kr ≤ 30. This suggests, and we conjecture, that C_kr^(r) is not Sidorenko for any odd r≥ 5 and k ≥ 2. § NON-COMMON HYPERGRAPHS Recall that an r-graph H is common if t_H(W) + t_H(1-W) ≥ 2^1- e(H) for any r-graphon W:[0,1]^r → [0,1] and that every Sidorenko hypergraph is automatically common. By substituting W=1+f/2, we can rewrite the requirement for H to be common as t_H(1+f)+t_H(1-f) ≥ 2 for any r-variable symmetric measurable function f:[0,1]^r → [-1,1]. By expanding out, this inequality is equivalent to ∑_G ⊆ H, e(G) ≡ 0 mod 2, e(G)>0 t_G(f) ≥ 0. If <ref> fails for some function f, then H is not common and, hence, is not Sidorenko. To state the main result of this section, we need some definitions. Following Camarena et al. <cit.>, we say that an r-graph H is positive if t_H(W) ≥ 0 for any r-variable symmetric function W:[0,1]^r → [-1,1]. When r ≥ 3, we say that an r-graph is 2-connected if the removal of a single vertex or a single edge does not disconnect it, while, for r = 2, we just mean the usual notion, that a graph is 2-connected if it is not disconnected by the removal of a single vertex. Let r be odd or r=2. If an r-graph H has a non-positive 2-connected subgraph with 2m edges and every other subgraph with an even number of edges not exceeding 2m is either non-positive and 2-connected or has a vertex of degree 1, then H is non-common. When r ≥ 3, examples coming from <ref> are quite plentiful. To see this, we will make use of the following proposition from <cit.>. 
Here the Levi graph of an r-graph H is the bipartite graph L(H) with vertex set V(H) ∪ E(H) where v ∈ V(H) and e ∈ E(H) are adjacent if and only if v ∈ e in H. Note that for r ≥ 3 the r-graph H is 2-connected if and only if its Levi graph L(H) is 2-connected in the usual sense. If an r-graph H is positive, then its Levi graph L(H) is positive. When r is odd, L(H) is positive if and only if H is positive. Consider the half-octahedron G, the 3-graph with vertices 1,2,3,4,5,6 and edges {1,3,5}, {1,4,6}, {2,3,6}, {2,4,5}. Its Levi graph L(G) has 10 vertices and 12 edges. With a single exception, all positive graphs with at most 10 vertices are classified in <cit.> and L(G) is not one of them. Hence, by <ref>, G is non-positive. Therefore, if all 4-edge subgraphs of a 3-graph H are either isomorphic to G or have a vertex of degree 1, then, by <ref>, H is non-common and non-Sidorenko. Recall that G_r is the grid r-graph whose vertices are the points of the r × r grid and whose edges are the 2r horizontal and vertical lines of the grid. It was shown in <cit.> that G_r is not positive for odd r. Since any proper subgraph of G_r has a vertex of degree 1, <ref> implies that G_r is non-common for odd r. Moreover, if we add more edges to G_r without creating new subgraphs whose minimum vertex degree is at least 2, then the resulting r-graph will remain non-common. The next statement was proved in <cit.> for r=2, but the proof can be repeated verbatim for an arbitrary r. An r-graph G is positive if and only if every connected r-graph that occurs among the connected components of G an odd number of times is positive. We also note the following result. In the proof, we often consider the tensor product f⊗ g of r-variable symmetric functions f and g, defined by (f⊗ g)((x_1,y_1),…,(x_r,y_r)) = f(x_1,…,x_r) g(y_1,…,y_r), where we identify each ((x_1,y_1),…,(x_r,y_r)) with a point in [0,1]^r through a measure-preserving bijection from [0,1]^2r to [0,1]^r. If the r-graphs G_1,…,G_k are not positive, then there exists a function f such that t_G_i(f) < 0 for all i=1,…,k. We use induction on k. The base case k=1 is trivial, so we consider the induction step going from k-1 to k ≥ 2. For each j=1,…,k, by the induction hypothesis, there exists a function f_j such that t_G_i(f_j) < 0 for all i ≠ j. If necessary, we perturb f_j by a little bit to ensure that t_G_j(f_j) ≠ 0 while preserving t_G_i(f_j) < 0 for all i ≠ j. If t_G_j(f_j) < 0, then f_j is the function we need. Thus, we may assume that t_G_j(f_j) > 0. If k is even, f=f_1 ⊗⋯⊗ f_k satisfies t_G_i(f) < 0 for all i=1,…,k. Suppose then that k is odd. Let G be the union of disjoint copies of G_1,…,G_k. By <ref>, G is not positive, so there exists a function f_0 such that t_G(f_0) < 0. Notice that t_G(f_0) = ∏_i=1^k t_G_i(f_0). We may assume that t_G_i(f_0) > 0 for 1 ≤ i ≤ m and t_G_i(f_0) < 0 for m+1 ≤ i ≤ k, where m is even. If m=0, then f_0 is the function we need. If m>0, then f=f_0⊗ f_1 ⊗⋯⊗ f_m satisfies t_G_i(f) < 0 for all i=1,…,k, so we again have the required function. The following result and its corollaries are the key to proving <ref>. Note that we call an r-variate symmetric measurable function f zero-averaging if ∫ f(x_1,…,x_r-1,x_r) dx_r = 0 for any x_1,…,x_r-1. If H is a non-positive 2-connected graph with an even number of edges, then there exists a zero-averaging function f:[0,1]^2→ [-1,1] such that t_H(f)<0. 
Our plan is to construct a {± 1}-weighted graph Γ such that every vertex has a vanishing sum over the weights of its incident edges and t_H(Γ)<0. Let U:[0,1]^2→ [-1,1] be a measurable symmetric function, i.e., a signed graphon, that satisfies t_H(U)<0. By using the standard decomposition U=U^+ - U^- and applying <Ref> to find appropriate graphs approximating the graphons U^+ and U^-, one may obtain a {± 1}-weighted graph G such that t_H(G)<0. Let d:= v(G) and s:= v(H) for brevity and write w_H(G) = d^s t_H(G). We may then assume that w_H(G)<-d^s-1/2 by replacing G with a blow-up by a sufficiently large factor. Note that, as H has an even number of edges, w_H(-G)=w_H(G), where -G denotes the graph with edge weights of opposite sign from G. For any sufficiently large even n, there is a d-regular n-vertex bipartite graph F with girth greater than s (see, for example, <cit.>). Partition the edges of F into d perfect matchings, colouring the edges of each matching with one of the d colours from [d]. Now consider the line d-graph of F whose vertices are the edges of F and whose d-edges are the collections of d edges in F incident to each vertex of F. A {± 1}-weighted d-graph ℱ is defined by assigning +1 or -1 to each edge in this line d-graph, depending on which side of the bipartition the corresponding vertex of F lies. The required zero-averaging weighted graph Γ is then obtained by replacing each d-edge in ℱ by a copy of G, where we map each vertex of V(G)=[d] to the corresponding coloured vertex (with the colouring inherited from the matchings) and multiply the weight on each edge of G by the {± 1}-weight on the corresponding edge of ℱ. We claim that t_H(Γ)<0. To prove this, we say that a (weighted) homomorphism from H to Γ is good if the homomorphic image of H lies in one d-edge of ℱ. The weighted sum of good homomorphisms is a negative number less than n· w_H(G)< -n d^s-1/2, where we used the fact that w_H(G)=w_H(-G). On the other hand, there may be some homomorphic images of H that are not entirely covered by a single d-edge, which we call bad. As the girth of F is larger than the number of vertices in H, the unique minimal collection of d-edges whose union contains a fixed bad image of H must form a d-hypertree in ℱ, where uniqueness follows from the fact that ℱ is a linear hypergraph. As ℱ is linear, deleting a vertex v lying in the intersection of two d-edges in a d-hypertree disconnects the subgraph of Γ induced on the vertex set of the d-hypertree. In particular, if a bad image of H contains v, then the image of H is disconnected once v is deleted. As H is a 2-connected graph, the bad image of H must therefore be degenerate, i.e., there are at least two vertices of H that are mapped to v. Suppose that there are t+1 edges in a d-hypertree 𝒯⊆ℱ for some t≥ 1. Then there are (t+1)d-t vertices in 𝒯, exactly t of which have degree two (here we used that each vertex in ℱ has degree exactly two). If 𝒯 is a minimal cover of a bad homomorphic image of H, then there are t disjoint pairs of vertices in H, each of which maps to a unique one of the t vertices of degree two. Thus, there are at most s2^t((t+1)d-t)^s-2t bad homomorphic copies of H whose minimal cover is 𝒯. 
Given a d-hypertree 𝒯 with t+1 edges, one can recover the tree in F corresponding to the union of the d-edges of 𝒯 in three steps: replace each d-edge by the corresponding vertex in F; connect those vertices that correspond to intersecting d-edges; and turn each degree-one vertex in a d-edge e of 𝒯 into a leaf adjacent to the unique vertex of F which corresponds to e. Note that the leaves added in the last step are determined once the first two steps give a (t+1)-vertex tree in F. Thus, there are at most nd^t isomorphic images of 𝒯 in ℱ. Therefore, there are at most s^2t(t+1)^s-2tnd^s-t bad homomorphic images of H whose minimal cover is isomorphic to 𝒯. Hence, as 1≤ t<s and the number of distinct d-hypertrees with t+1 edges and maximum degree two is bounded as a function of t, the number of bad homomorphic images of H is at most Cnd^s-1 for a constant C=C(s). This is asymptotically smaller than nd^s-1/2, the magnitude of the upper bound for the weighted sum of good homomorphisms, provided d is sufficiently large, so that t_H(Γ) is negative, as required. Let r ≥ 3 be odd. If H is a non-positive 2-connected r-graph with an even number of edges, then there exists an r-variate zero-averaging function h such that t_H(h) < 0. By <ref>, the Levi graph L(H) of H is a non-positive 2-connected graph with an even number of edges. Therefore, by <ref>, there exists a 2-variate zero-averaging function f such that t_L(H)(f) < 0. Consider the r-variate symmetric measurable function h given by h(x_1,…,x_r) = ∫∏_i=1^r f(x_i,y) dy. It is easy to see that h is zero-averaging and t_H(h) = t_L(H)(f) < 0. If G_1,…,G_k are non-positive 2-connected graphs each with an even number of edges, then there exists a zero-averaging function f such that t_G_i(f) < 0 for all i=1,…,k. Using <ref>, we proceed exactly as in the proof of <ref>. Let r ≥ 3 be odd. If G_1,…,G_k are non-positive 2-connected r-graphs each with an even number of edges, then there exists a zero-averaging function f such that t_G_i(f) < 0 for all i=1,…,k. Using <ref>, we proceed exactly as in the proof of <ref>. We are now in a position to prove <ref>. Let us assume that m in the statement of the theorem is as small as possible, that is, there are non-positive 2-connected subgraphs G_1,…,G_k with 2m edges and every other subgraph with an even number of edges not exceeding 2m has a vertex of degree 1. By <ref> (if r=2) or <ref> (if r is odd), there exists a function f such that S := t_G_1(f) + ⋯ + t_G_k(f) < 0 and t_G(f)=0 for any r-graph G that has a vertex of degree 1. Hence, for ε > 0 sufficiently small, ∑_G ⊆ H, e(G) ≡ 0 mod 2, e(G)>0 t_G(ε f) = ε^2m S + O(ε^2m+1) < 0, so H is not common. § CONCLUDING REMARKS We say that an r-graph H is locally Sidorenko if there exists ε > 0 such that t_H(W) ≥ t_K_r(W)^e(H) for all r-graphons W with W-1/2_□≤ε. That is, H is locally Sidorenko if the required inequality holds for all r-graphons which are sufficiently close to the uniform graphon 1/2, where closeness is measured in terms of the cut norm (see, for example, <cit.>). Since it is probably difficult to give a complete characterisation of those r-graphs which are Sidorenko, we instead conclude by asking for a characterisation of locally Sidorenko r-graphs. Which r-graphs are locally Sidorenko? It was shown by Lovász <cit.> that every bipartite graph is locally Sidorenko and later, by Fox and Wei <cit.>, that a graph is locally Sidorenko if and only if it is either a forest or has even girth.
The results of Section <ref> are all proved by showing that the relevant r-graphs are not locally Sidorenko and may help give some hints as to what a full characterisation should look like. However, at present, we have no concrete conjectures, even in the r-partite case most relevant to us. Indeed, despite Theorem <ref>, it is already open to determine which tight cycles are locally Sidorenko for r ≥ 4.
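To experiment numerically with the local Sidorenko property for small graphs, one can estimate homomorphism densities of a perturbed graphon by Monte Carlo sampling. The sketch below is only illustrative: the perturbation, the test graphs and the sample size are our own arbitrary choices, and a negative gap observed for some admissible W would be numerical evidence rather than a proof.

```python
import numpy as np

rng = np.random.default_rng(0)

def t_density(edges, W, n_samples=200_000):
    """Monte Carlo estimate of the homomorphism density t_H(W) of a graph H
    (given by its edge list on vertices 0..v-1) in a symmetric kernel W."""
    v = max(max(e) for e in edges) + 1
    x = rng.random((n_samples, v))              # uniform points in [0,1]^v
    prod = np.ones(n_samples)
    for i, j in edges:
        prod *= W(x[:, i], x[:, j])
    return prod.mean()

eps = 0.2
# A symmetric perturbation of the constant graphon 1/2; values stay in [0,1].
W = lambda x, y: 0.5 + eps * np.cos(2 * np.pi * (x + y))

C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]           # even girth: locally Sidorenko
C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # odd girth: not locally Sidorenko

for name, H in [("C4", C4), ("C5", C5)]:
    lhs = t_density(H, W)
    rhs = t_density([(0, 1)], W) ** len(H)      # t_{K_2}(W)^{e(H)}
    print(f"{name}: t_H(W) = {lhs:.5f},  t_K2(W)^e(H) = {rhs:.5f}")
```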
http://arxiv.org/abs/2307.04396v1
20230710075746
Diffusion and fluctuations of open charmed hadrons in an interacting hadronic medium
[ "Kangkan Goswami", "Kshitish Kumar Pradhan", "Dushmanta Sahu", "Raghunath Sahoo" ]
hep-ph
[ "hep-ph", "hep-ex", "nucl-ex", "nucl-th" ]
http://arxiv.org/abs/2307.06288v1
20230712163546
Local Limit Theorems for Energy Fluxes of Infinite Divisible Random Fields
[ "José Ulises Márquez-Urbina", "Orimar Sauri" ]
math.PR
[ "math.PR", "60G60, 60F99 (primary) 60E07, 60G57, 60K40, 60D99, 60H05 (secondary)" ]
Local Limit Theorems for Energy Fluxes of Infinite Divisible Random Fields
José Ulises Márquez-Urbina (Centro de Investigación en Matemáticas, Unidad Monterrey, Apodaca, Mexico; Consejo Nacional de Ciencia y Tecnología, CDMX, Mexico) and Orimar Sauri (Department of Mathematical Sciences, Aalborg University, Aalborg Ø., Denmark)
We study the local asymptotic behavior of divergence-like functionals of a family of d-dimensional Infinitely Divisible Random Fields. Specifically, we derive limit theorems of surface integrals over Lipschitz manifolds for this class of fields when the region of integration shrinks to a single point. We show that in most cases, convergence stably in distribution holds after a proper normalization. Furthermore, the limit random fields can be described in terms of stochastic integrals with respect to a Lévy basis. We additionally discuss how our results can be used to measure the kinetic energy of a possibly turbulent flow.
Keywords: Energy Flux; Infinitely Divisible Random Fields; Limit Theorems for Random Fields; Stokes Theorem; Surface Measure; Tangent Fields
§ INTRODUCTION §.§ Overview Kinetic energy is the energy associated with a body due to its motion. Formally, the kinetic energy of a body is defined as the work needed to accelerate it from rest to its stated velocity. In this work, we are interested in the local behaviour of the kinetic energy of a turbulent flow. In physics, turbulence refers to the chaotic and unpredictable motion found in some fluids, which is typically characterised by abrupt changes in pressure and flow velocity. In a turbulent flow, the kinetic energy flux measures the amount of energy being injected or extracted from the fluid enclosed in a region. Thus, it is a proxy of energy dissipation in a turbulent flow. Understanding turbulence is considered one of the last open problems of classical physics. As part of fluid dynamics, turbulence can be studied via the Navier-Stokes equations. However, this approach has proven to be very challenging; therefore, numerous efforts have been made to develop phenomenological models that reproduce some of the key stylized features of turbulent fluids. Such models aim to produce tools that can be employed in practical situations or allow the understanding of some turbulence elements. Ambit processes stand out among these phenomenological models due to their flexibility and theoretical properties. These stochastic processes were introduced as models for turbulent velocity flows in <cit.>; they provide a robust framework to describe spatio-temporal phenomena and have been applied in different contexts like finance (<cit.>), tumor growth (<cit.>), and turbulence (<cit.>). In broad terms, ambit processes are a general class of spatio-temporal stochastic processes defined as stochastic integrals with respect to an independently scattered and infinitely divisible random measure. We refer the reader to <cit.> for more details on ambit stochastics. This article studies local limits of general energy fluxes over smooth manifolds for a subclass of vector-valued ambit fields. Besides the purely mathematical interest, studying (kinetic) energy fluxes of random fields could shed some light on the conditions the chosen model requires to fulfill in order to reproduce key features present in turbulent flows. §.§ Related work From the perspective of modeling turbulence, there is some literature related to the present work.
<cit.> introduced the class of ambit processes and proposed employing them to model the energy dissipation of a turbulent flow. <cit.> discussed for the first time the use of ambit processes to model a turbulent velocity field; in that article, the authors also discussed some relevant questions required to aim for a complete theory of ambit processes for turbulence. <cit.> proposed specific ambit random fields capable of reproducing any covariance structure. In particular, it was shown that in the isotropic and incompressible case, the kernel is expressible in terms of the energy spectrum; the models developed are applied to atmospheric boundary layer turbulence. <cit.> discusses the use of ambit random fields for the description of 2-dimensional turbulence. In that work, the author discusses the construction of 2-dimensional homogeneous and isotropic ambit fields that are divergence-free but not invariant under the parity operation. On the other hand, to the best of our knowledge, the questions addressed in the present work have only been previously considered in two manuscripts. The first one is the work in <cit.> discussed before. In this set-up, one can describe energy fluxes via classical vector calculus. In a more broad framework, <cit.> studies the flux and circulation of a 2-dimensional subclass of ambit random fields, determining local limits for those functionals under proper normalization. Namely, it is shown that in most cases they converge stably in distribution towards stationary random fields given in terms of line integrals of a Levy basis over the boundary of the original underlying ambit set. Other mathematical works that have considered similar functionals to the one studied in this article can be found in the theory of statistical mechanics and microstructures in continuum mechanics (e.g., <cit.>); in that area, for example, the macroscopic excess free energy is defined as a surface integral. Although some limiting behavior is addressed in this theory, they do not study the limits of functionals of random fields defined by integrals with respect to the Haussdorff measure as we do in the present work. §.§ Main contributions of this article We study the asymptotic behavior in divergence-like limits for fluxes of infinitely divisible random fields of the form X(p)=∫_A+p F(p,q) L(dq) p∈ℝ^d, where F is continuous and A compact. More precisely, we determine conditions for the convergence, as r↓ 0, of normalised functionals of the form ℰ_r=∫_S_rϕ (X(y))· u(y)ℋ^d-1(dy), where S_r=rM+p_0 is the boundary of a region V_r=r𝔇+p_0, ϕ is a function with polynomial growth, u_S_r is the outward unit normal to S_r, and ℋ^d-1 denotes the (d-1)-Hausdorff measure in ℝ^d. It turns out that the rate of convergence of ℰ_r strongly depends on whether L is of finite variation or not. In the latter situation, our central assumption is that the law of the “small jumps” of L belongs to the domain of attraction of an α-stable distribution. In the finite variation case, we further show that the kinetic energy flux converges in probability under the classical normalization | S_r|. In both situations, the limit processes can be expressed in terms of stochastic integrals with respect to a Lévy basis over regions uniquely determined by the geometry of A. Finally, by considering (ℰ_tr)_t≥0 as a sequence of continuous-time stochastic process, we show that the limiting process of such sequence is not only self-similar but also absolutely continuous, regardless of whether L is of finite variation or not. 
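As a quick numerical illustration of the normalisation behind these functionals, the following sketch approximates the surface integral ℰ_r for a deterministic smooth toy field in d=3, with S_r a sphere, and compares ℰ_r/|V_r| with the divergence of ϕ(X(·)) at p_0. The choice of field, test function and sample size is ours and is only meant to make the divergence-type limit concrete; the results of the paper replace this toy field by the ambit field X.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy deterministic stand-ins (ours): X(p) = p and phi(x) = |x|^2 x, so that
# div(phi(X(.)))(p) = 5|p|^2 is available in closed form for comparison.
X = lambda p: p
phi = lambda x: (x * x).sum(axis=-1, keepdims=True) * x

p0 = np.array([0.3, -0.2, 0.5])
div_exact = 5.0 * (p0 * p0).sum()

def flux_over_sphere(r, n_samples=400_000):
    """Monte Carlo surface integral of phi(X(y)) . u(y) over the sphere of
    radius r centred at p0 (d = 3), i.e. an approximation of E_r."""
    u = rng.normal(size=(n_samples, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # uniform unit normals
    y = p0 + r * u
    integrand = (phi(X(y)) * u).sum(axis=1)
    return integrand.mean() * 4.0 * np.pi * r**2    # times surface area

# As r shrinks, flux/volume approaches the exact divergence at p0 (the Monte
# Carlo error grows relative to the signal for very small r).
for r in (0.5, 0.2, 0.05):
    vol = 4.0 / 3.0 * np.pi * r**3
    print(r, flux_over_sphere(r) / vol, div_exact)
```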
The organization of the paper is as follows. In Section 1, we introduce the basic probabilistic and geometrical concepts and results that will be used in our work. Section 3 describes our main results regarding the asymptotic behaviour of energy fluxes and related functionals. Due to the technical nature of our proofs, most of them will be presented in Section 4. § PRELIMINARIES This part is devoted to introducing the basic notations as well as to recall several basic results and concepts that will be used through this paper. §.§ Stable convergence and Lévy bases In this work, the inner product and the norm of vectors x,y∈ℝ^d will be represented by x· y and x, respectively. Throughout the following sections (Ω,ℱ,ℙ) will denote a complete probability space. For a sequence of random vectors (ξ_n)_n≥1 defined on (Ω,ℱ,ℙ), we write ξ_n=o_ℙ(1) whenever ξ_nℙ→0, as n→∞. Furthermore, given a sub-σ-field 𝒢⊆ℱ and a random vector ξ (defined possibly on an extension of (Ω,ℱ,ℙ)), we say that ξ_n converges 𝒢-stably in distribution towards ξ, and write ξ_n𝒢-d⟶ξ, if for any 𝒢-measurable random variable (r.v. from now on) ζ, (ξ_n,ζ)→(ξ,ζ) weakly as n→∞. Within the preceding framework, if (X_n(t))_t∈ T,n∈ℕ is a sequence of random fields defined on (Ω,ℱ,ℙ), we will write X_n𝒢-fd⟶X if the finite-dimensional distributions (f.d.d. for short) of X_n converge 𝒢-stably toward the f.d.d. of X. For a concise exposition of stable convergence, see <cit.> and references therein. Let μ be a measure on ℬ(ℝ^d), the Borel sets on ℝ^d, and define ℬ_b^μ(ℝ^d):={A∈ℬ(ℝ^d):μ(A)<∞}. The ℝ^m-valued random field L={L(A):A∈ℬ_b^μ(ℝ^d)} will be called a separable Lévy basis with control measure μ if it satisfies the following: * For every A∈ℬ_b^μ(ℝ^d), L(A) is infinitely divisible (ID for short). * L(A) and L(B) are independent whenever A,B∈ℬ_b^μ(ℝ^d) and A∩ B=∅. * Given a disjoint sequence { A_n} _n≥1⊆ℬ_b^μ(ℝ^d) such that ∪_n=1^∞A_n∈ℬ_b^μ(ℝ^d), it holds almost surely (a.s. for short) L(∪_n=1^∞A_n)=∑_n≥1L(A_n). * For every A∈ℬ_b^μ(ℝ^d) and z∈ℝ^m, we have that 𝔼(exp(𝐢z· L(A)))=exp(μ(A)ψ(z)), where ψ(z):=𝐢γ· z-1/2z·Σ z+∫_ℝ^m\{0}(e^𝐢z· x-1-𝐢z· x1_ x≤1)ν(dx), with γ∈ℝ^m, Σ a m× m positive definite matrix and ν a Lévy measure, i.e. ν({0})=0 and ∫_ℝ^m\{0}(1 x ^2)ν(dx)<∞. When μ=Leb, in which Leb represents the Lebesgue measure on ℝ^d, L is called homogeneous. The ID random vector associated with the characteristic triplet (γ,Σ,ν) is known as the Lévy seed of L, and it will be denoted by L'. As usual, (γ,Σ,ν) will be called the characteristic triplet of L and ψ its characteristic exponent. Any non-zero Lévy measure on ℝ^m admits a polar decomposition ν(B)=∫_𝕊^m-1∫_0^∞1_B(ru)ρ_u(dr)λ(du), where 𝕊^m-1 is the unitary sphere in ℝ^m, λ is a finite measure on 𝕊^m-1, and {ρ_u:u∈𝕊^m-1} is a family of Lévy measures on (0,∞) such that the mapping u↦ρ_u(B) is measurable for all A∈ℬ((0,∞)). Let 0<α≤2 and λ a finite measure on 𝕊^m-1. A separable Lévy basis is called strictly α-stable if its Lévy seed is distributed according to a strictly α-stable distribution with spectral measure λ; that is, L' is centred Gaussian with covariance Σ if α=2, while for 0<α<2 the characteristic triplet of L' has no Gaussian component (Σ=0), its Lévy measure admits the polar decomposition ν(B)=∫_𝕊^m-1∫_0^∞1_B(ru)dr/r^1+αλ(du), and γ=∫_𝕊^m-1uλ(du)/(1-α) if α≠1, while if α=1, γ can be arbitrary but with the restriction that ∫_𝕊^m-1uλ(du)=0. 
For α<2, the characteristic exponent of a strictly α-stable Lévy basis can be written as ψ_α(z):= -∫_𝕊^m-1| z· u|^αφ_α(z,u)λ(du) if α≠1; -∫_𝕊^m-1| z· u|φ_α(z,u)λ(du)+𝐢γ z if α=1, where φ_α(z,u)= 1-𝐢ρsign(z· u)tan(πβ/2) if α≠1; 1+𝐢2/πsign(z· u)log(| z· u|) if α=1. For the facts and concepts discussed in this section, we refer the reader to <cit.>. §.§ Geometrical preliminaries For any A⊆ℝ^d, we let -A={-x:x∈ A}. Furthermore, we denote by A,A̅,∂ A, and A^c the interior, the closure, the boundary, and the complement of A, respectively, and we put A^*=A̅^̅c̅. An open set 𝔇⊆ℝ^d is said to be a Lipschitz domain if its boundary can be locally described as the graph of a Lipschitz function defined on an open set of ℝ^d-1. We will say that a (d-1)-dimensional manifold M⊆ℝ^d is Lipschitz if it is the boundary of a Lipschitz domain. For s>0, the s-dimensional Hausdorff measure will be represented by ℋ^s. Now, fix A⊆ℝ^d a closed set. The metric projection on A, Π_A:ℝ^d→ A, is the set function Π_A(q):={p∈ A:d_A(q)= p-q}, where d_A(q):=inf_p∈ A p-q. We set UnpA:={q∈ℝ^d:∃!p∈ A s.t. d_A(q)= p-q}. The set UnpA is measurable and such that Leb(ℝ^d\UnpA)=0. Under the previous notation, the reduced normal bundle and the reach function of A are given, respectively, by N(A)={(Π_A(q),(q-Π_A(q))/ q-Π_A(q)):q∈Unp(A)\ A}⊆∂ A×𝕊^d-1, and, δ_A(q,u):=0 for (q,u)∈ N(A)^c, while for (q,u)∈ N(A), δ_A(q,u):=inf{t≥0:q+tu∈Unp(A)^c}. Following <cit.>, we will say that a closed set A⊆ℝ^d is gentle if: * For all bounded B∈ℬ(ℝ^d), ℋ^d-1(N(∂ A)∩(B×𝕊^d-1))<∞. * For ℋ^d-1-almost all x∈∂ A, there are non-degenerate balls B_i⊆ A and B_o⊆ A^* containing x. Thus, if A⊆ℝ^d is a gentle set, then: * For ℋ^d-1-almost all x∈∂ A, there is n=n_A(x)∈𝕊^d-1 such that (x,n)∈ N(A) and (x,-n)∈ N(A^*). Furthermore, the mapping x↦(x,n_A(x)) is measurable. * It holds that Leb(∂ A)=0. If in addition A is compact, we also have that ℋ^d-1(∂ A)<∞. * Any translation of A is gentle since for all p∈ℝ^d, d_A+p(q)=d_A(q-p) and Π_A+p(q)=Π_A(q-p)+p. For r≥0, the r-parallel set of A is defined as A_⊕ r:={q∈ℝ^d:d_A(q)≤ r}. For a more detailed exposition of the geometrical terms introduced above, see <cit.> and <cit.>. § LIMIT THEOREMS FOR ENERGY FLUXES Through this section we fix m,d∈ℕ, with d≥2, p_0∈ℝ^d, and a bounded Lipschitz domain 𝔇⊆ℝ^d. We further assume that M, the boundary of 𝔇, is a (d-1)-dimensional compact manifold. For the rest of this paper ∂_kf will represent the partial derivative of a function with respect to its kth variable. The energy flux of a field X through the region ℜ=r𝔇+p_0 is defined as ℰ_r≡ℰ_r(p_0) =∫_rM+p_0ϕ(X(y))· u_rM+p_0(y)ℋ^d-1(dy), r>0, where u_rM+p_0 denotes the unit outward vector of rM+p_0. The scalar quantity ℰ_r represents the flux (integral) of the vector field ϕ(X(·)) across ℜ. When ϕ(X(·)) is a vector field associated with a physical quantity, ℰ_r provides a measure of the physical element passing through the boundary of ℜ. For instance, when X is the velocity vector field of a fluid and ϕ(x)= x ^2x, the scalar quantity ℰ_r measures the kinetic energy flow rate over ℜ. In such a case, ℰ_r is referred to as the kinetic energy flux. Another example within the previous framework is when ϕ(x)=x. In this situation, ℰ_r quantifies the amount of fluid passing through ℜ. If we assume that 0∈𝔇, then p_0∈ℜ. Thus, the quantity ℰ_r(p_0)/|ℜ| converges to the divergence of the random field ϕ(X(·)) at p_0 as r→ 0. 
Therefore, when ℰ_r represents the kinetic energy flux of a fluid, the normalized integral ℰ_r(p_0)/|ℜ| converges to the divergence of the kinetic energy at p_0, as |ℜ|→ 0. For turbulent fluids, this quantity represents a proxy of the energy dissipation at p_0. Note that by the Divergence Theorem, 1/r^d-1ℰ_r=∫_M[ϕ(X(p_0+ry))-ϕ(X(p_0))]· u_M(y)ℋ^d-1(dy). This relation illustrates that ℰ_r can be interpreted as the “average" (on M) of the increments of ϕ(X(·)) projected onto the direction of the outward vector of M. In consequence, the analysis of the local behaviour of energy fluxes reduces to study the asymptotic behaviour (as r↓0) of the functional Z^ϕ,r(t,f):=∫_M[ϕ(X(p_0+rty))-ϕ(X(p_0))]· f(y)ℋ^d-1(dy), t≥0, where f is a measurable function. In this paper, we focus on the case when f∈ L^2(ℋ^d-1⇂_M) and X is the ID field given by X(p):=∫_A+pF(p,q)L(dq), p∈ℝ^d. Here L denotes an ℝ^m-valued homogeneous Lévy basis with characteristic triplet (γ,Σ,ν), F:ℝ^d×ℝ^d→ℝ^d× m is of class C^1, and A⊆ℝ^d a compact set. Note that (<ref>) means that ith element of X(p) follows the dynamics X^(i)(p)=∑_j=1^m∫_A+pF^(i,j)(p,q)L^(j)(dq), i=1,…,d. Since each L^(i) is a homogeneous Lévy basis, and F is continuous, the integrals in (<ref>) are well defined in the sense of <cit.>. §.§ Main Results In this part, we present our main findings on the functionals introduced above. We start by verifying that Z^ϕ,r is well-defined for a large class of test functions. Recall that a function ϕ:ℝ^d→ℝ^d is said to be of polynomial growth of order β≥0 if there is some C>0, such that ϕ(x)≤ C(1+ x ^β), ∀ x∈ℝ^d. Let X be defined as in (<ref>). If ϕ:ℝ^d→ℝ^d is measurable and of polynomial growth of order β≥0, then for all t≥0 and for every f∈ℒ^2(ℋ^d-1⇂_M) ℙ( | Z^ϕ,r(t,f) |<∞)=1. If X models the velocity vector field of an incompressible fluid, then for ϕ(x)=x, necessarily we must have that r^-1 Z^ϕ,r(1,u_M)=1/r^dℰ_r→0, as r↓0. This will rarely be the case in our framework. In fact, as pointed out in <cit.>, the asymptotic behaviour of Z^ϕ,r strongly depends on whether L is of finite variation or not. Our study in the latter case is performed under the following assumption. [A_α] For a given 1<α≤2, the characteristic triplet of L, (γ,Σ,ν), satisfies the following: * If α=2, Σ≠0. * For 1<α<2, Σ=0 and ν admits the polar decomposition in (<ref>). Furthermore, there is a non-zero λ-integrable function K such that as s↓0 s^αρ_u(s,∞)→ K(u), λ-a.a, and sup_u∈𝕊^m-1sup_0≤ s≤ Ts^αρ_u(s,∞)<∞, ∀ T>0 . In the 1-dimensional case, i.e. when m=1, it is well known that (<ref>) implies that the distribution of the “small jumps” of L belongs to the domain of attraction of an α-stable distribution. Not surprisingly, the same result holds in the multivariate context under [stableattractassump]Assumption A_α, see Lemma <ref> below. Finally, we would like to emphasize that (<ref>) is fulfilled if (<ref>) holds and either the support of λ is finite or ρ_u does not depend on u. Examples of infinitely divisible distributions on the real line satisfying [stableattractassump]Assumption A_α are discussed in <cit.>, <cit.>, and references therein. In view of Proposition <ref>, we will also restrict to test functions ϕ of polynomial growth. Thus, for N∈ℕ and β≥ N, C_β^N will denote the family of functions ϕ:ℝ^d→ℝ^d of class C^N such that D^jϕ(x)≤ C(1+ x ^β-j), j=0,1,2,…,N, where D^jϕ denotes the vector containing all the partial derivatives of ϕ of order j. 
A key example is ϕ(x)= x ^2x (the test function associated to the kinetic energy) which belongs to C_3^2. Next, we introduce some auxiliary random fields that will be used for the representation of the limit of Z^ϕ,r. Recall that the support function of a compact set M is defined as h_M(q):=sup{q· p:p∈ M}, q∈ℝ^d. We associate to a gentle compact set (see Section <ref>) A⊆ℝ^d the following σ-finite measures on ℬ(ℝ^+×ℝ^d×𝕊^d-1): μ^±_M,A(B):=∫_0^∞∫_∂ A1_B(s,x+p_0,± n_A(x))h_M(± n_A(x))^+ℋ^d-1(dx)ds, where x^+=max{0,x}. For every (s,x,n)∈ℝ^+× N(A), f∈ L^2(ℋ^d-1⇂_M), and t≥0, put G(t,f,s,x,n):=∫_Mf(y)^'1_[h_M(n)s,+∞)(ty· n)ℋ^d-1(dy)F(p_0,x), where F is the kernel function representing X in (<ref>). Now, for a given ℝ^m-valued homogeneous Lévy basis satisfying [stableattractassump]Assumption A_α, we construct (on an extension of (Ω,ℱ,ℙ)) two ℝ^m-valued independent separable Lévy bases Λ_α^+ and Λ_α^- fulfilling the following: They are strictly α-stable, independent of ℱ, and their control measures are μ^+_M,A and μ^-_M,A, respectively. Additionally, their seed satisfies that: * If α=2, it has covariance Σ. * If 1<α<2, its spectral measure is λ̅(du)=K(u)λ(du). Finally, we let Y^α(t,f):=∫_(0,t]× N(A)G(t,f,s,x,n)·[Λ_α^+(dsd(x,n))-Λ_α^-(dsd(x,n))]. Under the preceding notation, we have: Let [stableattractassump]Assumption A_α hold. Suppose that A is a compact gentle set. Then, for all ϕ∈ C_β^2, as r↓0, r^-1/αZ^ϕ,r(t,f)ℱ-fd⟶∑_i,j=1^dDϕ(X(p_0))^(i,j)Y^α(t,𝐞_j⊗𝐞_if). Here Dϕ denotes the Jacobian of ϕ and 𝐞_j the jth element of the canonical basis of ℝ^d. The finite variation case substantially differs from the preceding framework. More precisely: Assume that Σ=0 and ∫_ℝ^m(1 x )ν(dx)<+∞. Suppose in addition that A is a compact gentle set. Then, for all ϕ∈ C_β^2, t≥0, and f∈ L^2(ℋ^d-1⇂_M), as r↓0, 1/rZ^ϕ,r(t,f)ℙ→t∑_i,j=1^dDϕ(X(p_0))^(i,j)𝒟_X^(i,j)(f,p_0), where 𝒟_X^(i,j)(f,p_0):=∫_Mf(y)^'𝐞_i⊗𝐞_jDX(p_0)yℋ^d-1(dy), in which for i,k=1,…,d, and γ_0=γ-∫_x≤ 1xν(dx), we have DX(p)^(i,k)=∑_j=1^m∫_A+p∂_k+dF^(i,j)(p,q) γ_0^(i)dq + ∫_A+p∂_kF^(i,j)(p,q)L^(j)(dq). The following remarks are in order: * As mentioned above, Z^ϕ,r(t,f) can be seen as the average of the increments ϕ(X(p_0+rty))-ϕ(X(p_0)) over M. Therefore, in the terminology of <cit.>, the limits appearing in Theorems <ref> and <ref> can be seen as the average over M of all the tangent fields around p_0 of the field ϕ(X(·)). In fact, our techniques show that if the assumptions of Theorem <ref> hold, then the sequence r^-1/α[ϕ(X(p_0+rty))-ϕ(X(p_0))], converges stably in distribution towards Dϕ(X(p_0))∫_(0,t]× N(A)F(p_0,x)[Λ_α^+(dsd(x,n))-Λ_α^-(dsd(x,n))], where Λ_α^± as above but we replace M by {y}. A similar result holds under the set-up of Theorem <ref>. * Note that Y^α is degenerated when F(p_0,·+p_0) vanishes in ∂ A, e.g. when ∂ A=𝕊^d-1 and F(p,q)=(1-p-q^2)G(p,q), for some vector-valued function G. In fact, by looking at the proof of Theorem <ref>, in this situation and as long as A is gentle, the conclusion of Theorem <ref> remains valid if we replace γ_0 by γ in (<ref>). This result is valid independently of whether [stableattractassump]Assumption A_α is satisfied or not. * There are other special situations in which Theorem <ref> can be extended (irrespectively of the behaviour of F(p_0,·) in ∂ A) in the infinite variation case. 
For instance, if ϕ(x)=x and L is strictly 1-stable with spectral measure λ and drift γ, our methods show that 1/rZ^ϕ,r(t,f)ℱ-fd⟶∑_iY^1(t,f^i𝐞_i)+t𝒟_X^(i,i)(p_0), where Y^1 is defined in the same way as Y^α but Λ_α^+ and Λ_α^- are replaced by non-trivial strictly 1-stable Lévy bases with spectral measure λ and drift γ. §.§ Processes induced by energy fluxes In this subsection, we study some probabilistic properties of the class of processes induced by the limit of energy fluxes of the form of (<ref>). We start by describing the local behaviour of the energy flux associated with X. In light of the relation ℰ_rt=(rt)^d-1Z^ϕ,r(t,u_M), we deduce from Theorems <ref> and <ref>, and the classical Divergence Theorem that: Let X be as in (<ref>) where A is a compact gentle set. Then, for every ϕ∈ C_β^2, the following holds: * Under [stableattractassump]Assumption A_α, as r↓0, 1/r^d-α-1/αℰ_rtℱ-fd⟶t^d-1∑_i,j=1^dDϕ(X(p_0))^(i,j)Y^α(t,𝐞_j⊗𝐞_iu_M). * If Σ=0 and ∫_ℝ^m(1 x )ν(dx)<+∞, then as r↓0, 1/Leb(r𝔇)ℰ_rtℙ→t^d∑_i,j=1^dDϕ(X(p_0))^(i,j)DX(p_0)^(j,i). It is clear that the nature of the limit processes appearing in the previous result can be described solely by the process (Y^α(t,𝐞_j⊗𝐞_iu_M))_t≥0. For instance, using the spectral representation (<ref>) of Y^α together with its independence from ℱ, we easily deduce that the limit processes in Corollary <ref> are self-similar of index d-α-1/α and d, respectively. Therefore, for the rest of this section, we focus on studying the process (Y^α (t,𝐞_j⊗𝐞_iu_M))_t≥0. For notational convenience, from now on we will write Y_t^α instead of Y^α (t, 𝐞_j⊗𝐞_i u_M). Our next goal is to describe the path properties of Y^α when M is an affine transformation of the sphere of the form M=T𝕊^d-1, in which T is an invertible d× d matrix. Note that by self-similarity Y^α cannot be differentiable at 0 unless it is identically zero (see Remark <ref>). Surprisingly, however, the paths of Y^α are typically absolutely continuous. These findings are described in the next result, in which we will use the following notation: ℌ(ℓ,n):={y∈ℝ^d:y· n=ℓ}, n∈𝕊^d-1, ℓ∈ℝ, and φ(ρ):=(1-ρ^2)^d-1/2 , -1≤ρ≤1. Let M be as in (<ref>). Then, for all 1<α<2, the process (Y_t^α)_t≥0 admits a modification that has absolutely continuous paths almost surely with derivative dY_t^α/dt=∫_(0,t]× N(A)g(t,s,x,n)·(Λ_α^+(dsd(x,n))-Λ_α^-(dsd(x,n))), where g(t,s,x,n):=∂_tφ(s/t)(n·𝐞_i)𝐞_j^'F(p_0,x)ℋ^d-1(T(D_1∩υ(n)^⊥)), s≤ t, in which υ(n):=T^'n/‖ T^'n‖, υ(n)^⊥=ℌ(0,υ(n)) and D_1 is the unit open disk. If d≥3, then the same result holds for α=2. The proof consists in verifying that for μ^±_M,A-a.a. (s,x,n)∈ℝ^+×ℝ^d×𝕊^d-1 ∫_s^tg(r,s,x,n) dr = G(t, 𝐞_j⊗𝐞_iu_M,s,x,n), and that the stochastic Fubini theorem can be applied. From (<ref>) in Subsection <ref> below, we have that for μ^±_M,A-a.a. (s,x,n)∈ℝ^+×ℝ^d×𝕊^d-1 with 0<s<t and h_M(n)>0 it holds that G(t,𝐞_j⊗𝐞_iu_M,s,x,n) = (n·𝐞_i) 𝐞_j^' F(p_0,x)ℋ^d-1(𝔇∩ℌ(h_M(n)s/t , n)). In what follows we fix such (s,x,n). Using that h_M(n)=‖ T^'n‖ it follows easily that 𝔇∩ℌ(h_M(n)s/t,n)=T(D_1∩ℌ(s/t,υ(n))). Therefore we can parametrize 𝔇∩ℌ(h_M(n)s/t,n) as ψ(z)=√(1-(s/t)^2)Tz+s/tυ(n), z∈𝕊^d-1∩υ(n)^⊥. Since 𝕊^d-1∩υ(n)^⊥ is a d-2 dimensional sphere embedded in υ(n)^⊥, we can apply the Area Formula (see, for instance, Section 3.3 in <cit.>) to deduce that ℋ^d-1 (𝔇∩ℌ(h_M(n)s/t,n)) = φ(s/t)ℋ^d-1(T(D_1∩υ(n)^⊥)). Equation (<ref>) now follows easily from (<ref>) and (<ref>). 
Note that the former implies that, for all t>0, almost surely Y_t^α=∫_(0,t]× N(A)∫_s^tg(r,s,x,n)dr·[Λ_α^+(dsd(x,n))-Λ_α^-(dsd(x,n))]. Therefore, in order to finish the proof, we need to verify that the stochastic Fubini theorem can be applied for all 1<α<2, and for α=2 if d≥3. Since each entry of Λ_α^±(dsd(x,n)) are separable 1-dimensional strictly α-stable Lévy basis with control measure μ^±_M,A, according to <cit.> (c.f. Lemma 3 in <cit.>) we can swap the order of integration in (<ref>) whenever ∫_0^t(∫_(0,r] × N(A)|| g(r,s,x,n) ||^αμ^±_M,A(ds d(x,n)) )^1/αdr<∞. This is easily obtained by noting that the inner integral equals to C× r^1-α∫_0^1|φ^'(y)y|^αdy, where, due to the continuity of F and the compactness of the sphere, C=∫_∂ A||(n_A(x)·𝐞_i)𝐞_j^'F(p_0,x)ℋ^d-1(T(D_1∩υ(n_A(x))^⊥))||^α‖ T^'n_A(x)‖ℋ^d-1(dx)<∞. Thus, (<ref>) holds if and only if either 1<α<2 and d arbitrary or α=2 and d≥ 3. § PROOFS Throughout this section, L will denote a homogeneous Lévy basis with characteristic triplet (γ,Σ,ν) and characteristic exponent ψ. The non-random positive constants will be denoted by the generic symbol C>0, and they may change from line to line. If A,B⊆ℝ^d, we set A⊕ B={x+y:x∈ A,y∈ B}, and A⊖ B={x∈ℝ^d:x-B⊆ A}. Let D_1 be the unit disk in ℝ^m. We will assume without loss of generality (w.l.o.g. from now on) that M⊆ D_1. The following fact, which is a straightforward extension of Proposition 2.6 in <cit.>, will be constantly used in our proofs: If f: ℝ^d→ℝ^N× m is integrable w.r.t. L, then the characteristic exponent of the infinitely divisible random vector ξ = ∫_ℝ^d f(q) L(dq) is given by 𝒞(zξ):=log𝔼(exp(𝐢z·ξ))=∫_ℝ^dψ(z^'f(q))dq, z∈ℝ^N. Now, thanks to the Lévy-Itô decomposition for Lévy basis (see <cit.>, c.f. <cit.>), we may and do assume that the random field defined in (<ref>) admits the representation X(p) = ∫_A+pF(p,q)γdq+∫_A+pF(p,q)W(dq) +∫_A+pF(p,q)J_S(dq)+∫_A+pF(p,q)J_B(dq) =: X^(1)(p)+X^(2)(p)+X^(3)(p)+X^(4)(p), where γ∈ℝ^m, W, J_S and J_B are independent ℝ^m-valued homogeneous Lévy basis with characteristic triples (0,Σ,0), (0,0,ν⇂_D_1) and (0, 0, ν⇂_D_1^c), respectively. Here, ν⇂_D denotes the restriction of ν to D. Moreover, X^(4) can be written as X^(4)(p) = ∫_A+p∫_ℝ^m F(p,q)x 1_D_1^c(x) N(dqdx), in which N is a Poisson random measure on ℝ^d×ℝ^m independent of (W,J_S) and with intensity ϱ=Leb⊗ν. Note that the latter integral is ℙ-a.s. well-defined in the Lebesgue sense. If we also have that ∫_ℝ^m(1 x )ν(dy)<∞, X can be further decomposed as X(p) = ∫_A+pF(p,q)γ_0dq+∫_A+pF(p,q)W(dq) +∫_A+p∫_ℝ^m F(p,q)xN(dqdx) =: X̃^(1)(p)+X^(2)(p)+X^(5)(p), where we have let γ_0 = γ-∫_D_1 x ν (dx). §.§ Proof of Proposition <ref> Fix p_0∈ℝ^d and put H(p,q):=F(p,q)1_A(q-p). From (<ref>), it is enough to show that X^(4)(p_0+rty)≤ Cξ_t,r, for some positive (finite a.s.) r.v. ξ_t,r, and that for, all β≥0, m_β=𝔼( X^(i)(p_0+rty) ^β)≤ C, i=1,2,3; uniformly on y∈ M. For simplicity and notational convenience, for the rest of the proof we set p_0=0. Thus, by letting g(q,x):=1_A_⊕ rt(q) x1_D_1^c(x), we see that (<ref>) holds if we set ξ_t,r:=∫_A_⊕ rt∫_ℝ^mg(q,x)N(dqdx), because F is continuous and A+rty⊆ A⊕ rtM⊆ A_⊕ rt. The ℙ-a.s. finiteness of ξ_t,r follows from the fact that ∫1 gdϱ<∞ and Lemma 12.13 in <cit.>. On the other hand, since F is continuous and A compact, it is clear that (<ref>) holds for i=1,2. Now, in view that H(p,·) has compact support, X^(3) has finite moments of all orders and from Corollary 1.2.6. 
in <cit.>, for every θ≥2, m_θ≤ C∫_A+p∫_D_1( F(p_0+rty,q)x ^2 F(p_0+rty,q)x ^θ)ν(dx)dq≤ C, once again by the continuity of F. This easily implies that (<ref>) is also valid for i=3. §.§ Proof of Theorems <ref> and <ref> Before presenting the proof of Theorems <ref> and <ref>, let us make some remarks and establish some basic results. First, since A+p_0 is gentle, w.l.o.g. we may and do assume that p_0=0. If this is not the case, replace X by the ID field X̃(p) = ∫_A+p_0+p F(p_0+p,q) L(dq). Define Y_r(t,f)=∫_M⟨ X(try)-X(0),f(y)⟩ℋ^d-1(dy), t∈ℝ, f∈ L^2(ℋ^d-1⇂_M). According to Lemma <ref> below, Z^ϕ,r(t,f) = ∑_i,jDϕ(X(p_0))^(i,j)Y_r(t,f^(i)𝐞_j) + o_ℙ(1). Thus, by the properties of stable convergence, it is enough to show that Theorems <ref> and <ref> hold when we replace Z^ϕ,r by Y_r, i.e. when ϕ(x)=x. Furthermore, in view that f↦ Y_t^r(f) is linear, we only need to verify that the stated convergence holds for (Y_r(t_1,f),…,Y_r(t_n,f)), for arbitrary t_0:=0<t_1≤⋯≤ t_n and fixed f∈ L^2(ℋ^d-1⇂_M). Next, fix θ_1,…,θ_n∈ℝ and 𝐓=(t_1≤⋯≤ t_n), and define H(p,q):=F(p,q)1_A(q-p), as well as k_l^(1)(r,q) :=∫_Mf(y)^'[F(rt_ly,q)-F(0,q)]ℋ^d-1(dy); k_l^(2)(r,q) :=∫_Mf(y)^'[H(rt_ly,q)-F(0,q)]ℋ^d-1(dy); k_l^(3)(r,q) :=∫_Mf(y)^'H(rt_ly,q)ℋ^d-1(dy). From Lemma 3 in <cit.> and its subsequent remark, we have that ℙ-a.s. ∑_l=1^nθ_lY^r(t_l,f)=Ψ_r(𝐓,f)+Λ_r(𝐓,f)+Φ_r(𝐓,f), where Ψ_r(𝐓,f):= ∑_l=1^nθ_l∫_A∩(∩_k=1^nA⊖ rt_kM)k_l^(1)(r,q)L(dq); Λ_r(𝐓,f):= ∑_l=1^nθ_l∫_A\(∩_k=1^nA⊖ rt_kM)k_l^(2)(r,q)L(dq); Φ_r(𝐓,f):= ∑_l=1^nθ_l∫_(∪_k=1^nA⊕ rt_kM)\ Ak_l^(3)(r,q)L(dq). The following result shows that, in most cases, the leading terms are Λ_r(𝐓,f) and Φ_r(𝐓,f). Suppose that A is a compact gentle set, and F is of class C^1. Then, for all f∈ L^2(ℋ^d-1⇂_M), 1/rΨ_r(𝐓,f)ℙ→∑_l=1^nθ_lt_l∫_Mf(y)^'∇ X(p_0)yℋ^d-1(dy), where ∇ X(p)^(i,k)=∑_j=1^m∫_A+p∂_kF^(i,j)(p,q)L^(j)(dq). For i=1,…,d,j=1,…,m, let R_F^(i,j)(t,q,y):=1/r(F^(i,j)(rty,q)-F^(i,j)(0,q))-t∑_k=1^d∂_kF^(i,j)(0,q)y^(k). By the Mean-Value Theorem and the C^1 property of F we have that |∫_Mf(y)^(i)R_F^(i,j)(t,q,y)ℋ^d-1(dy)|≤ C∫_M f(y) yℋ^d-1(dy)<∞, and, as r↓0, ∫_Mf(y)^(i)R_F^(i,j)(t,q,y)ℋ^d-1(dy)→0, due to the Dominated Convergence Theorem. Applying this to (<ref>) give us that, for all t∈ℝ, 1/r∫_A∫_Mf(y)^'[F(rty,q)-F(0,q)]ℋ^d-1(dy)L(dq)ℙ→t∫_Mf(y)^'DX(0)yℋ^d-1(dy). Hence, it is left to show that, for all t∈ℝ, ∫_A\(∩_k=0^nA⊖ rt_kM)∫_Mf(y)^'[F(rty,q)-F(0,q)/r]ℋ^d-1(dy)L(dq)ℙ→0. This is easily obtained by noticing that, by virtue of (<ref>), the characteristic exponent of the latter integral equals ∫_A\(∩_k=0^nA⊖ rt_kM)ψ[z∫_Mf(y)^'[F(rty,q)-F(0,q)/r]ℋ^d-1(dy)]dq, z∈ℝ, and, due to (<ref>) and the continuity of ψ, it is bounded up to a constant by ∑_k=1^nLeb(A\ A⊖ rt_kM)→0, as r↓0, thanks to Theorem 1 in <cit.>. We are now ready to present a proof of our main results. From our discussion above, Theorem <ref>, Lemma <ref> and Remark <ref> below, it is enough to study the limit behaviour of the functionals Λ_r(𝐓,f) and Φ_r(𝐓,f), defined in (<ref>), in the following cases: Case 1: L is Gaussian with covariance matrix Σ≠0. Case 2: L strictly α-stable, 1<α<2, with spectral measure λ̅(du)=K(u)λ(du). Case 3: Σ=0, ν≡0 and L=γ_0Leb, where γ_0∈ℝ^m. Now, since Λ_r(𝐓,f) and Φ_r(𝐓,f) are independent, we can examine them separately. 
More specifically, by letting Y^α,±(t,f):=±∫_(0,t]× N(A)G(t,f,s,x,n)·Λ_α^±(dsd(x,n)), 1≤α≤2, where Λ_1^±=γ_0μ^±_M,A, we will show that in all cases (α=2 in Case 1, 1<α<2 in Case 2, and α=1 in the last case) it holds that 1/r^1/αΛ_r(𝐓,f)ℱ-d→∑_l=1^nθ_lY^α,+(t_l,f); 1/r^1/αΦ_r(𝐓,f)ℱ-d→∑_l=1^nθ_lY^α,-(t_l,f). Furthermore, by arguing as in the proof of Theorem 6 in <cit.>, it is sufficient to verify that the convergence in (<ref>) holds only weakly. For the rest of the proof, we restrict our attention to Λ_r(𝐓,f) since the arguments used in this case can be easily extrapolated to Φ_r(𝐓,f). Put I_𝐓,f^r(q):=∑_l=1^nθ_l∫_Mf(y)^'[H(rt_ly,q)-F(0,q)]ℋ^d-1(dy). Then from (<ref>) and the strict stability of L, we have that 𝒞(z1/r^1/αΛ_r(𝐓,f))=1/r∫_A\(∩_k=1^nA⊖ rt_kM)ψ_α(zI_𝐓,f^r(q))dq, z∈ℝ, where ψ_α is as in (<ref>) but λ is replaced by λ̅(du)=K(u)λ(du) in Case 2, and ψ_α(w)= -1/2w^'Σ w in Case 1; 𝐢γ_0· w in Case 3. Now set (see Section <ref>) u≡ n_A(x), δ_+=δ(x,u) and δ_-=δ(x,-u). By the continuity of F and ψ_α, ψ_α(zI_𝐓,f^r(·)) is locally bounded. Therefore, we can apply Proposition 4 (Steiner's formula for gentle sets) along with the arguments of Theorem 1 in <cit.> (c.f. <cit.>) to conclude that 𝒞(z1/r^1/αΛ_r(𝐓,f))=1/r∫_∂ Ag_r(x,z)ℋ^d-1(dx)+o(1), in which we have let g_r(x,z):=∫_-δ_-^0ψ_α(zI_𝐓,f^r(x+su))1_(∩_k=1^nA⊖ rt_kM)^c(x+su)ds. Next, we focus on showing that for H^d-1-a.a. x∈∂ A 1/rg_r(x,z)→∫_0^t_nψ_α(-z∑_l=1^nθ_lG(t_l,f,s,x,-u))dsh_M(-u)^+, r↓0. To do this, we first note that since A is gentle, for H^d-1-a.a. x∈∂ A, δ_+,δ_->0 and B_+:=B_δ_+(x+δ_+u)⊂ A^c; B_-:=B_δ_-(x-δ_-u)⊆ A; where B_R(p) is a ball of radius R>0 and centre p. Using this and the relation (A⊖ rtM)^c=A^c⊕ rtM, we deduce that for every t>0 it holds that 1_(A⊖ rtM)^c(q)= 1 if q∈B_++rtp; 0 if q-rtM⊆ B_-, for some p∈ M. Fix x∈∂ A satisfying (<ref>) and choose p_-(x)∈ M such that -p_-· u=h_M(-u). From (<ref>), we infer that for r small enough and δ_+>s>-δ_-, 1_(∩_k=1^nA⊖ rt_kM)^c(q)= 1 if s>o_+(rt_n)-rt_nh_M(-u) 0 if s<-max_k{rt_kh_M (-u) + o_- (rt_k)}, where q=x+su and o_±(r)=(δ_±-√(δ_±^2-r^2)). Hence 1/rg_r(x,z)= ∫_-t_nh_M(-u)^0ψ_α(zI_𝐓,f^r(x+rsu))ds1_h_M(-u)>0+o(1), where we further used that ψ_α(zI_𝐓,f^r(·)) is locally bounded. Let us now compute the limit of I_𝐓,f^r(x+rsu). To do this, thanks to Theorem 10.10 in <cit.>, we may and do assume that ℋ^d-1(M∩{y:ty· u=s})=0. Reasoning as in (<ref>) and (<ref>), we conclude that for all t≥0 and q_r(t)=x+sru-rty, 1_A(q_r(t))= 1 if ty· u>s+o_-(tr)/r 0 if ty· u<s-o_+(tr)/r . Thus, I_𝐓,f^r(q)=- ∑_l=1^nθ_l∫_Mf(y)^'1_t_ly· u<s-o_+(rt_l)/rℋ^d-1(dy)F(0,q_r(t_l))+o(1) → -∑_l=1^nθ_l∫_Mf(y)^'1_(-∞,s](t_ly· u)ℋ^d-1(dy)F(0,x), where we also used (<ref>). The convergence in (<ref>) follows now by applying this to (<ref>) and a simple change of variables. Finally, in view that | g_r(x,z)|≤ C∑_k=1^n∫_-δ_+^01_A_⊕ rt_k^*(x+sn_A(x))ds≤ Cr, the limits in (<ref>) can now be easily obtained by the Dominated Convergence Theorem, (<ref>), and the fact that 1_[sh_M(u),+∞)(ty· u)=0 for s>t. For the sake of the reader, let us summarize our findings: We have shown that under the assumptions of Theorem <ref> and <ref>, it holds that for, 1<α≤2, 1/r^1/α∑_l=1^nθ_lY^r(t_l,f) =1/r^1/αΨ_r(𝐓,f)+1/r^1/αΛ_r(𝐓,f)+o_ℙ(1) ℱ-d→∑_l=1^nθ_lY^α,+(t_l , f) + ∑_l=1^nθ_lY^α,- (t_l,f), and 1/r∑_l=1^nθ_lY^r(t_l,f)= 1/rΨ_r(𝐓,f)+1/rΛ_r(𝐓,f)+1/rΦ_r(𝐓,f) ℙ→ ∑_l=1^nθ_lY^1,+(t_l,f)+∑_l=1^nθ_lY^1,-(t_l,f) +∑_l=1^nθ_lt_l∫_Mf(y)^'∇ X(p_0)yℋ^d-1(dy), respectively. The conclusions of Theorem <ref> follow immediately from (<ref>). 
Finally, using that ∫_0^tG(t,f,s,x,± u)ds=t/h_M(u)∫_Mf(y)^'(± y· u)^+ℋ^d-1(dy)F(p_0,x), h_M(± u)>0, Fubini's Theorem and the Divergence Theorem, we deduce that the sum of Y^1,+(t,f) and Y^1,-(t,f) equals t∑_i,k,m∫_M∫_A+p_0f(y)^(i)∂_k+dF^(i,j) (p_0,x) γ_0^(i)y^(k)dq ℋ^d-1(dy), which combined with (<ref>) concludes the proof of Theorem <ref>. §.§ Two fundamental approximations In this subsection, we show that in the proof of Theorems <ref> and <ref>, it is enough to concentrate on the case when ϕ is the identity function and L a strictly α-stable Lévy basis. Let X be as in (<ref>), with A a compact gentle set and Y_r as in (<ref>). Then, for all ϕ∈ C_β^2, f∈ L^2(ℋ^d-1⇂_M), and 1<α≤2, we have that, as r↓0, 1/r^1/α Z^ϕ,r(t,f)-∑_i,jDϕ(X(p_0))^(i,j)Y_r(t,f^(i)𝐞_j)ℙ→0. If in addition Σ=0 and ∫_ℝ^m(1 x )ν(dx)<∞, then (<ref>) also holds for α=1. By the Mean-Value Theorem, for all ϕ∈ C_β^2, it holds that ϕ(x)-ϕ(y)-Dϕ(y)(x-y)≤ C(1+ y ^β-2)(‖ x-y ^2 x-y ^β), x,y∈ℝ^d. As a result of this, the norm of ϕ(X(p_0+rty))-ϕ(X(p_0))-Dϕ(X(p_0))[X(p_0+rty)-X(p_0)], is bounded up to a random constant (that only depends on X(p_0), f, and M) by X(p_0+rty)-X(p_0) ^2 X(p_0+rty)-X(p_0) ^β. In consequence, it is enough to show that for all i=1,2,3,4 (remember the decomposition (<ref>)), as r↓0, 1/r^1/α∫_M f(y) X^(i)(p_0+rty)-X^(i)(p_0) ^β_0ℋ^d-1(dy)ℙ→0, β_0:=2β. For simplicity and notational convenience, for the rest of the proof, we set p_0=0. Now, for i=1,…,4, we write X^(i)(rty)-X^(i)(0) =∫_A∩ A⊖{rty}(F(rty,q)-F(0,q))L_i(dq) +∫_A⊕{rty}\ AF(rty,q)L_i(dq) +∫_A\ A⊖{rty}(H(rty,q)-F(0,q))L_i(dq) where L_i is the Lévy basis associated to X^(i) via (<ref>) and H as in (<ref>). Using the fact that A⊕ rtM⊆ A_⊕ rt and (A⊖ rtM)^c⊆ A^c⊕ rtD_1, as well as the C^1 property of F, we obtain that X^(1)(rty)-X^(1)(0)≤ C(r+Leb(A_⊕ rt\ A)+Leb(A\ A⊖ rtD_1)). Similarly, by Gaussianity and Corollary 1.2.6. in <cit.>, we infer that 𝔼( X^(2)(rty)-X^(2)(0) ^β_0)≤ C(r^2+Leb(A_⊕ rt\ A)+Leb(A\ A⊖ rtD_1))^β_0/2. and 𝔼( X^(3)(rty)-X^(3)(0) ^β_0)≤ C(r^β_0+Leb(A_⊕ rt\ A)+Leb(A\ A⊖ rtD_1)), respectively. An application of the previous estimates and Theorem 1 in <cit.> show that (<ref>) is valid for i=1,2,3. On the other hand, using (<ref>) and arguing as above, we obtain that uniformly on y∈ M, ℙ-a.s. X^(4)(rty)-X^(4)(0) ^β_0 ≤ C(rχ+ξ_r)^β_0 where χ:=∫_A∫_ℝ^m x1_D_1^c(x)N(dqdx), and ξ_r:=∫_(A_⊕ rt\ A)∪(A\ A⊖ rtD_1)∫_ℝ^m x1_D_1^c(x)N(dqdx). Therefore, in order to see that (<ref>) is also satisfied for i=4, we only need to check that r^-1/αβ_0ξ_rℙ→0. The previous relation is easily obtained by noting that r^-1/αβ_0ξ_r is infinitely divisible with characteristic exponent Leb((A_⊕ rt\ A)∪(A\ A⊖ rtD_1))∫_ x >1(e^𝐢zr^-1/αβ_0 x-1)ν(dx), z∈ℝ, which is bounded up to a constant by r (due to Theorem 1 in <cit.>). Now suppose that Σ=0 and ∫_ℝ^m(1 x )ν(dx)<∞, in such a way that (<ref>) takes the form X(p)=X̃^(1)(p)+X^(5)(p)=∫_A+pF(p,q)γ_0dq+∫_ℝ^d∫_ℝ^mH(p,q)xN(dqdx). Exactly as above, we deduce that 1/r∫_M f(y)X̃^(1)(rty)-X̃^(1)(0) ^β_0ℋ^d-1(dy)→0. Moreover, (<ref>) remains valid for X^(5) if we replace 1_D_1^c by 1 in the definition of χ and ξ_r. Therefore, in order to finish the proof, we need to verify that (<ref>) holds for α=1 under this new definition of ξ_r. To see that this is the case, first note that the characteristic exponent r^-1/β_0ξ_r now equals Leb((A_⊕ rt\ A)∪(A\ A⊖ rtD_1))∫_ℝ^m(e^𝐢zr^-1/β_0 x-1)ν(dx). 
Invoking once again Theorem 1 in <cit.>, we infer that the previous quantity is bounded up to a constant by r∫_ℝ^m(1 r^-1/β_0 zx )ν(dx)≤ Cr(1 r^-1/β_0| z|)→0, as r↓0, because β_0≥2. This concludes the proof. Below, ψ_2(w)=-1/2w^'Σ w and, for 1<α<2, ψ_α is given by (<ref>), where λ is replaced by λ̅(du)=K(u)λ(du). Let ψ be the characteristic exponent of a homogeneous Lévy basis with triplet (γ,Σ,ν). Then, we have the following * If [stableattractassump]Assumption A_α holds, then, as r↓0, rψ(r^-1/αw)→ψ_α(w). * When Σ=0 and ∫_ℝ^m(1 x )ν(dx)<∞, as r↓0, r ψ(r^-1/αw)→𝐢γ_0· w. We will only concentrate on the case where [stableattractassump]Assumption A_α is satisfied for 1<α<2 (the other cases are well known). In this situation, we can write, for τ(x)=1_ x≤1+(1/ x )1_ x >1, rψ(r^-1/αw)=𝐢γ_r· w+∫_ℝ^m\{0}(e^𝐢w· x-1-𝐢w· xτ(x))ν_r(dx), where γ_r = r^1-1/αγ + ∫_ℝ^m\{0} x (τ(x)-1_ x≤1)ν_r(dx) and ν_r(dx) = rν(r^1/αdx). According to Theorem 8.7 in <cit.>, we only need to check that for every continuous and bounded function f vanishing on a neighborhood of 0∈ℝ^m, ∫_ℝ^mf(x)ν_r(dx)=r∫_𝕊^m-1∫_0^∞ f(r^-1/αsu)ρ_u(ds)λ(du)→∫_𝕊^m-1∫_0^∞f(su)ds/s^1+αK(u)λ(du), and that, for all z∈ℝ^m, lim_ϵ↓0lim sup_r↓0∫_ x≤ϵ(z· x)^2ν_r(dx)=0. Set ρ_u,r(ds):=rρ_u(r^1/αds). Equation (<ref>) implies that for any function g:ℝ→ℝ continuous and bounded vanishing on a neighborhood of 0∈ℝ, it holds that for λ-almost all u∈𝕊^m-1 ∫_0^∞g(s)ρ_u,r(ds)→∫_0^∞g(s)ds/s^1+αK(u). The convergence in (<ref>) now follows by applying (<ref>) to the function g(s)=f(su) along with (<ref>) and the Dominated Convergence Theorem. On the other hand, from (<ref>) and Tonelli's Theorem, we deduce that, for all ϵ>0, ∫_ x≤ϵ(z· x)^2ν_r(dx)≤ C∫_0^ϵ∫_𝕊^m-1rρ_u(r^1/αy,+∞)λ(du)ydy≤ C∫_0^ϵy^1-αdy, from which (<ref>) follows trivially. Let Λ_r^α(𝐓,f) be as in (<ref>) but L is replaced by: * A homogeneous Gaussian Lévy basis with covariance matrix Σ if α=2. * L=γ_0Leb, when α=1. * A strictly α-stable homogeneous Gaussian Lévy basis if 1<α<2 with spectral measure λ̅(du) = K(u) λ(du). By following the proof of Lemma 5 in <cit.>, we deduce from the previous result that Λ_r(𝐓,f)d=Λ_r^α(𝐓,f)+o_ℙ(r^-1/α). A similar approximation is valid for Φ_r(𝐓,f). §.§ A useful identity Recall that 𝔇⊆ℝ^d is a bounded Lipschitz domain whose boundary M is a (d-1)-dimensional compact manifold. For every t>0, set g_t(s,x,n):=∫_Mu_M(y)1_[h_M(n)s,+∞)(ty· n)ℋ^d-1(dy)1_0<s<t. In the next result, we find a semi-explicit representation of g_t. Suppose that M is of class C^2. Then, for μ^±_M,A-a.a. (s,x,n) ∈ℝ^+×ℝ^d×𝕊^d-1, g_t(s,x,n)=nℋ^d-1(𝔇∩ℌ(h_M(n)s/t,n))1_0<s<t, where ℌ(ℓ,n) := {y∈ℝ^d:y· n=ℓ}. Set N = {(s,x,n): ℋ^d-2 (M∩ℌ(h_M(n)s/t,n))=+∞,s≥0} ={(s,x,n):ℋ^d-2(M∩ℌ(h_M(n)s/t,n))=+∞,0≤ s≤ t}, where the second identity follows by the definition of the support function. By Tonelli's Theorem and Theorem 10.10 in <cit.>, N is μ^±_M,A-null set. Let us now verify that (<ref>) is valid for every (s,x,n)∈ N^c such that h_M(n)>0 and t>s>0. For such a triplet (s,x,n), we have that h_M(n)>ℓ:=h_M(n)s/t>0 and ℋ^d-1(M∩ℌ(ℓ,n))=0. For every ℓ>ε>0, let M_ε be a (d-1)-dimensional compact manifold of class C^2 contained in {y∈ℝ^d:ℓ-ε≤ y· n≤ h_M(n)} such that M_ε∩ℌ(ℓ-ε,n)=𝔇∩ℌ(ℓ-ε,n), and M_ε∩{y∈ℝ^d:ℓ≤ y· n≤ h_M(n)}=M∩{y∈ℝ^d:ℓ≤ y· n≤ h_M(n)}. By construction, the outward vector of M_ε satisfies that u_M_ε(y)= -n if y· n=ℓ-ε u_M(y) if ℓ≤ y · n≤ h_M(n) . Thus, by the Divergence Theorem and (<ref>), we deduce that g_t(s,x,n) = nℋ^d-1(𝔇∩ℌ (ℓ-ε,n)) + o(ε). 
In view that 𝔇 is compact, there is a ball B with radius ρ>h_M(n)>0 such that 𝔇∩ℌ(ℓ-ε,n)⊆ B∩ℌ(ℓ-ε,n). For ε small enough, B∩ℌ(ℓ-ε,n) is a (d-1)-dimensional ball embedded on ℌ(ℓ-ε,n) with radius √(ρ^2-(ℓ-ε)^2). The preceding observation allows us to apply the Generalized Dominated Convergence Theorem in (<ref>) to conclude that (<ref>) is indeed valid.
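As a plausibility check of the semi-explicit representation of g_t, one can verify it numerically in the simplest case where 𝔇 is the unit ball in ℝ^3, so that M=𝕊^2, h_M(n)=1, u_M(y)=y, and the hyperplane section is a disk of radius √(1-(s/t)^2). The sketch below is ours and is intended only as a toy illustration of that identity.

```python
import numpy as np

rng = np.random.default_rng(2)

def cap_normal_integral(n_dir, c, n_samples=2_000_000):
    """Monte Carlo estimate of  int_{S^2} u_M(y) 1{ y.n >= c } H^2(dy)
    for the unit sphere, where the outward normal is u_M(y) = y."""
    y = rng.normal(size=(n_samples, 3))
    y /= np.linalg.norm(y, axis=1, keepdims=True)      # uniform on the sphere
    mask = (y @ n_dir >= c)[:, None]
    return 4.0 * np.pi * (y * mask).mean(axis=0)       # mean times total area

n_dir = np.array([1.0, 2.0, -0.5])
n_dir /= np.linalg.norm(n_dir)

for c in (0.2, 0.5, 0.8):                 # c plays the role of h_M(n) s / t
    lhs = cap_normal_integral(n_dir, c)
    rhs = n_dir * np.pi * (1.0 - c**2)    # n times the area of the disk section
    print(c, lhs, rhs)
```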
http://arxiv.org/abs/2307.04231v1
20230709171314
Mx2M: Masked Cross-Modality Modeling in Domain Adaptation for 3D Semantic Segmentation
[ "Boxiang Zhang", "Zunran Wang", "Yonggen Ling", "Yuanyuan Guan", "Shenghao Zhang", "Wenhui Li" ]
cs.CV
[ "cs.CV" ]
Mx2M: Masked Cross-Modality Modeling in Domain Adaptation for 3D Semantic Segmentation
Boxiang Zhang, Zunran Wang, Yonggen Ling, Yuanyuan Guan, Shenghao Zhang, Wenhui Li
===============================================================
Existing methods of cross-modal domain adaptation for 3D semantic segmentation predict results only via the 2D-3D complementarity that is obtained by cross-modal feature matching. However, since supervision is lacking in the target domain, this complementarity is not always reliable. The results are not ideal when the domain gap is large. To solve the problem of lacking supervision, we introduce masked modeling into this task and propose a method, Mx2M, which utilizes masked cross-modality modeling to reduce the large domain gap. Our Mx2M contains two components. One is the core solution, cross-modal removal and prediction (xMRP), which makes the Mx2M adapt to various scenarios and provides cross-modal self-supervision. The other is a new way of cross-modal feature matching, the dynamic cross-modal filter (DxMF), which ensures that the whole method dynamically exploits more suitable 2D-3D complementarity. Evaluation of the Mx2M on three DA scenarios, including Day/Night, USA/Singapore, and A2D2/SemanticKITTI, brings large improvements over previous methods on many metrics. § INTRODUCTION 3D semantic segmentation methods <cit.> often encounter the problem of shift or gap between different but related domains (e.g. day and night). The task of cross-modal domain adaptation (DA) for 3D segmentation <cit.> is designed to address this problem and is inspired by the fact that 3D datasets usually contain both 2D and 3D modalities. Like most DA tasks, labels here are only available in the source domain, whereas the target domain has no segmentation labels. Existing methods, i.e. xMUDA <cit.> and its heirs <cit.>, extract 2D and 3D features through two networks and exploit the cross-modal complementarity by feature matching to predict results. However, since supervision is lacking in the target domain, the robustness of this complementarity is limited. As shown in the left part of Fig.<ref>, if the domain gap is large and both networks underperform on the target domain, these methods appear weak. The problem of lacking supervision once constrained visual pre-training and has been solved there by masked modeling methods <cit.>, which have been shown to act as a form of data augmentation <cit.>. Their core solution is simple: remove a portion of the inputs and learn to predict the removed contents. Models are fitted with sufficient data in this way, so that they learn more internal semantic correspondences and realize self-supervision <cit.>. For this DA task, such data augmentation and the resulting self-supervision can improve robustness and reduce the gap. Hence the idea is natural: if we introduce masked modeling into the task, the lack of supervision on the target domain, and thus the large gap, can be addressed. Nevertheless, two problems are key to introducing masked modeling. a) The core solution ought to be re-designed to fit this task, where there are two modalities. b) For the cross-modal feature matching, we should explore a new scheme that suits the addition of masked modeling. Given these observations, we propose a new method, Mx2M, which utilizes masked cross-modality modeling to solve the problem of lacking supervision for the DA of 3D segmentation. Our Mx2M reduces the large domain gap by adding two new components to the common backbone for this task, corresponding to the above two problems.
For the first one, we design the core solution in the Mx2M, cross-modal removal and prediction (xMRP). As the name implies, we inherit the 'removal-and-prediction' proceeding in the core solution of masked single-modality modeling and improve it with the cross-modal working manner for this task. During removal, the xMRP has two changes. i) Our CNN backbone cannot perform well with highly destroyed object shapes <cit.>, so the masked portion is less. ii) To guarantee the existence of full semantics in this segmentation task, we do not mask all inputs and ensure at least one modality complete in each input. We can obtain the different xMRP by controlling the removal proceeding, which makes the Mx2M adapt to various DA scenarios. During prediction, to learn more 2D-3D correspondences beneficial to networks <cit.>, we mask images/points and predict the full content in points/images by two new branches. In this way, cross-modal self-supervision can be provided for the whole method. As for the second problem, we propose the dynamic cross-modal filter (DxMF) to dynamically construct the cross-modal feature matching by locations, which is inspired by impressive gains when dynamically establishing kernel-feature correspondences in SOLO V2 <cit.>. Similarly, in our DxMF, we structure the 2D-3D kernel-feature correspondences. Kernels for one modality are generated by features from the other, which then act on features for this modality and generate the segmentation results by locations. With the joining of the DxMF, the Mx2M can dynamically exploit the complementarity between modalities. As is shown in the right part of Fig.<ref>, with these two components, our Mx2M gains good results even in the scenario with a large domain gap. To verify the performance of the proposed Mx2M, we test it on three DA scenarios in <cit.>, including USA/Singapore, Day/Night, and A2D2/SemanticKITTI. Our Mx2M attains better results compared with most state-of-the-art methods, which indicates its effectiveness. In summary, our main contributions are as follows: * We innovatively propose a new method Mx2M, which utilizes masked cross-modality modeling to reduce the large domain gap for DA of 3D segmentation. To our knowledge, it is the first time that masked modeling is introduced into a cross-modal DA task. * Two components are specially designed for this task, including xMRP and DxMF, which ensures the Mx2M effectively works and deals with various scenarios. * We achieve high-quality results on three real-to-real DA scenarios, which makes the Mx2M the new state-of-the-art method. The good results demonstrate its practicality. § RELATED WORK Domain Adaptation for 3D Segmentation.Most works pay attention to DA for 2D segmentation <cit.>, which are hard to be applied to unstructured and unordered 3D point clouds. The DA methods for 3D segmentation <cit.> are relatively few, but they also do not fully use the datasets that often contain both images and points. Hence, xMUDA <cit.> and its heirs <cit.> with cross-modal networks are proposed, which achieve better adaptation. Our Mx2M also adopts cross-modal networks, which has the same backbone as xMUDA. Masked Modeling.The masked modeling was first applied as masked language modeling <cit.>, which essentially belongs to data augmentation <cit.>. Nowadays, it has been the core operation in self-supervised learning for many modalities, such as masked image modeling <cit.>, masked point modeling <cit.>, and masked speech modeling <cit.>. 
Their solutions are the same: removing a portion of the data and learning to predict the removed content. The models are fitted with sufficient data in this way so that the lacking of supervision is satisfied. Our Mx2M designs the masked cross-modality modeling for DA in 3D segmentation that uses point and image. Cross-modal Learning. Cross-modal learning aims at taking advantage of data from multiple modalities. For visual tasks, the most common scene using it is learning the 3D task from images and point clouds <cit.>. The detailed learning means are various, including 2D-3D feature matching <cit.>, 2D-3D feature fusion <cit.>, 2D-3D cross-modal supervision <cit.>, etc. Besides, there are also some works conducting cross-modal learning on other modalities, such as video and medical image <cit.>, image and language <cit.>, as well as video and speech <cit.>. Cross-modal learning is also exploited in our M2xM: the core procedure xMRP leverages the cross-modal supervision, while the DxMF works in the way of 2D-3D feature matching. § METHOD Our Mx2M is designed for DA in 3D segmentation assuming the presence of 2D images and 3D point clouds, which is the same as xMUDA <cit.>. For each DA scenario, we define a source dataset 𝒮, each sample of which contains a 2D image X^2D,S, a 3D point cloud X^3D,S, and a corresponding 3D segmentation label Y^3D,S. There also exists a target dataset 𝒯 lacking annotations, where each sample only consists of image X^2D,T and point cloud X^3D,T. The images and point clouds in 𝒮 and 𝒯 are in the same spatial sizes, i.e. X^2D∈ℝ^H×W× 3 and X^3D∈ℝ^N× 3. Based on these definitions, we will showcase our Mx2M. §.§ Network Architecture The architecture of the Mx2M is shown in Fig.<ref>. For a fair comparison with previous methods <cit.>, we also use the same backbone to extract features: a SparseConvNet <cit.> for the 3D network and a modified version of U-Net <cit.> with ResNet-34 <cit.> pre-trained on ImageNet <cit.> for the 2D one. Their output features, H^2D and H^3D, have the same length N equaling the number of 3D points, where H^2D is gained by projecting the points into the image and sampling the 2D features at corresponding pixels. H^2D and H^3D are then sent into two groups of the same three heads, each group of which is for one modality. During these heads, the ones that predict masked 2D/3D contents M^2D → 3D and M^3D → 2D belong to xMRP. We will introduce them and the proceeding of masking inputs in Sec.<ref>. Besides them, the other heads all participate in feature matching. The heads that predict final segmentation results P^2D and P^3D are our DxMFs (detailed in Sec.<ref>). The heads that mimick the outputs from cross-modality are the linear layers inherited from xMUDA <cit.>, where the outputs are P^2D → 3D and P^3D → 2D. As for the information flow, we illustrate it in Fig.<ref>(b). The whole network is alternately trained on the source and the target domain. When the models are trained on the source domain, all six heads work. The heads for xMRP are respectively self-supervised by the origin image/point. The two DxMF heads that predict the segmentation results are both supervised by Y^3D,S. The two mimicking heads are internally supervised by the outputs from the cross-modal DxMF heads (e.g. P^3D → 2D supervised by P^2D). When the models are trained on the target domain, the DxMFs heads cannot be supervised because of lacking annotations. The other heads normally work as above. 
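To make the information flow above concrete, the following PyTorch-style sketch assembles the losses for one source-domain batch, with toy tensors standing in for the backbone outputs. The module names, feature sizes, placeholder reconstruction targets, the un-weighted sum of losses and the detached mimicry targets are our assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

# Toy stand-ins for the per-point features the real backbones would produce:
# a 2D U-Net sampled at the N projected pixels, and a 3D SparseConvNet
# evaluated at the N points.  Sizes are ours, not the paper's.
N, C2, C3, num_classes = 1024, 64, 16, 10
H2d, H3d = torch.randn(N, C2), torch.randn(N, C3)

# Six heads, one group per modality (plain linear layers here stand in for
# the DxMF segmentation heads and the MLP xMRP heads described later).
seg2d, seg3d = torch.nn.Linear(C2, num_classes), torch.nn.Linear(C3, num_classes)
mim2d, mim3d = torch.nn.Linear(C2, num_classes), torch.nn.Linear(C3, num_classes)
xmrp2d, xmrp3d = torch.nn.Linear(C2, 3), torch.nn.Linear(C3, 3)

P2d, P3d = seg2d(H2d), seg3d(H3d)                  # segmentation logits
P2d_to_3d, P3d_to_2d = mim2d(H2d), mim3d(H3d)      # mimicking logits
M2d_to_3d, M3d_to_2d = xmrp2d(H2d), xmrp3d(H3d)    # cross-modal reconstructions

# Source domain only: supervised segmentation on both branches.
labels = torch.randint(0, num_classes, (N,))
loss_seg = F.cross_entropy(P2d, labels) + F.cross_entropy(P3d, labels)

# Both domains: each mimicking head matches the other modality's segmentation
# distribution (detaching the target here is our assumption).
loss_mim = F.kl_div(F.log_softmax(P2d_to_3d, -1), F.softmax(P3d.detach(), -1),
                    reduction="batchmean") \
         + F.kl_div(F.log_softmax(P3d_to_2d, -1), F.softmax(P2d.detach(), -1),
                    reduction="batchmean")

# Both domains: xMRP self-supervision, L2 against the full other-modality input
# (random placeholders here for point coordinates / sampled pixel values).
pts, pix = torch.randn(N, 3), torch.randn(N, 3)
loss_xmrp = F.mse_loss(M2d_to_3d, pts) + F.mse_loss(M3d_to_2d, pix)

loss_source = loss_seg + loss_mim + loss_xmrp   # target batches drop loss_seg
```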
The loss functions of segmentation and mimicking heads are the same as previous methods <cit.> for convenience, where the positions are like in Fig.<ref>(b). The CE(·) and KL(·) are loss functions of cross-entropy and KL divergence, respectively. §.§ xMRP The core solution of the Mx2M, xMRP, removes a portion of the data in one modality and learns to predict the full content in the other one, which is related but different from the core solution in masked single-modality modeling. As the name implies, this procedure is divided into two steps. For the step of removal, we randomly select some patches of the image/points and mask them inspired by the way in MAE <cit.>. Considering that 3D points are hard to mask by patches, we first project them into the image. We use two hyper-parameters to control the masking proceeding: the p indicating the size of each patch, and the mr representing the masking ratio of the whole image/points (i.e. masking mr of all patches). The mr cannot be as high as that in <cit.> because the CNN backbone in our method cannot perform well if the shape of objects is highly destroyed <cit.>. Besides, due to our segmentation task, the inputs cannot always be masked and at least one modality is complete to guarantee the existence of full semantics. Thus we use another two hyper-parameters to define the ratio when masking each modality: m_2D meaning the ratio when masking 2D and m_3D indicating when masking 3D (i.e. masking images at times of m_2D, masking points on times of m_3D, and no masking when (1-m_2D-m_3D)). We can control the inputs by (p, mr, m_2D, m_3D) to make the model adapt to different DA scenarios. As is shown in Fig.<ref>, X^2D and X^3D processed by these hyper-parameters (denoted as the new X^2D and X^3D) are sent into the networks as inputs. The next step is the cross-modal prediction that provides self-supervision. Inspired by the conclusion in <cit.> about the good effect of MLP on unsupervised tasks, we use the same MLP heads with middle channels of 4096 for both 2D and 3D to generate the results M^2D→3D and M^3D→2D for 3D and 2D, respectively. Motivated by <cit.>, the losses are correspondingly calculated as follows: ℒ_2D=L_2(X^3D||M^2D→3D), and ℒ_3D=L_2(X^2D||M^3D→2D) . The X^3D means the original 3D point clouds. The X^2D indicates the sampled pixels when X^3D projects into the original image. L_2(·) signs the mean squared error. It is noteworthy that we predict the full contents rather than the removed ones in masked single-modality modeling. The model can learn more 2D-3D correspondences from non-masked parts because the masked modality is different from the predicted one, which is not available in methods of masked single-modality modeling. Herein we finish the core proceeding of our Mx2M. The (p, mr, m_2D, m_3D) are set as (16, 0.15, 0.2, 0.2), (4, 0.3, 0.1, 0.3), and (4, 0.25, 0.3, 0.1) for scenarios of USA/Singapore, Day/Night, and A2D2/SemanticKITTI, respectively. The experiments for USA/Singapore are reported in Sec.<ref> and the other two are in Appendix.A. Our network can learn sufficient 2D-3D correspondences on different DA scenarios in this way, which fixes the lacking of supervision and then reduces the domain gap. §.§ DxMF The whole network can learn more complementarity between modalities by feature matching, so it is still important for our Mx2M. 
§.§ DxMF Feature matching lets the whole network learn more complementarity between the modalities, so it remains important for our Mx2M. Inspired by SOLO V2 <cit.>, which improves considerably over SOLO <cit.> through location-based kernel-feature correspondences, our DxMF constructs cross-modal kernel-feature correspondences for feature matching. The pipeline is shown in Fig.<ref>(a). Instead of the simple final linear layers of xMUDA <cit.>, we use dynamic filters to produce the segmentation results. We take the segmentation of the 2D results as an example to illustrate DxMF; the 3D side works analogously. The kernel weights W^2D∈ℝ^N× F^2D×C of the filter for 2D segmentation are generated from the 3D features H^3D by a linear layer (similarly, W^3D∈ℝ^N× F^3D×C is generated from H^2D). Since the 2D features H^2D have spatial size (N,F^2D), the result for one point is P^2D_i=W^2D_i ∗ H^2D_i, where i ∈N and ∗ denotes the dynamic convolution. Joining all P^2D_i gives the segmentation result P^2D. Because the 2D-3D correspondences for feature matching are constructed dynamically, the model learns a more suitable complementarity than with the schemes of previous methods <cit.>. We provide experiments on this comparison, as well as on applying dynamic feature matching to the other heads; the results are reported in Sec.<ref>.
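The per-point dynamic filtering above amounts to a batched matrix-vector product. The snippet below is a minimal sketch of that idea, assuming per-point features of width F and C classes; it is not the authors' code, and the linear generator is a stand-in for whatever layer produces the kernels.

```python
import torch
import torch.nn as nn

class DynamicCrossModalHead(nn.Module):
    """Generate a per-point kernel from the *other* modality's features and apply it
    to this modality's features: P_i = W_i * H_i (a per-point dynamic convolution)."""
    def __init__(self, feat_dim_self, feat_dim_other, num_classes):
        super().__init__()
        self.kernel_gen = nn.Linear(feat_dim_other, feat_dim_self * num_classes)
        self.num_classes = num_classes

    def forward(self, h_self, h_other):
        # h_self:  (N, F_self)   features of the modality being segmented
        # h_other: (N, F_other)  features of the other modality (kernel source)
        n, f = h_self.shape
        w = self.kernel_gen(h_other).view(n, f, self.num_classes)   # (N, F, C)
        logits = torch.einsum('nf,nfc->nc', h_self, w)              # (N, C) per-point scores
        return logits

# e.g. logits_2d = DynamicCrossModalHead(F2, F3, C)(h2d, h3d)   # W^2D generated from H^3D
```

The design choice mirrors the equation above: each point gets its own 1x1 filter, so the 2D prediction is explicitly conditioned on the 3D features (and vice versa), rather than matching the two modalities only through a loss term.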
§ EXPERIMENTS §.§ Implementation Details Datasets. We follow the three real-to-real adaptation scenarios of xMUDA <cit.>: country-to-country, day-to-night, and dataset-to-dataset, with increasingly large domain gaps. Three autonomous driving datasets are used, namely nuScenes <cit.>, A2D2 <cit.>, and SemanticKITTI <cit.>, in which LiDAR and camera are synchronized and calibrated, so the projection between a 3D point and its corresponding 2D pixel can be computed. We only use the 3D annotations for segmentation. In nuScenes, which provides 3D bounding boxes rather than segmentation labels, a point falling into a box is assigned the label of that object. nuScenes is used to generate the Day/Night and USA/Singapore splits, corresponding to day-to-night and country-to-country adaptation. The other two datasets form the A2D2/SemanticKITTI scenario (i.e. dataset-to-dataset adaptation), where the classes are merged into 10 according to the alignment in <cit.>. Metrics. As in other segmentation works, the mean intersection over union (mIoU) is adopted to evaluate both the 2D and the 3D models on all datasets. In addition, we follow the combined mIoU of <cit.>, which jointly considers both modalities by averaging the predicted 2D and 3D probabilities after softmax (denoted 'Avg mIoU'). Inputs & Labels. To simplify masked modeling, we resize images so that their sides are divisible by p. The images in nuScenes (i.e. Day/Night and USA/Singapore) are resized to 400×224, whereas those in A2D2 and SemanticKITTI are resized to 480×304. All images are normalized and then serve as the inputs/labels of the 2D/3D network. For the points, a voxel size of 5cm is adopted for the 3D network, small enough that only one 3D point lies in each voxel. The coordinates of these voxels serve as labels for the 2D network. Training. We use the PyTorch 1.7.1 framework on an NVIDIA Tesla V100 GPU with 32GB RAM under CUDA 11.0 and cuDNN 8.0.5. For nuScenes, mini-batch Adam <cit.> is configured with a batch size of 8, β_1 of 0.9, and β_2 of 0.999. All models are trained for 100k iterations with an initial learning rate of 1e-3, which is divided by 10 at the 80k and again at the 90k iteration. For A2D2/SemanticKITTI, the batch size is set to 4 because of limited memory, the models are trained for 200k iterations, and the other settings are kept the same. The models marked '+PL' share the above procedure, with the segmentation heads additionally supervised by pseudo labels on the target dataset. For these pseudo labels we strictly follow <cit.> to avoid manual supervision, i.e. we generate them offline using the last checkpoints of the models trained without PL. §.§ Ablation Studies To assess the effectiveness of each component, we conduct ablation studies on them separately. Since xMUDA <cit.> is the first method for cross-modal DA in 3D segmentation and is the baseline of all related methods <cit.>, we follow this convention and choose xMUDA as our baseline. By default, all results are reported on the USA/Singapore scenario. For a fair comparison, models under each setting are trained for 100k iterations with a batch size of 8. Experiments on the other scenarios are reported in Sec.6. §.§.§ Ablation on xMRP As mentioned in Sec.<ref>, xMRP uses four hyper-parameters (p, mr, m_2D, m_3D) to control the masking of the inputs and two MLP heads to predict the cross-modality. To validate the masked cross-modality modeling strategy, we insert simple xMRPs into xMUDA. We select (4, 0.15, 0.1, 0.1) as the starting point because its low mask ratio and low 2D/3D masking ratios suit the segmentation task, and we start from the simplest linear prediction heads. The mIoU for (2D, 3D) in this setting is (60.0, 53.4), better than the (59.3, 52.0) of xMUDA. These results demonstrate the significance of masked cross-modality modeling. We next examine the detailed settings. Ablation on Hyper-parameters. To determine suitable input settings for the current scenario, we run ablations on (p, mr, m_2D, m_3D) in turn. Starting from (4, 0.15, 0.1, 0.1), we first fix the other values and vary p; the 2D and 3D mIoU are shown in Tab.<ref>(a), with the best metrics at p=16. We next determine mr, with results in Tab.<ref>(b): both metrics drop when mr is raised further, and also when mr is lowered to 0.10, so the best results are obtained at mr=0.15. Finally, we determine m_2D and m_3D. As mentioned in Sec.<ref>, (1-m_2D-m_3D)>0 so that the full semantics are preserved. We evaluate many combinations of these two hyper-parameters, detailed in Tab.<ref>. The metrics degrade when m_2D and m_3D are too large, consistent with the fact that our CNN backbones cannot absorb a mask ratio as high as in <cit.>. With the suitable values m_2D=0.2 and m_3D=0.2 we obtain (61.4, 56.5), yielding the final hyper-parameters (16, 0.15, 0.2, 0.2) for this scenario. Ablation on Removal and Prediction. The results of (61.4, 56.5) above are obtained with the simple linear layer. Following the conclusion in <cit.> that an MLP head helps, we compare a linear layer, a single MLP with 4096 middle channels, and two identical MLPs with 4096 middle channels, each used to predict both modalities; the results are shown in Tab.<ref>(c). A single MLP also works well for our DA task.
Besides the cross-modal strategy, we also try several other removal-prediction strategies; the segmentation metrics are shown in Tab.<ref>(a). We test removing and predicting content within a single modality for both modalities (denoted '2D+3D'), only in the 3D point clouds, and only in the 2D images, where only the removed portions are used as labels. '2D+3D' yields results similar to xMUDA <cit.>, because only the few masked patches contribute and they carry little information in this scheme. The cross-modal scheme, in contrast, performs well thanks to 2D-3D correspondences drawn from the full contents, which benefits this task <cit.>. In total, xMRP brings a (+2.0, +4.2) increase in 2D and 3D performance, respectively. §.§.§ Ablation on DxMF All the experiments above use the same feature-matching scheme as xMUDA <cit.>, where the segmentation results come from two linear layers. We now evaluate our DxMF, which achieves cross-modal feature matching and segmentation by dynamically constructing kernel-feature correspondences. The comparison is shown in the first two rows of Tab.<ref>(b): DxMF yields better 2D-3D complementarity and especially improves the 3D performance. We also combine the sparse-to-dense cross-modal feature matching of DsCML <cit.> with masked cross-modality modeling; the metrics are given in the last three rows of Tab.<ref>(c). Results marked with '†' come from our runs of the official source code, while unmarked ones are taken from the paper. Since our experiments build on the official source code, adding xMRP still brings an improvement. Overall, these metrics confirm the effectiveness of DxMF. The results of using DxMF alone are reported in the first two rows of Tab.<ref>(c). Furthermore, Mx2M has three output heads per modality (Sec.<ref>), so in addition to the prediction heads we also try attaching DxMF to the other heads of both modalities. The results are reported in Tab.<ref>(c): neither the mimicking heads nor the xMRP heads benefit from DxMF. We infer that the former are not involved in segmentation, while the latter behave like the single-modal prediction in Tab.<ref>(a); neither situation suits DxMF. After all experiments, Mx2M outperforms the xMUDA baseline by (+4.8, +12.2) for 2D and 3D in total, which shows that Mx2M works. §.§ Limitations Since previous works <cit.> introduce adversarial learning (AL) into DA for 3D semantic segmentation, we also add extra AL heads in both 2D and 3D, using the simple AL of AUDA <cit.> and the CMAL of DsCML <cit.>. The resulting 2D and 3D metrics are not ideal: (56.26, 51.76) for the AL of AUDA and (49.75, 41.94) for CMAL. Compared with the (64.1, 64.2) of the scheme without AL, they drop considerably. We regard this mismatch with AL as a limitation of Mx2M. §.§ Comparison with The State-of-the-art We evaluate Mx2M on the three real-to-real DA scenarios above and compare it with other methods. First, we train the backbones on the source only and on the target only (except for Day/Night, where batches of 50%/50% Day/Night are used to prevent overfitting); these two results can be seen as the lower and upper limits of DA effectiveness. Next, we compare with some representative uni-modal DA methods.
These uni-modal methods are evaluated on the same backbones as ours, U-Net with ResNet-34 for 2D and SparseConvNet for 3D; for convenience we use the results from <cit.>. Finally, we also compare our method with several cross-modal methods, including xMUDA <cit.>, AUDA <cit.>, and DsCML <cit.>. These cross-modal methods and our Mx2M are additionally trained with pseudo labels on the target domain, following the procedure in Sec.<ref>. All comparison results for 3D segmentation are reported in Tab.<ref>. Mx2M gains (2D mIoU, 3D mIoU) improvements of (+5.4, +7.6) on average over the baseline xMUDA, which demonstrates the DA performance of our method. Specifically, on USA/Singapore the bare Mx2M even surpasses xMUDA with PL. On Day/Night, although the metric without PL looks ordinary, the result with PL shows a surprisingly large increase that approaches the upper limit. On A2D2/SemanticKITTI, Mx2M outperforms all methods on the 2D and 3D metrics, with an Avg mIoU only 0.9 lower than DsCML. Overall, Mx2M achieves state-of-the-art performance on most metrics. Some visual results are shown in Fig.<ref>, and more are provided in Sec.7. § CONCLUSION In this paper, we propose Mx2M, a method for domain adaptation in 3D semantic segmentation that uses masked cross-modality modeling to address the lack of supervision on the target domain and thereby reduce the large domain gap. Mx2M has two components: the core solution xMRP, which makes Mx2M adaptable to various scenarios and provides cross-modal self-supervision, and a new way of cross-modal feature matching named DxMF, which ensures that the whole method exploits more suitable 2D-3D complementarity when producing segmentation results. We achieve state-of-the-art performance on three DA scenarios for 3D segmentation: USA/Singapore, Day/Night, and A2D2/SemanticKITTI. Specifically, Mx2M with pseudo labels achieves (2D mIoU, 3D mIoU, Avg mIoU) of (67.4, 67.5, 67.4), (52.4, 56.3, 54.6), and (48.6, 53.0, 51.3) on the three scenarios, respectively. These results demonstrate the effectiveness of our method. § ABLATION STUDIES ON SCENARIOS OF DAY/NIGHT AND A2D2/SEMANTICKITTI We also conduct ablation studies on the Day/Night and A2D2/SemanticKITTI scenarios. As before, xMUDA <cit.> is selected as the baseline. We train the models for each Day/Night setting for 100k iterations with a batch size of 8, and for each A2D2/SemanticKITTI setting for 200k iterations with a batch size of 4 due to limited resources. The procedure largely follows that of USA/Singapore and is summarized below. §.§ Ablations on Day/Night We first validate the effectiveness of the masked cross-modality modeling strategy. The four hyper-parameters (p, mr, m_2D, m_3D) are set to (4, 0.15, 0.1, 0.1), the same as for USA/Singapore and for the same reason as in Sec.4.2.1 of the main body. As for the prediction heads, we start from the simplest linear layers. The mIoU results for (2D, 3D) are (47.3, 45.9), better than the (46.2, 44.2) of our baseline xMUDA. These results again demonstrate the significance of masked cross-modality modeling. We next explore the detailed settings. To determine suitable input settings for this scenario, we run ablations on (p, mr, m_2D, m_3D) in turn.
We start from (4, 0.15, 0.1, 0.1) and first vary p with the other values fixed; the 2D and 3D mIoU are shown in Tab.<ref>(a). The starting value, p=4, is the most suitable for this scenario. We next determine mr, with results in Tab.<ref>(b): the models perform best at mr=0.3. Finally, we determine m_2D and m_3D by evaluating many combinations of the two, detailed in Tab.<ref>. With the suitable values m_2D=0.1 and m_3D=0.3 we obtain (48.9, 48.5), giving the hyper-parameters (4, 0.3, 0.1, 0.3) for this scenario. The results of (48.9, 48.5) above use the simple linear layer. Following the conclusion in <cit.> that an MLP head helps, we compare a linear layer, a single MLP with 4096 middle channels, and two identical MLPs with 4096 middle channels, each used to predict both modalities; the results are shown in Tab.<ref>(c). A single MLP again works well for our DA task. Finally, DxMF is added to Mx2M, with results in Tab.<ref>(d): with DxMF, the mIoU reaches (49.7, 49.9) for the Day/Night scenario. §.§ Ablations on A2D2/SemanticKITTI The ablation procedure on A2D2/SemanticKITTI is the same as for Day/Night. First, the effectiveness of the masked cross-modality modeling strategy is validated: our results are (37.9, 44.3) versus (36.8, 43.3) for xMUDA <cit.>. Next, we determine the four hyper-parameters (p, mr, m_2D, m_3D) with the same procedure as above; the results are reported in Tab.<ref>(a), Tab.<ref>(b), and Tab.<ref>, and the values are set to (4, 0.25, 0.3, 0.1) for the A2D2/SemanticKITTI scenario. We then choose the prediction heads, reaching (41.5, 46.2) as shown in Tab.<ref>(c); the MLP works well again. Finally, we validate the effectiveness of DxMF, with results in Tab.<ref>(d): the mIoU reaches (44.6, 48.2) for A2D2/SemanticKITTI. § MORE VISUAL RESULTS ON THE MX2M As mentioned in Sec.4.4 of the main body, we provide more visual results in Fig.<ref>. From top to bottom, the images/points come from the A2D2/SemanticKITTI, USA/Singapore, and Day/Night scenarios, with every three rows belonging to the same scenario. Mx2M performs in a balanced way across all three scenarios.
http://arxiv.org/abs/2307.04185v1
20230709142436
Parton shower algorithm with saturation effect
[ "Yu Shi", "Shu-Yi Wei", "Jian Zhou" ]
hep-ph
[ "hep-ph" ]
Key Laboratory of Particle Physics and Particle Irradiation (MOE), Institute of Frontier and Interdisciplinary Science, Shandong University, Qingdao, Shandong 266237, China Key Laboratory of Particle Physics and Particle Irradiation (MOE), Institute of Frontier and Interdisciplinary Science, Shandong University, Qingdao, Shandong 266237, China Key Laboratory of Particle Physics and Particle Irradiation (MOE), Institute of Frontier and Interdisciplinary Science, Shandong University, Qingdao, Shandong 266237, China We extend the previously developed small x parton shower algorithm to include the kinematic constraint effect and k_t resummation effect. This work enables the Monte Carlo generator to simultaneously resum large k_t and small x logarithms in the saturation regime for the first time. It is an important step towards simulating processes involving multiple well separated hard scales, such as di-jet production in eA collisions at EIC. Parton shower algorithm with saturation effect Jian Zhou August 12, 2023 ============================================== § INTRODUCTION The study of dense gluonic matter at small x inside a large nucleus and nucleon has been and continues to be an important frontier of high-energy nuclear physics. It is also one of the main objectives of the physics program of the future Electron-Ion Collider (EIC) <cit.>. Tremendous theoretical efforts have been made to search for smoking gun evidence of saturation. To this end, hard scattering processes in eA collisions at EIC are expected to deliver crucial messages about how saturation emerges from strongly interacting gluonic matter. A Monte Carlo event generator that incorporates saturation effects could play an essential role in fully harnessing the potential of future experimental data taken from EIC. As the core of general purpose Monte Carlo event generators, parton showers describe successive radiations from highly-energetic partons that participate in the hard scattering process. While most parton branching algorithms <cit.> are based on the soft and collinear approximation which effectively resums the Dokshitzer-Gribov-Levin-Altarelli- Parisi (DGLAP) <cit.> like logarithm to all orders, only a few parton shower generators <cit.> have been developed to describe small x processes by simulating semi-hard emissions which give rise to the logarithm of the type ln (1/x) <cit.>. Among these generators, the Cascade <cit.> that is built on the Catani-Ciafaloni-Fiorani-Marchesini (CCFM) evolution equation <cit.> is the most widely used in the phenomenology studies (see for recent examples <cit.>). However, none of the aforementioned parton showers takes into account the gluon recombination process occurs in the dense target. The first attempt to include saturation effect in the parton shower is presented in Ref. <cit.> where both the forward and the backward evolution schemes have been presented. The underlying parton branching equation employed in our formulation is the folded Gribov-Levin-Ryskin (GLR) equation <cit.>. Although the GLR equation is somewhat outdated compared to modern treatments of small x evolution <cit.>, it is sufficient for simulating events in eA collisions at EIC energy. This is because the gluon density probed at EIC is not high enough for the triple pomeron vertex to dominate the gluon fusion process. 
In the previous work <cit.>, we performed a consistent check by comparing the transverse momentum distribution of exchanged gluons reconstructed from the parton shower generator with numerical solutions of the GLR equation. A full agreement between these two results was reached. The running coupling effect was also implemented in our Monte Carlo simulation. In the present work, we improve this parton branching algorithm by imposing the kinematic constraint arising from the requirement that the offshellness of t channel gluon should be dominated by its transverse momentum squared <cit.>. Though it is formally a sub-leading logarithm contribution, the kinematic constraint effect is known to significantly slow down the evolution speed. It is thus a necessary component of the Monte Carlo generator for any practical phenomenological studies. Actually, the angular ordering of soft emissions is automatically imposed once the kinematic constraint is applied since the angular ordering constraint is weaker than the latter <cit.> in the small x limit. The coherent branching effect is thus effectively included in the parton shower. On the other hand, for the case of hard scattering processes involving multiple well-separated hard scales, like di-jet production in eA collisions, the transverse momentumn dependent (TMD) type large logarithm α_s ln^2 (Q^2/k_⊥^2 ) and small x logarithm α_s ln(1/x) need to be simultaneously resummed. Such a joint resummation formalism has been established in a series of publications <cit.>. Another main objective of this work is to implement the joint resummation in the Monte Carlo simulation. The rest of the paper is organized as follows. In Sec. II, we discuss how to integrate the kinematic constraint effect into the parton shower algorithm. The formulations of both forward and backward evolution are presented. In Sec. III, the implementation of the joint resummation in the algorithm is discussed. Our starting point is the Sudakov factor derived from a folded version of the Collins-Soper (CS) and the renormalization group equation. It is shown that the k_⊥ distribution reconstructed from the parton shower is identical to the numerical and analytical results obtained from the CS equation and renormalization group equation. The paper is summarized in Sec. IV. § THE KINEMATIC CONSTRAINT In our previous work <cit.>, we have developed a Monte Carlo method to simulate the parton shower at small x based on the GLR evolution equation <cit.>. Our formulation only takes into account the summation of the leading logarithm ln(1/x) contribution which is known to result in too rapid growth of gluon number density towards small x region. From a phenomenological point of view, it is crucial to go beyond the leading logarithm accuracy and include the various sub-leading logarithm contributions <cit.>, among which the kinematic constraint effect <cit.> is a particularly interesting one. The kinematic constraint is required for the validity of the BFKL/GLR equation at small x. The constraint is needed to ensure that the virtuality of the gluons along the chain is controlled by the transverse momenta. The implementation of the kinematic constraint can significantly slow down the small x evolution and thus lead to a better description of relevant phenomenology. Note that the angular ordering of the gluon emissions is automatically satisfied once the kinematic constraint is imposed in the small x limit. 
The coherent branching effect is thus effectively achieved following the steps outlined below. The starting point of the Monte Carlo implementation for such an effect is the folded GLR equation with the kinematic constraint. Following the arguments made in Refs. <cit.>, the transverse momentum square of the radiated gluon l_⊥^2 must be smaller than 1-z/zk_⊥^2 where k_⊥ and z are transverse momentum and longitudinal momentum fraction carried by the daughter gluon respectively. The inclusion of the kinematic constraint leads to a modified GLR equation, ∂ N(η,)/∂η = α̅_s/π∫^2/^2 N ( η +ln[ k_⊥^2/ k_⊥^2+ l_⊥^2] , l_⊥+ k_⊥ ) - α̅_s/π∫_0^k_⊥^2 l_⊥/l_⊥^2 N(η,k_⊥) - α̅_s N^2( η,), with α̅_s=α_s N_c/π, η = ln(x_0/x) and x_0=0.01. The function N(η,) is related to the normal TMD gluon distribution G(η,k_⊥) through N(η,)=2α_s π^3/ N_c S_⊥ G(η,k_⊥) with S_⊥ being the transverse area of nucleon/nucleus. Converting the above equation to the folded form of the GLR equation, it reads, ∂/∂η N(x,)/Δ_ns (η , k_⊥) = α̅_s /π∫ _Λ_ cut ^2 l_⊥/l_⊥ ^2 N ( η +ln[ k_⊥^2/ k_⊥^2+ l_⊥^2] ,+ ) /Δ_ns (η , k_⊥). where Δ_ns (η , k_⊥) represents the probability of evolving from η_0 to η without resolvable branching. It is given by, Δ_ns (η , k_⊥) = exp{-α̅_s ∫^η_η_0 dη' [ lnk_⊥^2/Λ_ cut^2 +N(η',k_⊥) ] }, where the infrared cut off Λ_ cut is the matter of choice about what we classify as a resolvable emission. Emitted gluons with transverse momentum l_⊥<Λ_ cut are considered as the unresolvable ones. And their contribution has been combined with the virtual correction to cancel the infrared divergence. The resolvable branchings are defined as emissions above this range. All order contributions from the virtual correction and the unresolvable real emission are resummed into Δ_ns (η , k_⊥) which reduces to the non-Sudakov form factor <cit.> in the dilute limit by neglecting the saturation term. Eq. <ref> can be converted into an integral form, N(η ,)= N(η_0 ,) Δ_ns (η ,) + α̅_s /π∫^η _η_0 η^'Δ_ns (η , ) /Δ_ns ( η ^', ) ∫ _Λ_ cut ^2 l_⊥/l_⊥ ^2 N(η^'+ ln[ ^2/^2+ l_⊥^2], + ) . It is evident that the kinematic constrained small x equation is no longer a local equation. Namely, the increase of gluon number density at rapidity η is driven by the gluon distribution at rapidity η +ln[ k_⊥^2/ k_⊥^2+ l_⊥^2] rather than that at the same rapidity η. The corresponding weighting factor needs to be modified dramatically for the non-local case as shown below. §.§ Forward evolution With these derived folded evolution equations, we are now ready to introduce the Monte Carlo algorithm starting with the forward evolution case. For a given initial condition N(η_i,k_⊥,i), the first quantity to be generated by the algorithm is the value of η_i+1. As it has been done in <cit.>, this task can be achieved by solving the equation, ℛ = exp[- α̅_s ∫ ^η_i+1 _η_iη ^'( ln k_⊥,i ^2/Λ_ cut ^2 + N(η ^', k_⊥,i ) ) ], where ℛ is a random number distributed uniformly in the interval [0,1]. Throughout this paper, we always use R to denote such a random number. N(η ^', k_⊥,i) is pre-generated by numerically solving the GLR equation with the kinematic constraint. In contrast to the DGLAP evolution, the unitarity is not preserved during the course of the small x evolution. The number of gluons increases after each step of parton branching. The generated cascade thus needs to be re-weighted. 
For instance, if one neglects the saturation effect and kinematic constraint effect, the number of gluons which vanish due to the virtual correction and the unresolved branching is proportional to α̅_s ∫_Λ_ cut^k_⊥,i l_⊥^2/l_⊥^2, while the number of gluons produced via the real correction is proportional to α̅_s ∫_Λ_ cut^P_⊥^2 l_⊥/l_⊥^2 where P_⊥ is the UV cutoff, in the same rapidity interval. The weighting function is given by the ratio of these two contributions W ( k_⊥,i) =ln(P_⊥^2/Λ_ cut ^2) /ln(k_⊥,i^2/Λ_ cut ^2 ). It is quite non-trivial to work out the correct weighting factor when the kinematic constraint is implemented in the parton branching algorithm. Let us first discuss the derivation of the weighting factor for the case of the fixed boundary prescription. To work out the correct weighting coefficient, we first write down the expression for the fraction of gluons at [η_i+1, η_i+1+δη] that come form the branching between η_i+1 and η_i, δη∂/∂η_i+1 [ α̅_s/π∫_η_i^η_i+1η' ∫_Λ_ cut^2 l_⊥/l_⊥^2e^- α̅_s ∫_η_i^η'η [ lnk_⊥,i^2/Λ_ cut^2 +N(η,k_⊥,i) ] θ ( 1-z'/z' (k_⊥,i-l_⊥)^2-l_⊥^2 ) ] =δηα̅_s/π∫^ min [P_⊥,√((k_⊥,i-l_⊥)^2 1-z/z) ]_Λ_ cut^2 l_⊥/l_⊥^2e^- α̅_s ∫_η_i^η_i+1+ln(k_⊥,i-l_⊥)^2/(k_⊥,i-l_⊥)^2+l_⊥^2η [ lnk_⊥,i^2/Λ_ cut^2 +N(η,k_⊥,i) ] , with z'=x_i+1 /x'=exp [η'-η_i+1]. The kinematic constraint is imposed by the θ-function. Note that the term originating from the derivative acting on the integral boundary is equal to 0. The entire contribution comes from the derivative acting on the θ-function. Meanwhile the fraction of gluons that leave from the rapidity interval [η_i+1, η_i+1+δη] due to the virtual correction is, δη∂ e^- α̅_s ∫_η_i^η_i+1η [ lnk_⊥,i^2/Λ_ cut^2 +N(η,k_⊥,i) ] /∂η_i+1=-δηα̅_s [lnk_⊥,i^2/Λ_ cut^2 +N(η_i+1,k_⊥,i) ]e^- α̅_s ∫_η_i^η_i+1η [ lnk_⊥,i^2/Λ_ cut^2 +N(η,k_⊥,i) ] . For the non-local small x evolution, one also needs the input for gluon distribution beyond the small x boundary x_0=0.01. There are two common choices for the boundary conditions: i) the fixed boundary prescription, N(η<0, k_⊥)=0; ii) the frozen boundary prescription, N(η<0, k_⊥)=N(η=0, k_⊥). The weighting functions are thus different for different rapidity boundary prescriptions. For the fixed boundary prescription, the re-weighting function is given by W_kc,1(η_i,η_i+1;k_⊥,i) =(η_i+1-η_i) ∫^ min [P_⊥,√(1-z/z (k_⊥,i -l_⊥)^2 ) ] _Λ_ cut^2 l_⊥/l_⊥^2 e^- α̅_s ∫_η_i+1^η_i+1+ln(k_⊥,i -l_⊥)^2/(k_⊥,i -l_⊥)^2+l_⊥^2η [ lnk_⊥,i^2/Λ_ cut^2 +N(η,k_⊥,i) ] /(η_i+1-η_i)lnk_⊥,i^2/Λ_ cut^2 + ∫^η_i+1_η_i dη N(η, k_⊥,i). Here, the values of |l_⊥| and ϕ_l can be generated by solving the following equation R = 1/ Cα̅_s/π∫_Λ_ cut^l_⊥^2 l'_⊥/l_⊥'^2exp{ - α̅_s ∫_η_i^η_i+1+ln(k_⊥,i-l_⊥')^2/(k_⊥,i-l_⊥')^2+l_⊥'^2η [ lnk_⊥,i^2/Λ_ cut^2 +N(η,k_⊥,i) ] }, C = α̅_s/π∫_Λ_ cut^ min [P_⊥,√((k_⊥,i-l_⊥')^2 1-z/z) ]^2 l_⊥'/l_⊥'^2exp{ - α̅_s ∫_η_i^η_i+1+ln(k_⊥,i-l_⊥')^2/(k_⊥,i-l_⊥')^2+l_⊥'^2η [ lnk_⊥,i^2/Λ_ cut^2 +N(η,k_⊥,i) ] }, where R again is a random number and C is the normalization factor ensuring that the r.h.s. of Eq. <ref> resides in the region of [0, 1]. In the practical Monte Carlo implementation, a veto algorithm is used to be more efficient. Once |l_⊥| and ϕ_l are generated, l and k_⊥,i+1 then can be reconstructed subsequently. We repeat the procedure outlined above until η_i+1 reach a minimal cut-off value η_ min. Once the whole cascade is generated, we are able to reconstruct the gluon k_⊥ distribution at arbitrary rapidity. 
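To make the sampling steps concrete, here is a minimal numerical sketch of one forward step with the fixed-boundary prescription. It assumes a pre-tabulated interpolator `N(eta, kt)` for the kinematic-constrained solution, treats α̅_s, Λ_cut, and P_⊥ as fixed placeholder values, and replaces the full l_⊥ sampling equation by its simple dl²/l² proposal with a veto that keeps only the kinematic-constraint θ-function (the exponential factor in the equation above is omitted for brevity). All function names are hypothetical; this is an illustration of the equations, not the authors' code.

```python
import numpy as np

ALPHA_BAR, LAM_CUT, P_T = 0.2, 1.0, 30.0      # illustrative placeholder values

def no_branching_exponent(eta0, eta1, kmag, N, n_steps=64):
    """alpha_bar * int_{eta0}^{eta1} deta' [ ln(k^2/Lam_cut^2) + N(eta', k) ], trapezoid rule."""
    etas = np.linspace(eta0, eta1, n_steps)
    integrand = np.log(kmag**2 / LAM_CUT**2) + np.array([N(e, kmag) for e in etas])
    return ALPHA_BAR * np.trapz(integrand, etas)

def sample_next_eta(eta_i, kmag_i, N, eta_end=10.0):
    """Solve R = exp[-exponent] for eta_{i+1} by bisection (R uniform in [0,1])."""
    target = -np.log(np.random.rand())
    if no_branching_exponent(eta_i, eta_end, kmag_i, N) < target:
        return None            # no resolvable branching before the end of the evolution range
    lo, hi = eta_i, eta_end
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if no_branching_exponent(eta_i, mid, kmag_i, N) < target else (lo, mid)
    return 0.5 * (lo + hi)

def sample_emission(eta_i, eta_ip1, kt_i):
    """Draw l_T from d^2l/l^2 between the cutoffs and veto emissions violating the
    kinematic constraint l^2 < (1-z)/z * k_{i+1}^2, with z = exp(eta_i - eta_{i+1})."""
    z = np.exp(eta_i - eta_ip1)
    while True:
        lt = LAM_CUT * (P_T / LAM_CUT) ** np.random.rand()     # flat in ln(l^2)
        phi = 2.0 * np.pi * np.random.rand()
        l = lt * np.array([np.cos(phi), np.sin(phi)])
        k_next = kt_i - l                                      # daughter t-channel momentum
        if lt**2 < (1.0 - z) / z * np.dot(k_next, k_next):
            return l, k_next
```

Each accepted branching would then carry the re-weighting factor defined above before the next step is generated.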
For the frozen boundary case, the weighting factor has to be modified to 𝒲_kc,2(η_i, η_i+1;k_⊥,i, k_⊥,i+1) = (η_i+1-η_i) ln P_⊥^2/Λ_ cut^2/ (η_i+1-η_i)lnk_⊥,i^2/Λ_ cut^2 + ∫^η_i+1_η_i dη N(η, k_⊥,i ) N(η_i + ln[ k_⊥,i+1 ^2 / k_⊥,i+1 ^2 +l_⊥^2 ] , k_⊥,i )/ N(η_i ,k_⊥,i ) , and the radiated gluon transverse momentum l_⊥ is sampled solving the following equation R = 1/ Cα̅_s/π∫^l_⊥_Λ_ cut^2 l_⊥'/l_⊥'^2, where the normalization factor for this case is given by C = α̅_s/π∫^P_⊥_Λ_ cut^2 l_⊥'/l_⊥'^2. The k_⊥ distribution of the exchanged gluons that directly attaches to the hard part can be reconstructed from the forward evolution algorithm described above. Using the recipes described above, we are now ready to generate parton cascade. Following the conventional choice, we use the MV model <cit.> result as the initial condition at rapidity η_0=0. Since we are interested in simulating events such as di-jet production in eA collisions, it is suitable to utilize the Weiszäke-Williams (WW) gluon distribution as the initial condition <cit.>. It is given by N (η_0, k_⊥) = ∫d^2 r_⊥/2π e^-i k_⊥· r_⊥1/r_⊥^2(1- exp[-1/4 Q_s0^2 r_⊥^2 ln(e+1/Λ r_⊥) ] ), with Q_s0^2 = 1 GeV^2 and Λ= 0.24 GeV. We explored the behavior of the parton cascade with the both fixed boundary prescription and frozen boundary prescription. From Fig. <ref>, one can see that the k_⊥ distribution obtained from the forward approach is in perfect agreement with the numerical solutions of the kinematic constrained GLR equation for both boundary conditions. §.§ Backward evolution We now turn to discuss how to implement the kinematic constraint in the backward evolution which is far more efficient in generating initial state parton shower as compared to the forward approach. The rapidity η_i+1 of gluon participating hard scattering is fixed by external kinematics. k_⊥,i+1 at the rapidity η_i+1 can be sampled with the distribution N(η_i+1,k_⊥,i+1), which has to be determined beforehand by numerically solving the evolution equation. The next step is to generate η_i using a modified non-Sudakov form factor. The modified non-Sudakov form factor, Π_ns, can be related to the forward non-Sudakov form factor Δ_ns and the gluon distribution N as Π_ns (η_i+1, η_i; k_⊥,i+1)=Δ_ns(η_i+1,k_⊥,i+1) N(η_i,k_⊥,i+1)/Δ_ns(η_i,k_⊥,i+1) N(η_i+1,k_⊥,i+1), which looks similar to that derived in our previous work <cit.>. However, one has to keep in mind that the gluon distributions appearing in the above formula are obtained by solving the GLR equation with the kinematic constraint. On the other hand, the non-Sudakov factor can also be expressed as <cit.>, Π_ns (η_i+1, η_i; k_⊥,i+1) = exp [-α̅_s/π∫_η_i^η_i+1 η∫^P_⊥_Λ_ cut^2 l_⊥/l_⊥^2 N (η+ln[ k_⊥,i+1^2/ k_⊥,i+1^2+ l_⊥^2], k_⊥,i+1+ l_⊥ )/ N(η,k_⊥,i+1) ]. Both non-Sudakov form factors can be equally well used to generate η_i for a given η_i+1 by solving the following equation, R = Π_ns (η_i+1, η_i; k_⊥,i+1). The transverse momentum of the radiated gluon l_⊥ can be generated according to R = 1/ Cα̅_s /π∫_Λ_ cut^l_⊥^2 l_⊥'/l_⊥'^2 N ( η_i+1+ln[ k_⊥,i+1^2/ k_⊥,i+1^2+ l_⊥'^2] , k_⊥,i+1+ l_⊥' ), C = α̅_s /π∫_Λ_ cut^P_⊥^2 l_⊥'/l_⊥'^2 N ( η_i+1+ln[ k_⊥,i+1^2/ k_⊥,i+1^2+ l_⊥'^2] , k_⊥,i+1+ l_⊥' ). Once again, R is a random number, C is the normalization factor and a veto algorithm is employed in our practical implementation to make this sampling procedure more efficient. Similar to the forward evolution case, the generated event has to be re-weighted after each branching in the backward evolution method as well. 
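The backward step just described can be sketched in a few lines as well. The version below uses the explicit exponential form of Π_ns together with a pre-tabulated interpolator `N(eta, kt)` of the kinematic-constrained solution (depending only on the magnitude of k_⊥); α̅_s is fixed, the 2D integral over l_⊥ is evaluated on a crude log-polar grid, and the kinematic-constraint shift of the rapidity argument is kept explicit. It is a schematic illustration rather than the authors' implementation.

```python
import numpy as np

ALPHA_BAR, LAM_CUT, P_T = 0.2, 1.0, 30.0      # illustrative placeholder values

def pi_ns_exponent(eta_i, eta_ip1, kmag, N, n_eta=16, n_l=24, n_phi=12):
    """Exponent of Pi_ns: (alpha_bar/pi) int_{eta_i}^{eta_ip1} deta int d^2l/l^2
    N(eta + ln(k^2/(k^2+l^2)), |k+l|) / N(eta, k), on a crude log-polar grid."""
    k2 = kmag**2
    etas = np.linspace(eta_i, eta_ip1, n_eta)
    lts = np.exp(np.linspace(np.log(LAM_CUT), np.log(P_T), n_l))
    phis = 2.0 * np.pi * np.arange(n_phi) / n_phi
    acc = 0.0
    for eta in etas:
        for lt in lts:
            shift = np.log(k2 / (k2 + lt**2))          # kinematic-constraint rapidity shift
            for phi in phis:
                kl = np.sqrt(k2 + lt**2 + 2.0 * kmag * lt * np.cos(phi))
                acc += N(eta + shift, kl) / N(eta, kmag)
    cell = ((eta_ip1 - eta_i) / n_eta) * (np.log(P_T / LAM_CUT) / n_l) * (2.0 * np.pi / n_phi)
    return ALPHA_BAR / np.pi * acc * cell

def sample_eta_i(eta_ip1, kmag_ip1, N, eta_0=0.0):
    """Solve R = Pi_ns(eta_{i+1}, eta_i; k_{i+1}) for eta_i by bisection. Returning None
    means no branching above eta_0, so the chain is matched onto the initial condition."""
    target = -np.log(np.random.rand())
    if pi_ns_exponent(eta_0, eta_ip1, kmag_ip1, N) < target:
        return None
    lo, hi = eta_0, eta_ip1
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if pi_ns_exponent(mid, eta_ip1, kmag_ip1, N) < target else (mid, hi)
    return 0.5 * (lo + hi)

# Once eta_i is fixed, |l_T| and phi_l are drawn from the N-weighted d^2l/l^2 density
# (with a veto step in practice), the parent transverse momentum is reconstructed from
# k_{i+1} and l_T (the density above evaluates N at |k_{i+1}+l_T|), and the event is
# re-weighted with the backward weighting factor given in the text below.
```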
It is important to notice that the GLR equation with the kinematic constraint is a non-local evolution equation when deriving the weighting factor. The weighting factor associated with backward evolution is the ratio of the fraction of gluons that appear from branching at the rapidity η_i+ ln k_⊥,i+1^2 / k_⊥,i+1^2 + l_⊥^2 and the fraction of gluons that vanish at the rapidity η_i due to the virtual correction and the fusion process. It reads, 𝒲_kc,back(η_i+1, η_i; k_⊥,i+1) = (η_i+1-η_i)lnk_⊥,i^2/Λ_ cut^2 + ∫^η_i+1_η_i dη N(η, k_⊥,i ) / (η_i+1-η_i) ln P_⊥^2/Λ_ cut^2 N(η_i ,k_⊥,i ) / N(η_i + ln[ k_⊥,i+1 ^2 / k_⊥,i+1 ^2 +l_⊥^2 ] , k_⊥,i ) . The procedure outlined above is repeated until η_i is smaller than η_0. The last step of the simulation is to construct four momenta of the radiated gluons. Note that the minus component of the t-channel gluon's four momentum can only be reconstructed after the full cascade has been generated. By going from the last t-channel gluon (closest to the nucleus), which has the vanishing minus component, forward in the cascade to the hard scattering process, the true minus component of the t-channel gluons are constructed. In Fig. <ref>, we compare gluon k_⊥ distribution at different rapidities generated from backward evolution to the numerical solutions of the GLR equation with the kinematic constraint. The perfect match between gluon k_⊥ distributions obtained from the backward approach and by numerically solving the kinematic constrained GLR has been found. § K_T RESUMMATION IN THE SMALL X LIMIT Our ultimate goal is to build a parton shower generator for simulating events in eA collisions at EIC. The hard scattering processes occurring in eA collisions often involve multiple scales. For instance, loosely speaking, there are three well separated scales in the back-to-back di-jet production: the center mass of energy √(s), the invariant mass of the di-jet Q, and the total transverse momentum of the di-jet system k_⊥. To improve the convergence of the pertubative series, the two type large logarithms α_s ln(s/Q^2) and α_sln^2 (Q^2/k_⊥^2) arise in the high order calculations of the di-jet production cross section have to be summed to all orders. The summation of the logarithm contribution α_s ln(s/Q^2) is achieved by solving the small x evolution equation, while the logarithm contribution α_sln^2 (Q^2/k_⊥^2) can be resummed by means of the CS equation. A unified framework that allows us to resum both large logarithms simultaneously in a consistent way have been developed in a sequence of papers <cit.>. The evolved small x gluon TMD can be expressed as the convolution of the Sudakov form factor and the renormalized dipole amplitudes. It has been stressed in Refs. <cit.> that at small x, gluon TMDs only can be matched onto dipole scattering amplitudes rather than the normal gluon PDFs in the collinear factorization. We notice that such a joint resummation formalism has been studied in the various different context <cit.>. To simulate hard scattering processes involving multiple scales in a parton shower generator, it is necessary to develop a Monte Carlo branching algorithm to effectively resum both types of logarithms through an iteration procedure. The essential observation that enables the computer implementation of the joint resummation is described as the following. In the backward approach, the evolution starts from the final t-channel gluon with the most negative virtual mass-squared, which participates in the hard process. 
As a parton cascade develops towards the backward direction, the virtual mass of the t-channel gluon decreases by radiating soft gluons with the longitudinal momentum fraction 1-z → 0. This first stage of the evolution is described by the CS equation and the renormalization group equation which resum the double leading k_t logarithm and the single leading k_t logarithm respectively. When the virtual mass of the t-channel gluon goes down to the scale which is of the order of saturation scale, we should perform the small x evolution. The precise value of this scale should be fixed by fitting the output of the cascade to the experimental data. During the course of the small x evolution, the virtual mass of the t-channel gluon stops monotonously decreasing, whereas its longitudinal momentum fraction increases rapidly until the small x evolution initial boundary is reached. In this second stage of the evolution, the development of parton cascade is mainly driven by the radiated gluons that carry the large longitudinal momentum fraction 1-z → 1. Therefore, the Monte Carlo algorithm based on the GLR equation should be applied to generate the parton branching at this stage. To simulate the first stage of the evolution, our primary task is to derive a folded version of the CS equation and the renormalization group equation. To this end, we write down the CS equation in the momentum space, ∂ N(μ^2,ζ^2,η,k_⊥) /∂lnζ^2=α̅_s /2 π∫^ζ_0 d^2 l_⊥/l_⊥^2 [ N(μ^2,ζ^2,η, k_⊥+l_⊥) -N(μ^2,ζ^2,η, k_⊥) ] . which can be converted into the conventional expression of the CS equation <cit.> after making the Fourier transform up to the leading logarithm accuracy. Here, μ is the factorization scale, and ζ is a scale introduced to regularize the light cone divergence. The factorization scale dependence of the gluon TMD in the saturation regime is described by the normal renormalization group equation <cit.>, ∂ N(μ^2,ζ^2,η,k_⊥) /∂lnμ^2= α̅_s [β_0 -1/2lnζ^2/μ^2 ]N(μ^2,ζ^2,η,k_⊥) . with β_0=11/12-N_f/6N_c and N_f=3 in this work. By choosing the factorization scale μ to be ζ, one can combine the CS equation and the renormalization group equation together. The combined evolution equation reads, ∂ N(Q^2,η,k_⊥) /∂ln Q^2=α̅_s /2 π∫^Q_0 d^2 l_⊥/l_⊥^2 [ N(Q^2,η, k_⊥+l_⊥)- N(Q^2 ,η, k_⊥) ] + α̅_s β_0 N(Q^2,η, k_⊥), where N(Q^2,η,k_⊥)≡ N(μ^2=Q^2,ζ^2=Q^2,η,k_⊥). Following the standard procedure, the above evolution equation can be cast into a folded equation, ∂/∂ln Q^2N(Q^2,η,k_⊥)/Δ_s(Q^2)=α̅_s /2 π∫_Λ_ cut^Q d^2 l_⊥/l_⊥^2N(Q^2,η, k_⊥+l_⊥) /Δ_s(Q^2), with the Sudakov form factor being given by, Δ_s(Q^2)=exp[ - ∫_ Q_0^2^ Q^2dt/tα̅_s (t)/2 ( lnt/Λ_ cut^2-2β_0 ) ]. The Sudakov form factor is simply the probability of evolving from Q_0 to Q without branching. Eq. <ref> can be integrated to give an integral equation for N(Q^2,η,k_⊥) in terms of the gluon TMD at the initial scale Q_0: N(Q^2,η,k_⊥) =N(Q_0^2,η,k_⊥) Δ_s(Q^2) + ∫ ^Q^2 _Q^2_0dt/tΔ_s(Q^2)/Δ_s(t)α̅_s (t)/2 π∫_Λ_ cut^Q d^2 l_⊥/l_⊥^2N(t,η, k_⊥+l_⊥). With the derived folded CS and renormalization group equation, we are ready to introduce the Monte Carlo implementation of the k_t resummation formulated in the framework of the CGC effective theory. §.§ Forward evolution To have a consistency check, we first present the formulation of the forward evolution scheme. The combined CS and renormalization group equation can be solved using the forward evolution approach. We lay out the main procedures in the following. 
For a given virtuality scale Q_i, either after several steps of evolution or at the initial condition, we first generate the value of a higher virtuality scale Q_i+1, where the next branching occurs. Following the conventional method, this can be achieved by solving the following equation, ℛ=exp[ - ∫_ Q_i^2^ Q_i+1^2dt/tα̅_s(t) (1/2lnt/Λ_ cut^2-β_0 ) ]. where the argument of the running coupling α_s is simply chosen to be the virtual mass squared. Once Q_i+1 is generated, the transverse momentum of the radiated gluon, l_⊥,i+1, can be determined according to the following equation ℛ = 1/ C∫_Λ_ cut^l_⊥,i+1 d^2 l^'_⊥/l^'_⊥^2, where the normalization factor reads C = ∫_Λ_ cut^Q_i+1 d^2 l^'_⊥/l^'_⊥^2. The four momenta of the radiated gluon and the t-channel gluon can be determined from the momentum conservation and the on-shell condition. We will discuss the reconstruction of kinematics in more details in the next subsection. The generated cascade needs to be re-weighted. This is because that the unitary is no longer preserved beyond the leading double logarithm approximation. We have included the leading single logarithm contribution in the algorithm employed here, which leads to the increase of gluon number density after each splitting. The weighting factor is given by, W _ CS ( Q^2_i+1, Q^2_i) = ∫ ^Q_i+1^2_Q_i^2dt/tα_s(t) lnt/Λ_ cut^2/∫ ^Q_i+1^2_Q_i^2dt/tα_s(t)[ lnt/Λ_ cut^2 - 2β_0 ] . If the single logarithm contribution associated with the β_0 term in the denominator is neglected, the weighting factor reduces to 1. With these re-weighted parton cascades, one can reconstruct the t-channel gluon k_⊥ distribution at different scales and compare with the analytical and numerical solutions of Eq. <ref>. It is straightforward to numerically solve Eq. <ref>, while the analytical solution of Eq. <ref> can also be easily obtained in the impact parameter space. After Fourier transforming back to the momentum space, the evolved gloun TMD distribution reads, N(Q^2,η, k_⊥)=∫d^2 b_⊥/(2π)^2 e^i k_⊥· b_⊥ e^-S(μ_b^2, Q^2)∫ d^2 l_⊥ e^-i l_⊥· b_⊥ N(η, l_⊥) , where N(η, l_⊥) is the gluon distribution evolved with the GLR equation, or the initial condition computed in the MV model. The Sudakov factor at one loop level in the impact parameter (b_⊥) space consists of a perturbative part and a non-perturbative part. It is given by S(μ_b^2,Q^2)= S_pert(μ_b*^2,Q^2) +S_NP (b_⊥^2, Q^2). The perturbative Sudakov factor reads S_pert(μ_b*^2,Q^2) = N_c/2π∫^Q^2_μ_b*^2dμ^2/μ^2α_s(μ) [ lnQ^2/μ^2 - 2 β_0 ], where μ_b*^2 is defined as μ_b*^2=4e^-2γ_E/b_⊥*^2, with b_⊥ *=b_⊥/√(1+b_⊥^2/b^2_max) and b_max=1.5 GeV^-1. To compare with the Monte Carlo result on the same footing, we simply neglect the non-perturbative Sudakov factor S_NP in the numerical calculation. The behaviour at large b_⊥ is regulated by N(η, b_⊥) which is the Fourier transform of N(η,l_⊥). In this work, we use the one-loop running coupling which reads α_s (μ^2) = 1/β_ 0N_c/πln (μ^2/Λ^2_ QCD ), with Λ^2_ QCD = 0.0578 GeV^2. We present the t-channel gluon k_⊥ distribution constructed from the generated parton cascade and compare it with the numerical solution of the CS-renormalization group equation for the fixed coupling case in the left panel of Fig. <ref>. In our estimation, the MV model is employed to provide with the gluon distribution at the initial scale Q_0=3 GeV. In the formulation of TMD evolution, all soft-radiated gluons carry exactly zero longitudinal momentum fraction. 
In contrast, all radiated soft gluons carry finite longitudinal momentum fraction in the parton branching algorithm. This presents an important advantage of the Monte Carlo method comparing with the conventional analytical approach. Keeping longitudinal momentum conservation exactly in parton splitting process is often crucial to correctly account for phenomenology near the threshold region <cit.>. However, to make the comparisons in a consistent way, we didn't change the longitudinal momentum fraction of the t-channel gluon after each branching in our algorithm. In the right panel of Fig. <ref>, we compare the Monte Carlo simulation result with both the numerical solution of the CS-renormalization equation and the analytical solution for the running coupling case at the scale Q=13 GeV. It is clear to see from the right panel of Fig. <ref> that our algorithm yields the same k_⊥ distribution as the numerical result. On the other hand, it differs from the analytical approach result. Such discrepancy is expected because the non-perturbative part of the CS kernel is treated differently in the analytical approach. In addition, the argument of the running coupling used in the parton branching algorithm and the numerical solution is the hard scale Q, whereas the scale of running coupling is μ_b in the analytical approach. Since the analytical result can describe the relevant phenomenology very well, one should use it as guidance to model the non-perturbative part of the Sudakov factor which will be introduced in the Monte Carlo algorithm in the future work. Alternatively, one could also use a relatively large infrared cutoff value Λ_ cut to mimic the effect of the non-perturbative Sudakov factor. We leave this for a future study. §.§ Backward evolution In this subsection, we will outline the essential steps of Monte Carlo implementation for the backward evolution based on the folded CS-renormalization group evolution equation. Unlike the forward evolution which can be considered as a way of solving the evolution equation, the evolved parton distributions have to be pre-generated and are used to guide the backward evolution. In the most parton branching algorithm, the k_t resummation is achieved by using the modified Sudakov factor incorporating the collinear Parton Distributions Functions (PDFs). However, in the saturation regime, the k_t resummation has to be formulated in terms of the unintegrated gluon distribution. The main procedures are summarized as follows. The modified Sudakov factor in the backward evolution approach is different from that in the forward evolution approach. It reads Π_s(Q_i+1,Q_i; k_⊥,i+1)=Δ_s( Q_i+1^2) N(Q^2_i, η, k_⊥,i+1) /Δ_s( Q_i^2) N(Q^2_i+1, η, k_⊥,i+1). An alternative way to compute the modified Sudakov factor is given by Π_s(Q_i+1,Q_i; k_⊥,i+1) = exp [ - ∫_Q_i^2^Q_i+1^2d t /tα̅_s(t)/2π∫^√(t)_Λ_ cut d^2 l_⊥/l_⊥^2 N(t,η, k_⊥,i+1+l_⊥)/N(t,η, k_⊥,i+1)]. It describes the probability for gluon evolving backward from Q_i+1 to Q_i without branching. The transverse momentum dependent gluon distribution appearing in Eq. <ref> and Eq. <ref> has to be pre-generated by numerically solving the combined CS-renormalization group equation. The backward evolution starts from the t-channel gluon with the highest virtuality Q_i. The hard scale of the partonic scattering process is denoted as Q_i+1. 
We first have to sample k_⊥, i+1 according to the following distribution ℛ = 1/ C∫ ^k_⊥,i+1_Λ_ cutd^2k^'_⊥ N(Q_i+1^2,η,k_⊥^'), with C = ∫ ^Q_i+1_Λ_ cut d^2 k^'_ ⊥ N(Q_i+1^2,η,k_⊥^') being the normalization factor. The rapidity η is fixed by external kinematics. The next quantity to be generated by the parton cascade algorithm is the value of virtuality Q_i. Following the standard backward evolution strategy, Q_i is obtained using the backward type Sudakov factor. We can sample a Q_i by solving the following equation, R = Π_s(Q_i+1,Q_i; k_⊥,i+1). As the virtual mass of (i+1)th t-channel gluon, Q_i also serves as the hard probe scale at which the ith t-channel gluon's transverse momentum is measured. The transverse momentum of the radiated gluon l_⊥,i is thus sampled solving the following equation ℛ = 1/ C∫ ^l_⊥,i_Λ_ cutd^2l'_⊥/l'^2_⊥N(Q_i^2,η,k_⊥,i+1 +l'_⊥), C = ∫ ^Q_i_Λ_ cutd^2l'_⊥/l'^2_⊥N(Q_i^2,η,k_⊥,i+1+l'_⊥) . The longitudinal momentum fraction of the radiated gluon is determined through the on-shell condition, |Q_i^2|≈z_i l_⊥,i^2/1-z_i+|k_⊥,i+1^2|, which is valid in the strong ordering region |Q_i-1^2 | ≪ |Q_i^2| ≪ |Q_i+1^2|. The minus component of the emitted gluon can be fixed accordingly. The ith t-channel gluon's transverse momentum is trivially obtained: k_⊥,i=k_⊥,i+1-l_⊥,i. The virtual mass Q_i-1 of the ith t-channel gluon is computed with Eq. <ref>. However, t-channel gluons' four momenta can be determined only after the whole cascade is generated. The minus component of the t-channel gluon that is directly attached to nuclear target is set to be 0. From this initial condition, the four momenta of t-channel gluons are retrospectively reconstructed by momentum conservation. As argued in the previous subsection, the generated event has to be re-weighted after each branching since the unitary is not preserved in the single leading logarithm accuracy level. In the backward evolution approach, the re-weighting function reads W _ CS, back ( Q^2_i+1, Q^2_i) = ∫ ^Q_i+1^2_Q_i^2dt/tα_s(t) [ lnt/Λ_ cut^2- 2β_0 ] /∫ ^Q_i+1^2_Q_i^2dt/tα_s(t) lnt/Λ_ cut^2 . We repeat the procedure outlined above until Q_i^2 reach a minimal cut-off scale at which TMD evolution stops. The TMD evolution is driven by the soft gluon radiations which carry the vanishing longitudinal momentum fraction 1-z_i → 0. In the practical Monte Carlo implementation, the cut-off is chosen to be |Q_i^2|>|l_⊥,i^2|+|k_⊥,i+1^2|, or equivalently z_i>0.5. Meanwhile, |Q_i^2| is also required to be larger than the satuartion scale Q_s^2. If these two conditions can not be met simultaneously, we terminate the TMD evolution, and start the backward small x evolution. We test the backward evolution algorithm against the numerical method as shown in Fig. <ref>. The MV model result is applied at the initial scale Q_0=3 GeV. The gluon k_⊥ distribution at high scale Q=13 GeV is obtained by numerically solving the combined CS-renormalization group equation. The cascade is generated starting from the scale Q=13 GeV and evolve down to the initial scale with the backward approach. The t-channel gluon k_⊥ distribution reconstructed from the cascade is compared with the numerical results at different scales. Gluon k_⊥ distributions are presented in the left panel of Fig. <ref> for the fixed coupling case, and in the right panel of Fig. <ref> for the running coupling case. It is evident that the k_⊥ distributions obtained from the Monte Carlo method is the same as the numerical results. 
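The backward Sudakov stage described in this subsection can likewise be summarized in a compact sketch. The code below assumes a pre-tabulated interpolator `N(Q2, kt)` of the combined CS-renormalization-group solution at the fixed rapidity of the hard process, uses the one-loop running coupling quoted above, evaluates the Sudakov integral numerically, and assumes Π_s is monotonic in Q_i². The Λ_cut value, function names, and the simple dl²/l² proposal (in practice weighted by N with a veto) are placeholders; this is an illustration, not the actual generator.

```python
import numpy as np

NC, NF = 3.0, 3.0
BETA0 = 11.0 / 12.0 - NF / (6.0 * NC)
LAM2_QCD, LAM_CUT = 0.0578, 1.0               # GeV^2, GeV (Lam_cut is a placeholder choice)

def alpha_bar(t):
    """One-loop alpha_bar_s(t) = alpha_s(t) N_c/pi = 1 / (beta0 ln(t/Lambda_QCD^2))."""
    return 1.0 / (BETA0 * np.log(t / LAM2_QCD))

def sudakov_exponent(q2_lo, q2_hi, n=200):
    """int_{q2_lo}^{q2_hi} dt/t * alpha_bar(t)/2 * [ ln(t/Lam_cut^2) - 2*beta0 ]."""
    logt = np.linspace(np.log(q2_lo), np.log(q2_hi), n)
    t = np.exp(logt)
    integrand = 0.5 * alpha_bar(t) * (np.log(t / LAM_CUT**2) - 2.0 * BETA0)
    return np.trapz(integrand, logt)

def backward_sudakov_step(q2_ip1, kt_ip1, N, q2_min):
    """One backward TMD step: sample Q_i^2 from R = Pi_s, then a trial emission l_T,
    then z from the on-shell relation |Q_i^2| ~ z l^2/(1-z) + k_{i+1}^2."""
    log_r = np.log(np.random.rand())

    def log_pi_s(q2_i):   # ln Pi_s = -S(Q_i^2 -> Q_{i+1}^2) + ln N(Q_i^2,k) - ln N(Q_{i+1}^2,k)
        return (-sudakov_exponent(q2_i, q2_ip1)
                + np.log(N(q2_i, kt_ip1)) - np.log(N(q2_ip1, kt_ip1)))

    if log_pi_s(q2_min) > log_r:
        return None                                # no branching above q2_min: the TMD stage ends
    lo, hi = q2_min, q2_ip1
    for _ in range(60):
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if log_pi_s(mid) < log_r else (lo, mid)
    q2_i = np.sqrt(lo * hi)

    lt = LAM_CUT * (np.sqrt(q2_i) / LAM_CUT) ** np.random.rand()  # dl^2/l^2 proposal only
    a = max(q2_i - kt_ip1**2, 0.0)
    z = a / (a + lt**2)                            # from |Q_i^2| ~ z l^2/(1-z) + k_{i+1}^2
    return q2_i, lt, z
```

In this reading, the step would be repeated until the conditions quoted above (|Q_i²| > |l_⊥,i²| + |k_⊥,i+1²|, i.e. z_i > 0.5, and |Q_i²| larger than the saturation scale) can no longer be met, at which point the generator switches to the backward small x evolution.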
We conclude that the backward evolution algorithm passes this consistency check as expected. § CONCLUSION In this work, we extended the small x initial state parton branching algorithm developed in the previous paper to include the kinematic constraint effect. In the small x limit, the kinematic constraint suppresses soft gluon emissions more strongly than the angular ordering along the chain does. The coherent branching effect is thus effectively implemented in the parton branching algorithm once the kinematic constraint is imposed. This is a nontrivial extension in the sense that the weighting factor and the way of sampling the radiated gluon's transverse momentum are drastically altered. The t-channel gluon k_⊥ distributions constructed from both the forward scheme and the backward scheme are shown to reproduce the numerical solutions of the kinematic constrained GLR equation. We also formulated a parton branching algorithm that enables us to resum large k_t logarithms and small x logarithms following a two-step evolution picture. In the backward approach, the cascade first develops by radiating soft gluons that carry vanishing longitudinal momentum fractions. At this first stage of the evolution, the parton branching is simulated with the Sudakov factor that we obtained from the folded CS equation and the renormalization group equation. The transverse momentum dependent gluon distribution, rather than the gluon PDF, is used to guide the evolution path toward the most populated regions of (Q^2, k_⊥). When the virtual mass of the t-channel gluon is dominated by its transverse momentum, or is of the order of the saturation scale, the parton branching starts being generated according to the non-Sudakov form factor derived from the small x evolution equation. The joint k_t and small x resummation has thus been achieved in the Monte Carlo simulation by implementing this two-step evolution. Our study represents an important step towards practical applications of the parton shower generator in simulating scattering processes that involve multiple well-separated hard scales, such as di-jet production in eA collisions at EIC. The next step is to construct a full hadron-level Monte Carlo generator, with the hadronization performed using multi-purpose generators such as PYTHIA <cit.>. We also plan to integrate the algorithm into the eHIJING framework <cit.>, aiming at the simulation of events in eA collisions over the whole x range accessible at EIC. Acknowledgments: We thank Hai-tao Li and Shan-shan Cao for helpful discussions. This work has been supported by the National Natural Science Foundation of China under Grant No. 1217511. Y.S. is supported by the China Postdoctoral Science Foundation under Grant No. 2022M720082. S.Y.W. is also supported by the Taishan fellowship of Shandong Province for junior scientists.
http://arxiv.org/abs/2307.04752v1
20230710175622
Ogg's Torsion conjecture: Fifty years later
[ "Jennifer S. Balakrishnan", "Barry Mazur" ]
math.NT
[ "math.NT", "math.AG" ]
Ogg's Torsion conjecture: Fifty years later 10pt Jennifer S. Balakrishnan and Barry Mazur (with an appendix by Netan Dogra) 10pt Andrew Ogg's mathematical viewpoint has inspired an increasingly broad array of results and conjectures. His results and conjectures have earmarked fruitful turning points in our subject, and his influence has been such a gift to all of us. Ogg's celebrated Torsion Conjecture—as it relates to modular curves—can be paraphrased as saying that rational points (on the modular curves that parametrize torsion points on elliptic curves) exist if and only if there is a good geometric reason for them to exist.[B.M.: And here's just one (tiny) instance of Ogg's jovial and joyful way of thinking: As Tate and I recorded in one of our papers <cit.>: “Ogg passed through our town" and mentioned that he had discovered a point of order 19 on the Jacobian of X_1(13) allowing us to feel that that Jacobian was “not entitled to have" more than 19 points." ] 10pt < g r a p h i c s > § AN OVERVIEW Let K be a number field, and denote by G_K its absolute Galois group, i.e. G_K (K̅/K). A basic question in the arithmetic of abelian varieties over number fields is to classify (up to the natural notion of isomorphism) pairs (A; Cα↪A(K̅)) where * A is a (polarized) abelian variety defined over K, * C is a finite abelian group with a G_K-action, and * α is a G_K-equivariant injection. These are the three basic parameters in this general question, and you have your choice of how you want to choose the range of each of them. For example, you can: * allow the “C"s to run through all cyclic finite groups with arbitrary G_K-action; and A to range through all abelian varieties with a specified type of polarization. Equivalently, you are asking about K-rational cyclic isogenies of abelian varieties, or * restrict to finite “C"s with trivial G_K-action, in which case you are asking about K-rational torsion points on abelian varieties. * You might also vary over a class of number fields K—e.g., number fields that are of a fixed degree d over a given number field k, * and, of course, fix the dimension of the abelian varieties you are considering. § `GEOMETRIZATION' OF THE PROBLEM If you organize your parameters appropriately you can `geometrize' your classification problem by recasting it as the problem of finding K-rational points on a specific algebraic variety. In more technical vocabulary: you've framed a representable moduli problem—and the algebraic variety in question is called the moduli space representing that moduli problem. § SOME CLASSICAL EXAMPLES—MODULAR CURVES Fixing N a positive integer and sticking to elliptic curves, the moduli spaces for rational torsion points or cyclic isogenies are smooth curves defined over : torsion points of order N: Y_1(N) ⟶ X_1(N)[d] cyclic isogenies of degree N: Y_0(N) ⟶ X_0(N) The elliptic curves defined over K possessing a K-rational point of order N are classified by the K-rational points of the affine curve Y_1(N)—and X_1(N) is the smooth projective completion of Y_1(N) given by the adjunction of a finite set of `cusps'. And similarly, the classification of elliptic curves defined over K possessing a K-rational cyclic isogeny of degree N is given by the K-rational points of the affine curve Y_0(N)—with X_0(N) being the corresponding smooth projective completion. 30pt § THE GEOMETRIC FORMULATION COMES WITH A NUMBER OF SIDE-BENEFITS. 
Here are two: * If, say, the curve X_0(N) is of genus 0—noting that one of the cusps (∞) is defined over , it follows that there is a rational parametrization of that curve over which gives us a systematic account (and parametrization); that is, a K-rational parametrization of cyclic N-isogenies of elliptic curves—for any K. * If it is of genus greater than 0, one has a -rational embedding (sending the cusp ∞ to the origin) X_0(N) ↪ J_0(N) of the curve in its Jacobian, which allows us to relate questions about K-rational cyclic N-isogenies to questions about the Mordell-Weil group (of K-rational points) of the abelian variety J_0(N). Besides being able to apply all these resources of Diophantine techniques, there are the simple constructions that are easy to take advantage of. For example, if you have a `moduli space' ℳ whose K-rational points for every number field K provides a classification of your problem over K, then, say, for any prime p the set of K-rational points of the algebraic variety that is the p-th symmetric power of ℳ —denoted ^p(ℳ)— essentially classifies the same problem ranging over all extensions of K of degree p. As an illustration of this, consider cyclic isogenies of degree N and noting that the natural -rational mapping ^p(X_0(N)) ⟶ J_0(N) given by (x_1,x_2,…, x_p) ↦ Divisor class of [∑_ix_i - p·∞] has linear spaces as fibers, we get that the classification problem of all cyclic N-isogenies of elliptic curves over all number fields of degree p is geometrically related, again, to the Mordell-Weil group of J_0(N) over . A particularly nice example of this strategy carried out in the case of the symmetric square ^2 of Bring's curve is given in the appendix by Netan Dogra. Bring's curve is the smooth projective genus 4 curve in ℙ^4 given as the common zeros of the following system of equations: x_1 + x_2 + x_3 + x_4 + x_5 = 0 x_1^2 + x_2^2+ x_3^2 + x_4^2 + x_5^2 = 0 x_1^3 + x_2^3+ x_3^3 + x_4^3 + x_5^3 = 0. It has no real points and thus no rational points. However, there are a number of points defined over (i), such as (1: i: -1: - i: 0). A natural question is thus if one can find all quadratic points on Bring's curve. Dogra proves that all quadratic points are defined over (i) and produces the complete list of (i)-rational points. Siksek gave a symmetric Chabauty method <cit.>, a variant of the Chabauty–Coleman method (see Section <ref>) for symmetric powers of curves. Symmetric Chabauty has been used and extended in various ways to determine quadratic points on numerous modular curves X_0(N) <cit.>. Box, Gajović, and Goodman <cit.> further developed a “partially relative” symmetric Chabauty method to study cubic and quartic points on several modular curves X_0(N). 
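As a quick sanity check on the points just mentioned (a minimal plain-Python sketch of ours, not part of Dogra's argument; the helper names gmul, gpow, and power_sum are our own), one can verify with exact Gaussian-integer arithmetic that (1 : i : -1 : -i : 0) and its conjugate satisfy the three defining equations of Bring's curve:

# Exact Gaussian-integer check (no external libraries) that (1 : i : -1 : -i : 0)
# and its conjugate satisfy the three defining equations of Bring's curve.

def gmul(z, w):
    # multiply a + b*i by c + d*i, with Gaussian integers encoded as pairs (a, b)
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

def gpow(z, n):
    r = (1, 0)
    for _ in range(n):
        r = gmul(r, z)
    return r

def power_sum(point, n):
    # sum of the n-th powers of the homogeneous coordinates
    total = (0, 0)
    for z in point:
        zn = gpow(z, n)
        total = (total[0] + zn[0], total[1] + zn[1])
    return total

one, i, zero = (1, 0), (0, 1), (0, 0)
for P in [(one, i, (-1, 0), (0, -1), zero),    # (1 :  i : -1 : -i : 0)
          (one, (0, -1), (-1, 0), i, zero)]:   # (1 : -i : -1 :  i : 0)
    print(P, [power_sum(P, n) == (0, 0) for n in (1, 2, 3)])   # expect [True, True, True]

Of course, any permutation of the coordinates of these points again lies on the curve, since the defining equations are symmetric; the content of the appendix is that, up to this S_5-action, these are the only quadratic points.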
§ ANDREW OGG'S TORSION CONJECTURE(S) (1973) Torsion in algebraic groups—even if not in that vocabulary—has played a fundamental role since Gauss's Disquisitiones Arithmeticae (1801), the structure of roots of unity (torsion in the multiplicative group) being a central concern in the development of modern number theory.[See Umberto Zannier's expository article <cit.>, Torsion in algebraic groups and problems which arise.]Andrew's Torsion Conjectures taken in broad terms can be formulated in terms of “the geometrization(s)," as just described—i.e., in terms of -rational points of modular curves—and the Mordell-Weil groups of abelian varieties (i.e., of their Jacobians): §.§ -Rational torsion Conjecture 1 (Ogg): An isomorphism class {C} of finite groups occurs as the torsion subgroup of the Mordell-Weil group of some elliptic curve (defined over ) if and only if the modular curve that classifies this problem is of genus zero[ A form of this conjecture was made by Beppo Levi in his 1908 ICM address in Rome. See <cit.> which gives a wonderful account of the story of Beppo Levi's engagement with (and his important results about) the arithmetic of elliptic curves—all this being even before Mordell proved that the group of rational points of an elliptic curve over is finitely generated. Levi considers the tactic of producing multiples of a rational point on an elliptic curves {n· P} n=1,2,3,… a “failure" if it loops finitely—i.e., if P is a torsion point; his aim is to classify such “failures." ] . Put in another way: an isomorphism class occurs if and only it is expected to occur; i.e., if it necessarily occurs, as a consequence of the ambient geometry—this view being a continuing guiding inspiration for number theory.By `geometry' one means the (algebraic) geometry of the curve X_0(N). For example, Andrew's article <cit.> discusses the curious case of X_0(37) which has two noncuspical -rational points, these being the images of the hyperelliptic involution (a non-modular involution) applied to the two cusps, both cusps being -rational[ See Section <ref> below.]. Andrew comments: As Mazur and I are inclining to the opinion that Y_0(N) has no -rational points except for a finite number of values of N, we are certainly interested in knowing when this sort of thing is going on, and in putting a stop to it if at all possible. §.§ -rational cyclic isogenies There are two different proofs of Conjecture 1. A major step in one of these proofs of Conjecture 1 is the full classification of -rational cyclic isogenies of prime degree; this is proved in <cit.>: Let N be a prime number such that some elliptic curve over admits a -rational N-isogeny. Then N=2, 3, 5, 7, 13 ( the genus zero cases) or N=11, 17, 19, 37, 43, 67, or 163. This result was followed by a sequence of papers of M.A. Kenku (<cit.>) that extends the classification to cyclic isogenies of any degree: The -rational cyclic isogenies of degree N of elliptic curves defined over only occur—and do occur—if 1 ≤ N ≤ 19 or if N=21, 25, 27, 37, 43, 67, or 163. Following in the spirit of Ogg's original view of torsion points, all of these N-isogenies can be given `geometric reasons' for existing; e.g., the 37-isogenies `come by' applying the hyperelliptic involution (it is non-modular!) to the cusps of X_0(37). 20pt §.§ Rational torsion points on the Jacobians of modular curves Let J_0(N) denote the Jacobian of X_0(N). 
Noting that the cusps of X_0(N) map to torsion points of J_0(N), denote by C_0(N) ⊂ J_0(N)_ tors⊂ J_0(N) the subgroup generated by those cusps. 90pt Cusps in X_0(N)[d][dr] C_0(N)[d] X_0(N)[d] J_0(N)_ tors[r]^⊂ J_0(N) we have another, seemingly quite different type of conjecture: Conjecture 2: Let N be a prime number. We have: C_0(N)=J_0(N)_ tors() ⊂ J_0(N)() Put in another way: there are no `unexpected' -rational torsion points in J_0(N): they all come from cusps. Conjectures 1 and 2 are known. For Conjecture 1, see in <cit.> and also <cit.>. For Conjecture 2, see <cit.>. (Also see the broad survey of rational torsion results in Andrew Sutherland's <cit.>). That these conjectures are interlinked is a long story, as we discuss in Section <ref>. §.§ Conjecture 1 Letting C_n denote the cyclic group of order n, the complete list of possible (isomorphism classes) of finite groups that occur as torsion subgroups of the Mordell-Weil group of -rational points of elliptic curves are * C_n with 1≤ n ≤10, and also C_12, and * the direct sum of C_2 with C_2m, for 1≤ m ≤ 4. All these torsion groups occur infinitely often over , since the corresponding modular curves are all genus zero curves possessing a rational point.[See <cit.> where it is proved that each of these groups appears as a possible torsion group over any quadratic field.] Thanks to the work of Loïc Merel <cit.>, Joseph Oesterlé, Pierre Parent <cit.> and others, we have neat explicit upper bounds for the order of torsion points on elliptic curves over number fields of degree d. For surveys of this work, see <cit.> and <cit.>. Conjecture 1 having been completely resolved in the case of elliptic curves, has inspired more general uniform boundedness expectations for rational points; e.g., for abelian varieties A over number fields K: conjectures that the order of the torsion group of an abelian variety over a number field can be bounded in terms of the dimension of the variety and the number field; and still stronger versions: that the torsion is bounded in terms of the dimension of the variety and the degree of the number field. Moreover, it is striking how few additional isomorphism classes of K-rational torsion subgroups of elliptic curves can occur in elliptic curves over quadratic and cubic number fields K: §.§ Torsion on elliptic curves over quadratic number fields Let K range through all quadratic number fields, and E all elliptic curves over these fields. Then the torsion subgroup E(K)_ tors of E(K) is isomorphic to one of the following 26 groups: * C_n for 1≤ n ≤ 18, n 17, * the direct sum of C_2 with C_2m for 1≤ m≤ 6, * the direct sum of C_3 with C_3m for m=1,2, * C_4⊕ C_4. §.§ Torsion on elliptic curves over cubic number fields Let K range through all cubic number fields, and E all elliptic curves over these fields. Then the torsion subgroup E(K)_ tors of E(K) is isomorphic to one of the following 26 groups: * C_n for 1≤ n ≤ 18, n 17, * the direct sum of C_2 with C_2m for 1≤ m≤ 7, * C_20, C_21. There exist infinitely many -isomorphism classes for each such torsion subgroup except for C_21. In this case, the base change of the elliptic curve with LMFDB label https://www.lmfdb.org/EllipticCurve/Q/162/c/3/162.c3 to (ζ_9)^+ is the unique elliptic curve over a cubic field K with K-rational torsion group isomorphic to C_21. §.§ Conjecture 2 expanded * The order of the C_0(N) had been computed for square-free N thanks to Kubert and Lang <cit.>, and Takagi <cit.>. In this case (i.e., N square-free) the set of cusps are -rational. 
* Ohta <cit.> has proved a generalization of Ogg's conjecture in the context of square-free N. That is, he proved that the p-primary parts of J_0(N)_ tors() and of C_0(N) are equal for p ≥ 5 and p=3 if 3 doesn't divide N. Related to this, see <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>, <cit.>. And very recently the PNAS article <cit.> (Another look at rational torsion of modular Jacobians) by Ken Ribet and Preston Wake appeared, giving another approach to this issue. * In the more general context of N not squarefree, the cuspidal subgroup of J_0(N) may not consist entirely of rational points; nevertheless: Conjecture 2^*: J_0(N)_ tors() =C_0(N)()⊂ C_0(N). §.§ Conjecture 2 further expanded Now let X (over ℚ) denote either X_0(N) or X_1(N) for some N ≥ 1. Let 𝒥 be the Jacobian of X, and 𝒞⊂𝒥 the finite étale subgroup scheme of 𝒥 generated by the cusps. Let K/ℚ be the field cut out by the action of Galois on 𝒞. Thus there's an exact sequence 0→(ℚ̅/K) →(ℚ̅/ℚ)→(𝒞(ℚ̅)). Define the cuspidal defect of X to be the cokernel of 𝒞(ℚ̅)= 𝒞(K) ↪𝒥(K)_ tors. Conjecture 2^**: The `cuspidal defect' of either X listed above is trivial. § THE CONNECTION BETWEEN RATIONAL TORSION ON ELLIPTIC CURVE AND RATIONAL TORSION ON ABELIAN VARIETIES RELATED TO ELLIPTIC CURVES The easiest way to explain this is to follow the ideas of the proof of Conjecture 1 in <cit.>, rather than the ideas in the earlier and quite different proof given in <cit.>. To set things up, let N be a prime number such that X_0(N) is of genus greater than 0 and let J_/ ℤ the Néron model of the Jacobian of X_0(N) over , and X_0(N)^smooth_/ ℤ ι↪ J_/ ℤ be the smooth locus of the Zariski closure of X_0(N)_ in J_/ ℤ, the embedding being defined by sending the cusp “∞"—viewed as ℤ-valued section e∈ X_0(N)_/ ℤ—to the `origin section' of J_/ ℤ.An elliptic curve E with a cyclic isogeny of degree N over ℚ is represented by a noncuspidal Spec ℤ-valued section, x, of X_0(N)^smooth_/ ℤ and hence (via ι) also of the Néron model J_/ ℤ of the Jacobian of X_0(N) over . Suppose that such a rational point x exists, and denote by x̅ its image under the Atkin-Lehner involution w_n: X_0(N)→ X_0(N), the involution that exchanges the two cusp sections“0" and “∞" of X_0(N). Neither x nor x̅ are cuspidal sections of X_0(N)^smooth_/ ℤ. The articles <cit.> and <cit.> construct and discuss a specific smooth group scheme A_/ ℤ that is an optimal quotient[ The group scheme A is the relevant Eisenstein quotient—cf. loc.cit. The term `optimal quotient' means that the kernel of J_/ ℤ→ A_/ ℤ is a connected group scheme.] of J_/ ℤ for which these two properties are proven:(1) The generic fiber A_/ ℚ of A_/ ℤ is an abelian variety with finite Mordell-Weil group—i.e., A(ℚ) consists of rational torsion–and hence the image under f of any ℤ-valued section of X_0(N)^smooth_/ ℤ is either trivial, or else generates a cyclic finite flat subgroup of A_/ ℤ;and:(2) The following diagram 80ptx[r][dr] X_0(N)^smooth_/ ℤ[r]^ ⊂[dr]^f J_/ ℤ[d] e[r] X_0(N)^smooth_/ ℤ[r]^ f A_/ ℤ has the property that * the mapping f:X_0(N)^smooth_/ ℤ⟶ A_/ ℤ is formally smooth along the cuspidal section e, and * the diagram70pt X_0(N)[r]^f [d]^w_N A[d]^-1 X_0(N)[r]^f A commutes where “-1" denotes the involution z ↦ z^-1. So, by (i), f is formally smooth along both cusp sections. It follows that the image α f( x) ∈ A(ℤ) is either * the section defining the origin in the group scheme A_/ ℤ or else * is the generating section of a (nontrivial) cyclic finite flat subgroup scheme over ℤ, call it: 𝒢_/ ℤ⊂ A_/ ℤ. 
By the classification of such group schemes we have that either 𝒢_/ ℤ is a constant (nontrivial ) group scheme, or else 𝒢_/ ℤ≃μ_2 (μ_2 ⊂𝔾_m being the kernel of multiplication by 2 in the multiplicative group scheme 𝔾_m). These possibilities also hold, of course, for the `conjugate section' α̅ f(x̅) ∈ A(ℤ): it is either the trivial section or it generates a finite flat group scheme 𝒢̅_/ ℤ⊂ A_/ ℤ that is either a constant group scheme or μ_2. Neither α nor α̅ are the trivial section of A_/ ℤ. Since f is formally smooth along the cuspidal sections if α or α̅ were the trivial section we would be led to a contradiction, as illustrated by the diagram 10pt10pt < g r a p h i c s > in that the image of the two depicted sections would converge onto the origin section of A contradicting formal smoothness along e. So α and α̅ are generators of nontrivial group schemes 𝒢 and 𝒢̅ respectively, these being either constant or μ_2. * If 𝒢 and 𝒢̅ are constant group schemes, then α and α̅ are sections of A over disjoint (as schemes) from the trivial section of A and therefore x and x̅ are disjoint (as schemes) from the cuspidal sections of X_0(N)_/ ℤ. It follows that the elliptic curves E and E̅ that are classified by x and x̅ have potentially good reduction everywhere. * And if α or α̅ generates a subgroup isomorphic to μ_2, since μ_2 is étale outside the prime 2 it follows that E or E̅ would have potentially good reduction except for the prime 2. Even though this is the start of a sketch of the proof of Conjecture 1 in <cit.>, from what we've just described, we can prove The only prime numbers N for which there exist elliptic curves over ℚ with rational torsion points of order N are: N=2,3,5,7. First note that X_1(N) is of genus 0 for N=2,3,5,7 so there are infinitely many elliptic curves with rational torsion points of order N for these primes. That list of primes and 13 are precisely the primes for which X_0(N) is of genus 0. Since the curious prime N=13 is taken care of by <cit.> where it is proven that there are no rational points of order 13 on elliptic curves over , to prove the theorem we may suppose N to be different from N=2,3,5,7, 13; equivalently, that X_0(N) is of genus greater than 0 so the discussion above applies. In particular, we assume that E is an elliptic curve over , of potential good reduction away from p=2, and possessing a rational point of order N=11 or N ≥17, where N is a prime. Since it has such a rational point, the Néron model of E over ℤ contains a constant subgroup scheme 𝒵 isomorphic to /(N·). For p a prime, let E_p denote the fiber at the prime p of the Néron model of E, so E_p is a group (scheme) of finite order over the prime field 𝔽_p. Since the specialization of 𝒵 to E_p defines N distinct 𝔽_p-rational points of E_p it follows that N divides |E_p(𝔽_p)|. If p>2, since E is of potentially good reduction, in the terminology of the theorem of Kodaira and Néron (cf. Theorem 15.2 and Table 15.1 in Section 15 in Appendix C of <cit.>) we have that E_p is not of multiplicative type—i.e., of “type I_ν" or “I_ν^*" for any ν. So either: * p is a prime of good reduction for E, or * it is of additive reduction. If E has additive reduction at p (i.e., the Néron fiber at p is of one of the types I, II, III or I^*, II^*, III^*; see Table 15.1 in loc.cit.) then E_p is an extension of the additive group 𝔾_a over 𝔽_p by a finite group of order ≤ 4. In particular |E_p(𝔽_p)| is divisible by p and is ≤ 4p. 
It follows that Equation <ref>, applied to the prime p=3 already shows that E cannot have additive reduction at p=3 for the primes N we are considering, so it must have good reduction–i.e., be an elliptic curve— at p=3. But since any elliptic curve over 𝔽_3 has at most 7 𝔽_3-rational points, we see, by (<ref>) that N is either 2, 3, 5, or 7. A significantly more detailed outline of the proof of Conjecture 1 is given as Steps 1-4 on pages 132, 133 of <cit.>—the full proof itself being in the body of that paper. § REMARKABLE `DIOPHANTINE STABILITY' Let L/K be an extension of (number) fields, and V an algebraic variety defined over K. Denote by V(K) the set of K-rational points of V. Say that V is Diophantine Stable for L/K, or L/K is Diophantine Stable for V, if the inclusion V(K) ↪ V(L) is an isomorphism, i.e.: if V acquires no new rational points after passing from K to L. Note that Theorem <ref> tells us that: For all but finitely many positive numbers N, the curve Y_1(N) (over ) is Diophantine Stable for all quadratic extensions L/. This is striking and suggests that Diophantine Stability is a common feature.[ Filip Najman suggested that one might add a comment that the Diophantine Stability phenomenon of Corollary <ref> holds more generally over number fields of any degree, given the results referred to in Remark <ref> in Subsection <ref> above.] Consider: Suppose A is a simple abelian variety over K and all K-endomorphisms of A are defined over K. Then there is a set 𝒮 of rational primes with positive density such that for every ℓ∈𝒮 and every n ≥ 1, there are infinitely many cyclic extensions L/K of degree ℓ^n such that A(L) = A(K). If A is an elliptic curve without complex multiplication, then 𝒮 can be taken to contain all but finitely many rational primes. and this is surely not the last word regarding the extent of Diophantine Stability, specifically if the base field K is and if A=E, an elliptic curve over . We conjecture that any such E is Diophantine stable for all but finitely many Galois extensions of prime degree greater than 5. § CYCLIC ISOGENIES DEFINED OVER QUADRATIC FIELDS So, what about uniformity results regarding cyclic N-isogenies of elliptic curves ranging over all quadratic fields? This question has been addressed in <cit.> and generalized to arbitrary number fields in <cit.>. § `EXPECTED' AND `UNEXPECTED' L-RATIONAL CYCLIC ISOGENIES FOR L RANGING THROUGH QUADRATIC FIELDS A corollary of a theorem of Faltings[For a discussion of this in the context of generalization(s) of the classical Mordell Conjecture—with references listing the people who also worked on this, see Thoughts about Mordell and uniformity of finiteness bounds: <https://people.math.harvard.edu/ mazur/papers/M.pdf>] is that: (Faltings) Let K be a number field and X a curve defined over K. Then X is Diophantine Stable for all but finitely many quadratic extensions L/K unless X is—of genus 0 or 1, or—hyperelliptic or bielliptic (over K). And, for a hyperelliptic and/or bielliptic curve X defined over K, Faltings proves that there are only finitely many quadratic points (relative to K) that are not parametrized by an infinite system of quadratic points arising by X being the double cover of a rational curve Y with a K-rational point; or an elliptic curve of Mordell-Weil rank greater than 0 over K: 90pt π^-1(Y(K))[d]^π[r]^⊂ X(K̅)[d]^π Y(K)[r]^⊂ Y(K̅). 
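To make these parametrized families concrete in the hyperelliptic case, here is a small illustration of ours (plain Python; the function names f and squarefree_part are our own, and the loop is only a sketch): taking the genus 2 curve X_0(37) with the model y^2 = -x^6 - 9x^4 - 11x^2 + 37 that appears later in the paper, every rational number x_0 gives a point (x_0, √f(x_0)) of X_0(37) defined over ℚ(√d), where d is the squarefree part of f(x_0). These are exactly the quadratic points swept out by the hyperelliptic covering map x.

import math

def f(x):
    # the sextic in the model y^2 = f(x) of X_0(37) used later in the paper
    return -x**6 - 9 * x**4 - 11 * x**2 + 37

def squarefree_part(n):
    # signed squarefree part of a nonzero integer, by trial division
    sign = -1 if n < 0 else 1
    n = abs(n)
    d = 1
    p = 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if e % 2 == 1:
            d *= p
        p += 1
    return sign * d * n   # any leftover factor is a prime occurring to the first power

for x0 in range(0, 6):
    v = f(x0)
    d = squarefree_part(v)
    if d == 1:
        # f(x0) is a perfect square: x0 gives honest Q-rational points
        print(x0, v, "square:", "(%d, ±%d) in X_0(37)(Q)" % (x0, math.isqrt(v)))
    else:
        print(x0, v, "quadratic point over Q(sqrt(%d))" % d)

Note that x_0 = 1 recovers the rational points (1, ±4), while x_0 = 0 picks out the field ℚ(√37); the corresponding points (0, ±√37) reappear later in the paper as the fixed points of the Atkin–Lehner involution on X_0(37).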
10pt10pt < g r a p h i c s > §.§ Isolated quadratic points Call the set of quadratic points of X that are not among such (infinite) systems of parametrized quadratic points isolated points. The infinite systems deserve to be called `expected quadratic points (over K) in X' given the geometry of the situation. But when X=X_0(N) for some N and K = there may also be a few other points of X_0(N) over quadratic imaginary fields (√(d)) of class number 1; i.e., d = -1, -2, -3, -7, -11, -19, -43, -67, -163 that deserve the title “expected.” Namely, if E is an elliptic curve over that is CM with CM field K (√(d)) (with d in the above list) then for any positive integer N with the property that all of its prime divisors are (unramified and) split in K, E has a K-rational cyclic isogeny of degree N; hence is classified by a K-rational point of X_0(N). Such a point is therefore also “expected.” So: §.§ Sporadic quadratic points Call a quadratic point of X_0(N) sporadic (quadratic) if: * it is not a cusp, and * is isolated; i.e., * is not the inverse image of a -rational point in ℙ^1 via a hyperelliptic covering (i.e., a degree 2 mapping X_0(N)→ℙ^1), in the case where X_0(N) is hyperelliptic, and * is not the inverse image of a -rational point in an elliptic curve E via a bielliptic covering (i.e., a degree 2 mapping X_0(N)→ E), in the case where X_0(N) is bielliptic,and * is not a point of X_0(N) classifying a CM elliptic curve and cyclic isogeny of degree N as described above. Ranging over all X_0(N)'s for N ∈ℤ_≥ 1 there are only finitely many sporadic quadratic points. Surely all of us agree with the spirit of the quotation of Andrew's view regarding rational torsion in Section <ref>. That is, we're interested “in knowing when this [sporadic quadratic points] sort of thing is going on, and in putting a stop to it if at all possible." Thanks to the recent work of a number of people, the sporadic quadratic points of all of the curves X_0(N) that are hyperelliptic or bielliptic have been computed, as we will discuss in the next section. Sheldon Kamienny made the following comment: The existence of sporadic points always left me scratching my head. Do they fit into a framework, or is it just nature being unkind? § HYPERELLIPTIC X_0(N) A classical theorem of Ogg <cit.> gives the nineteen values of N for which X_0(N) is hyperelliptic (we take hyperelliptic to require that the genus is >1):10pt [ N: 22 23 26 28 29 30 31 33 35 37; genus: 2 2 2 2 2 3 2 3 3 2 ] 10pt [ N: 39 40 41 46 47 48 50 59 71; genus: 3 3 3 5 4 3 2 5 6 ] 10pt The levels N that appear in boldface above are those values of N such that X_0(N) is bielliptic as well as hyperelliptic. All sporadic quadratic points for any of those modular curves X_0(N) (except for X_0(37)) have been computed by Peter Bruin and Filip Najman in their article <cit.> (which has other interesting results as well). The case of X_0(37) is taken care of in Josha Box's paper <cit.>, in which all sporadic quadratic points have also been computed for the curves X_0(N) with N=43, 53, 61, 65, these being bielliptic curves covering elliptic curves of positive Mordell-Weil rank. These are the values of N for which X_0(N) is of genus >1 and bielliptic (over ): 10pt [ 22 26 28 30 33 34 35 37 38; 39 40 42 43 44 45 48 50 51; 53 54 55 56 60 61 62 63 64; 65 69 72 75 79 81 83 89 92; 94 95 101 119 131 ] 10pt Until very recently there remained a dozen entries in the above table for which we did not know the set of their isolated quadratic points. 
Thanks to Filip Najman and Borna Vukorepa <cit.> we now have the computation of the isolated quadratic points for all bielliptic curves X_0(N) (as we also do for all hyperelliptic X_0(N)).

§ EXCEPTIONAL QUADRATIC POINTS

Let N be prime, and w_N : X_0(N) → X_0(N) the Atkin-Lehner involution. This involution is given by sending a pair (representing a point in X_0(N)) (E, α: C_N ↪ E)—consisting of an elliptic curve E and C_N a cyclic subgroup of order N—to the pair (E', α': C'_N ↪ E'). Here E' := E/C_N and C'_N := E[N]/C_N (where E[N] is the kernel of multiplication by N in E). Forming the quotient X_0(N)^+ := X_0(N)/⟨ w_N ⟩, we get the double cover π: X_0(N) ⟶ X_0(N)^+. For N an integer where X_0(N)^+ is of genus >1,
* call a ℚ-rational point of X_0(N)^+ exceptional if it is neither a cusp nor a point classifying a CM elliptic curve;
* call a quadratic point P of X_0(N) exceptional if it is not defined over ℚ (i.e., it is an honest quadratic point) and the image of P in X_0(N)^+ is an exceptional ℚ-rational point.
Exceptional points deserve the adjective, since they have the intriguing structure of a duo of cyclic N-isogenies: E → E' and E' → E. This structure can also be combined into a single abelian surface defined over ℚ: A := E × E', endowed with an endomorphism “√N": (x,y) ↦ (α'(y), α(x)).

What tools do we have to compute the exceptional ℚ-rational points on X_0^+(N)? The classical method of Chabauty-Coleman (see Section <ref>) computes a usable bound for the number of rational points on a curve X (of genus >1) provided that the rank r of the Mordell-Weil group of the Jacobian of X is strictly less than its genus g. But the Birch and Swinnerton-Dyer conjecture predicts that (for N prime) the rank r_0(N)^+ of J_0(N)^+(ℚ), the Mordell-Weil group of the Jacobian of X_0(N)^+, is greater than or equal to g_0(N)^+, the genus of X_0(N)^+. So this classical method can't be brought to bear here. Computationally, we have many examples where there's actual equality: r_0(N)^+ = g_0(N)^+. (Indeed, this is true for all N < 5077 for which g_0(N)^+ > 1.) Happily, for exactly such cases—i.e., for curves X of genus >1 with r = g—we have the more recent “Quadratic Chabauty–Coleman–Kim" method that offers a new approach to compute the set of all ℚ-rational points[We think it is reasonable to conjecture that the average value of the ratios r_0(N)^+/g_0(N)^+ is 1; e.g., as N ranges through prime values; are these ratios bounded?]. For example see <cit.> (and Section <ref>). Indeed, there are also two new viewpoints on quadratic Chabauty, the geometric perspective of Edixhoven–Lido <cit.> and the (p-adic) Arakelov theoretic one of Besser–Müller–Srinivasan <cit.>. We will say more about these in the following section.

The list of curves X_0(N)^+ of genus 2 or 3 with N prime is a result of Ogg. We have the following:

Theorem (Ogg). For N prime, X_0(N)^+ is of genus 2 if and only if N ∈ {67, 73, 103, 107, 167, 191}, and it has genus 3 if and only if N ∈ {97, 109, 113, 127, 139, 149, 151, 179, 239}.

Elkies and Galbraith <cit.> found exceptional rational points on X_0^+(N) for N = 73, 91, 103, 191 and N = 137, 311 (which are of genus 4). In <cit.>, it was shown that the only prime values of N with X_0^+(N) of genus 2 or 3 that have an exceptional rational point are N = 73, 103, 191 (all genus 2). In particular, for prime N, if X_0^+(N) is of genus 3, it has no exceptional rational points.
Adžaga, Arul, Beneish, Chen, Chidambaram, Keller, and Wen <cit.> showed that the only prime values of N with X_0^+(N) of genus 4, 5, or 6 that have an exceptional rational point are N = 137 and 311. Thus for all of the above values of N, we have a complete understanding of the exceptional quadratic points on X_0(N). We briefly discuss the work of <cit.> on the genus 4 curve X_0(311)^+. Using the canonical embedding, a model for X_0(311)^+ is given by the following equations in ℙ^3: X^2 + W Y - 2 X Y + 2 Y^2 + 7 X Z - 8 Y Z + 13 Z^2 = 0, W X^2 - 2 W X Y + X^2 Y - W Y^2 - X Y^2 - 2 Y^3 + W^2 Z + 6 W X Z - X^2 Z - W Y Z + 5 X Y Z + 4 Y^2 Z + 7 W Z^2 - 4 X Z^2 - 2 Z^3 = 0. Using quadratic Chabauty (see Section <ref>) at p=5 on a plane model, they show that there are precisely five rational points on the curve: rational point on X_0(311)^+ type of point (1 0 0 0) cusp (1 -1 -1 0) CM, D = -11 (1 2 -1 -1) CM, D=-19 (2 0 -1 0) CM, D=-43 (6 8 -1 -2) exceptional Galbraith <cit.> had earlier computed that the j-invariant of the ℚ-curve corresponding to the exceptional point is j = 31244183594433270730990985793058589729152601677824000000 ± 1565810538998051715397339689492195035077551267840000√(39816211853). See also the survey article <cit.> (and <cit.>) in which exceptional points found by Elkies and Galbraith are defined and studied in the context of ℚ-curves; and for the list of the seven known exceptional N-isogenies, these being rational over a quadratic field of discriminant Δ: [ N g Δ; 73 2 -127; 91 2 -3 · 29; 103 2 5· 557; 125 2 509; 137 4 -31159; 191 2 61· 229· 145757; 311 4 11· 17· 9011· 23629; ] By the work of <cit.> this gives a complete list of exceptional isogenies arising from rational points on the curves X_0(N)^+ of level N and genus at most 6. Are these the only exceptional isogenies? There's lots to be done. § THE METHOD OF CHABAUTY–COLEMAN–KIM AND `QUADRATIC CHABAUTY' §.§ The classical method of Chabauty The aim of this `classical' method is to prove finiteness of the set of -rational points of a curve X of genus g>1 under the assumption that the rank r of the Mordell–Weil group of the Jacobian J of X is small; specifically, if it is strictly less than g. One can assume that X has at least one -rational point, for otherwise the job is done. Choosing a rational point b ∈ X(), form the Abel-Jacobi embedding i_b: X → J P ↦ [(P) - (b)]. For any prime p viewing J(_p) as a p-adic analytic group (of dimension g) containing the Mordell-Weil group J() as subgroup, denote by Γ_p ⊂ J(_p) the topological closure of J() in J(_p) noting that, given our hypothesis, its dimension is less than r. We have:120ptX()[r]^i_b[d]^⊂ Γ_p[d]^⊂ X(_p)[r]^i_b J(_p)X(_p) is a (proper) p-adic analytic subvariety of J(_p) that generates J(_p) as a p-adic analytic group. It follows that X(_p) is not contained in the proper subgroup Γ_p and therefore X(_p)∩Γ_p is finite; hence so is X(). How can one make this method effective? §.§ The method of Chabauty as augmented by Coleman The Chabauty–Coleman method <cit.> is one of our most practical tools for actually computing the finite set of rational points on a curve X of genus greater than 1 defined over the rationals, subject to the same Chabauty condition; namely that the Mordell-Weil rank r of the Jacobian of the curve is strictly less than its genus.. 
Robert Coleman constructed, in the above conditions, a p-adic analytic function ϕ on the p-adic analytic variety X^an_/_p such that the zeroes of ϕ on X(_p) are * reasonably computable (to any approximation), * finite in number, and * include X(). The construction of such a ϕ uses Coleman's p-adic abelian integrals on the Jacobian of the curve. Let X be a curve (of genus g>1) defined over the rationals and let J be its Jacobian. Now fix a prime p of good reduction for X and a rational point b ∈ X(). Consider, as before, the Abel-Jacobi embedding i_b: X → J given by P ↦ [(P) - (b)]. Coleman <cit.> proved that there is a p-adic line integral on holomorphic differentials on the curve satisfying several nice properties (linearity in the integrand, additivity in endpoints, pullbacks under rigid analytic maps, Galois compatibility). The map J(_p) × H^0(X__p, Ω^1) →_p (Q, ω) ↦⟨ Q, ω⟩ is additive in Q, is _p-linear in ω and is given by ⟨ Q, ω⟩ = ⟨ [D], ω⟩∫_D ω for D ∈^0(X) with Q = [D]. Then ⟨ i_b(P), ω⟩ = ∫_b^P ω. The embedding i_b induces an isomorphism of g-dimensional vector spaces H^0(J__p, Ω^1) ≃ H^0(X__p, Ω^1), giving us the pairing J(_p) × H^0(J__p, Ω^1) →_p (Q, ω_J) ↦∫_0^Q ω_J. This gives a homomorphism log: J(_p) → H^0(J__p, Ω^1)^*, where log is the logarithm on the p-adic Lie group J(_p), and we have the following diagram 8.5cm![->,>=stealth',baseline=(current bounding box.center)] [] (X) X(); [right of=X, node distance=3.7cm] (Xp) X(_p); [below of=X, node distance=1.5cm] (Hf) J(); [right of=Hf,node distance=3.7cm] (Hfp) J(_p); [right of=Hfp,node distance=3.5cm](Dieu) H^0(J__p,Ω^1)^∗≃ H^0(X__p,Ω^1)^∗; (X) edge node[left] (Hf); (Xp) edge node[left] (Hfp); (X) edge (Xp); (Hf) edge node[above] (Hfp); (Hfp) edge node[above]log(Dieu); (Xp) edge node[above right] (Dieu); Recall that under the hypothesis r < g, the intersection X(_p)_1 X(_p) ∩Γ_p and, consequently, X() is finite. Coleman gave a technique to compute X(_p)_1, by his construction of p-adic integrals that vanish on Γ_p: in particular, considering an integral of an annihilating differential ω, a holomorphic differential such that ⟨ P, ω⟩ = 0 for all P ∈ J(), then computing the zero locus of this integral on X(_p). Bounding the number of zeros of this integral via fairly elementary p-adic analysis (for good p > 2g) yields the bound #X() ≤#X(_p)_1 ≤#X(_p) + 2g-2. In Section <ref>, we give a worked example of the Chabauty–Coleman method. §.§ The method of Chabauty–Coleman–Kim The construction above crucially uses an assumption that the rank of the Jacobian is small relative to the genus. Nevertheless, there are many interesting curves where this hypothesis is not satisfied, including a number of modular curves we have already seen. In a series of papers <cit.>, Minhyong Kim laid out a program to extend Chabauty–Coleman relaxing the condition on Mordell–Weil rank, going beyond the abelian confines of the Jacobian, replacing it by a sequence of Selmer varieties, which are carved out of unipotent quotients of π_1^(X_)__p, the _p-étale fundamental group of X_ with base point b. We first recast the Chabauty–Coleman method (see also <cit.>, <cit.>) using p-adic Hodge theory, which adds an extra row of compatibilities to diagram (<ref>). Let V = ^1_(X_, _p)^* and V_^1_(X__p)^*, viewed as a filtered vector space with filtration dual to the Hodge filtration. We have an isomorphism V_/F^0 ≃^0(X__p, Ω^1)^*. Let G_T be the maximal quotient of G_ unramified outside T, the set of primes of bad reduction of X, together with the prime p. 
Let G_p denote the absolute Galois group of _p. Then the étale formulation of Chabauty–Coleman is given by the following diagram, where the last row is of Bloch–Kato Selmer groups: 8.5cm![->,>=stealth',baseline=(current bounding box.center)] [] (X) X(); [right of=X, node distance=3.7cm] (Xp) X(_p); [below of=X, node distance=1.5cm] (Hf) J(); [right of=Hf,node distance=3.7cm] (Hfp) J(_p); [right of=Hfp,node distance=3.5cm](Dieu) ^0(X__p,Ω^1)^∗; [below of=Hf, node distance=1.5cm] (H1f) ^1_f(G_T,V); [right of=H1f,node distance = 3.7cm] (H1fp)_f^1(G_p,V); [right of=H1fp, node distance=3.5cm](H1dR) _1^(X__p)/F^0; (X) edge node[left] (Hf); (Xp) edge node[left] (Hfp); (X) edge (Xp); (Hf) edge node[above] (Hfp); (Hfp) edge node[above]log(Dieu); (Xp) edge node[above right]i_b (Dieu); (Hf) edge node[above] (H1f); (Hfp) edge node[above] (H1fp); (Dieu) edge node[right] ≃ (H1dR); (H1f) edge node[right] (H1fp); (H1fp) edge node[above] ≃ (H1dR); . Now let U be a Galois-stable unipotent quotient of π_1^(X_)__p. Kim defined global and local unipotent Kummer maps j_U and j_U_v such that the following diagram commutes: [->,>=stealth',baseline=(current bounding box.center)] [] (X) X(); [right of=X, node distance=4.7cm] (Xv) ∏_v ∈ TX(_v); [below of=X, node distance=1.5cm] (Hf) ^1(G_T,U); [right of=Hf,node distance=4.7cm] (Hfv) ∏_v ∈ T^1(G_v,U).; (X) edge node[left]j_U (Hf); (Xv) edge node[right]∏ j_U,v (Hfv); (X) edge (Xv); (Hf) edge node[above]∏loc_v (Hfv); Kim proved that the nonabelian pointed cohomology sets ^1(G_T, U) and ^1(G_v, U) are affine algebraic varieties over _p. Motivated by the classical study of Selmer groups, he then refined ^1(G_T, U) by local conditions to produce a Selmer variety. We give an adapted version <cit.> of the definition here: The Selmer variety (U) is the reduced scheme associated to the subscheme of ^1(G_T, U) containing those classes c such that * _p(c) is crystalline, * _ℓ(c) ∈ j_U,ℓ(X(_ℓ)) for all ℓ p, * the projection of c to ^1(G_T,V) comes from an element of J()⊗_p. Now the Selmer variety gives rise to the following interesting set of points X(_p)_U j_p^-1(_p ((U))) ⊂ X(_p ). We have that X()⊂ X(_p)_U. Suppose that U is a Galois-stable quotient of U_n, the maximal n-unipotent quotient of π_1^(X_, b)__p. Then X() ⊂ X(_p)_n X(_p)_U_n⊂ X(_p)_U . The depth-n Selmer set X(_p)_n can be computed in terms of n-fold iterated Coleman integrals, and one has a series of refinements X() ⊂⋯⊂ X(_p)_n ⊂ X(_p)_n-1⊂⋯⊂ X(_p)_2 ⊂ X(_p)_1. Note that the depth-1 Selmer set is the Chabauty–Coleman set from before. We refer to the points in X(_p)_n as the set of Selmer points of level n. We call the points in X(_p)_n ∖ X() the set of mock-rational Selmer points of level n. Kim has conjectured that for n ≫ 0, the set X(_p)_n is finite. This conjecture is implied by the conjecture of Bloch–Kato. 
Putting everything together, Kim's program is to study finiteness of X(_p)_U using p-adic Hodge theory and the following diagram is the nonabelian generalization of (<ref>): [->,>=stealth',baseline=(current bounding box.center)] (m) [matrix of math nodes, row sep=3em, column sep=4em, minimum width=2em] X() X(_p) (U) _f^1(G_p,U) U^/Fil^0 ; [-stealth] (m-1-1) edge (m-1-2) edge node [left] j_U (m-2-1) (m-1-2) edge node [left] j_U,p (m-2-2) edge node [above,right] j_U^ (m-2-3) (m-2-1) edge node [above] loc_U,p (m-2-2) (m-2-2) edge node [above] ≃ (m-2-3); Computing the depth-2 Selmer set (or a slightly larger finite set containing it), known as quadratic Chabauty, has seen progress in recent years <cit.>, via aspects of the theory of p-adic height functions <cit.>. The quadratic Chabauty set X(_p)_2 is finite for those curves that satisfy the rank bound <cit.> r < g + ρ - 1, where ρ((J)) is the Néron–Severi rank of the Jacobian over . To carry out the quadratic Chabauty method, one uses a nontrivial element of ((J) →(X)) to construct a nonabelian quotient U of U_2, which is used to compute X(_p)_U. Siksek <cit.> showed that modular curves of genus 3 or more have ρ at least 2, and consequently, for these curves, quadratic Chabauty allows one to consider Jacobians of higher rank than allowed by Chabauty–Coleman. Balakrishnan, Dogra, Müller, Tuitman, and Vonk <cit.> made various aspects of quadratic Chabauty computationally practical, using explicit p-adic cohomology to compute a certain (global) p-adic height of Nekovář, depending on a choice of a nontrivial element of ((J) →(X)). Roughly speaking, the method starts from the following observation: the global p-adic height admits a decomposition as a sum of local heights: a local height at p that can be computed using p-adic Hodge theory, and a finite sum of local heights away from p that, in certain favorable conditions, can be shown to be trivial—or if not trivial, at least a quantity that can be computed from the geometry of a regular model of the curve. Moreover, the global p-adic height is a quadratic form on H^0(X, Ω^1)^*. Choosing an explicit basis for the space of quadratic forms in terms of Coleman integrals, and knowing sufficiently many rational points (either on X or on J) and their p-adic heights, one can compute a locally analytic function whose zero locus contains X(_p)_2. Recently, two new perspectives on quadratic Chabauty have emerged: the geometric one of Edixhoven–Lido <cit.> and the p-adic Arakelov theoretic one of Besser–Müller–Srinivasan <cit.>. In geometric quadratic Chabauty, Edixhoven and Lido <cit.> use line bundles over the Jacobian, the Poincaré torsor (a biextension of the Jacobian by 𝔾_m), and models over the integers to study rational points under the same rank bound hypothesis. Besser, Müller, and Srinivasan <cit.> give a new construction of p-adic heights on varieties over number fields using p-adic adelic metrics on line bundles, in the spirit of Zhang's work on real-valued heights using adelic metrics <cit.>. This leads them to formulate p-adic Arakelov quadratic Chabauty. § RATIONAL POINTS ON X_0(37): THREE PERSPECTIVES As a concrete application of the techniques discussed so far, we present here three perspectives on rational points on the modular curve X_0(37). For further discussion, see Section 5 of <cit.>; and for more, see Section 5 of <cit.>. The modular curve X X_0(37) is of genus 2 and therefore is hyperelliptic. 
Denote by X_0(37)σ⟶ X_0(37) its hyperelliptic involution, and by X_0(37)w⟶ X_0(37) its Atkin-Lehner involution. The involutions σ and w commute, generating a Klein group 𝒢 of automorphisms. The automorphisms 1, w, σ, wσ are defined over and are the only automorphisms of X_0(37) over . Form the quotients X_0(37)[dl]_i_0[d]^x[dr]^i_1 E_0 X_0(37)/⟨σ· w⟩ ℙ^1_/≃ X_0(37)/⟨σ⟩ E_1 X_0(37)/ ⟨ w ⟩. By the Riemann–Hurwitz formula, the ramification locus of each of the double covers: X_0(37) i_0⟶E_0 and X_0(37) i_1⟶ E_1 are -rational (effective) divisors of degree two: * D_0 {η_0,η̅_0}⊂ X_0(37)—for (<ref>). * D_1 {η_1,η̅_1}⊂ X_0(37)—for (<ref>). In particular, D_1 is the fixed point set of w and D_0 is the fixed point set of wσ. Note that (since σ commutes with w) each of these involutions (σ, w, wσ) preserves D_1 and D_0. The involution wσ interchanges the points η_1,η̅_1. So their image e_0∈ E_0—which is therefore the image of a -rational divisor in X_0(37)— is -rational. Consequently {η_1,η̅_1} either consists of a pair of -rational points[ that's not the case; see Lemma <ref> below] or a conjugate pair of quadratic points in X_0(37). For the same reason the involution w preserves the ramification divisor of wσ and interchanges the points η_0,η̅_0 and therefore their image e_1∈ E_1 is -rational. A visit to the L-Functions and Modular Forms Database (LMFDB) <http://www.lmfdb.org/EllipticCurve/Q/> (with a bit of work) will get you that: * E_0 is the elliptic curve https://www.lmfdb.org/EllipticCurve/Q/37/b/237.b2: y^2+y=x^3+x^2-23x-50. Its Mordell-Weil group is of order 3. * E_1 is the elliptic curve https://www.lmfdb.org/EllipticCurve/Q/37/a/137.a1: y^2+y=x^3-x. It has Mordell-Weil rank 1, and its group of -rational points is isomorphic to . * (Classical Chabauty gives finiteness) Let J_0(37) denote the Jacobian of X_0(37). We have:80pt X_0(37)[r]^⊂[dr]^i_0× i_1 J_0(37)[d]^ϕ E_0× E_1 where i_0,i_1 are (as above) the modular parametrization of E_0,E_1, and ϕ: J_0(37) E_0× E_1 is an isogeny. Since {E_0× E_1}() is—by the data above—a group isomorphic to ×/3 (contained in a cyclic group of order three times the elliptic curve E_1) we see that the Zariski closure of the group of -rational points J_0(37)() is an algebraic subgroup in J_0(37) of codimension 1 so can intersect only finitely with X_0(37)—giving that X_0(37)() is finite. * (The projection to E_0 gets us the precise set of -rational points)The cusp ∞∈ X_0(37) is a -rational point, as are the four points 𝒮=𝒢·∞={∞, w(∞)= the cusp 0, σ(∞), σ( 0) }. These are the only four -rational points on X_0(37). Returning to the mapping of degree two X_0(37)() i_0⟶ E_0(), since E_0() is cyclic of order three, we see that * the pair {∞, σ w(∞)=σ( 0)} maps to the origin in E_0() and * the pair { 0, σ w( 0)= σ(∞)} maps to a (nonzero) point e∈ E_0(). * Recalling that the pair {η_0,η̅_0} discussed above maps to e_0∈ E_0() and noting that e_0 cannot be any of the above two -rational points of E_0, it must be the third -rational point, giving us: e_0 = 2e=-e ∈ E_0(). The inverse image of e_0 =2e=-e in X_0(37) consists of a pair of (√(37))-conjugate points {η, η̅}∈ X_0(37)((√(37))). X_0(37)() = 𝒮.Proof of Lemma <ref> The involutions σ and w of X_0(37) are easily described in terms of the model of X_0(37) given by Equation (<ref>): y^2 = -x^6 -9x^4 - 11x^2 + 37. We have: * (x,y) σ↦ (x,-y), * (x,y) w↦ (-x,y) and * (x,y) wσ↦ (-x,-y). The proof of (c) follows from (a) and (b) by composition. 
The proof of (a) is simply that the quotient of the involution (x,y) ↦ (x,-y) is of genus zero as is clear from the equation; so that involution is the hyperelliptic involution σ. The proof of (b) follows from considering the following model (over ) for the expression of X_0(37) as the double cover X_0(37) i_1⟶ E_1 = X_0(37)^+ over : X_0(37):[d]^i_1 y^2 = -x^6 -9x^4 - 11x^2 + 37 E_1: v^2 = -u^3 -9u^2 - 11u + 37[u]^u=x^2; v=y Since {η_1,η̅_1} consists of the fixed points of the involution w, we have: {η_1,η̅_1} = { (0, ±√(37))}, from which it follows that wσ:(0, ±√(37)) ↦ (0, ∓√(37)) and therefore i_0(η_1)=i_0(η̅_1) = i_0(0, +√(37)) = i_0(0, -√(37)) ∈ E_0(), i.e., it is a -rational point of E_0, which can be neither i_0(∞) nor i_0( 0) so must be the third -rational point. * (The classical Chabauty method would also give us the set of -rational points)Fix the model y^2 = g(x) -x^6 -9x^4 - 11x^2 + 37 for X over . Since J_0(37)() has Mordell–Weil rank 1, which is less than the genus of the curve, we may carry out the Chabauty–Coleman method to compute the set X(). We use the prime p = 3. Searching in a box for rational points of small height, one finds the points (± 1, ± 4) ∈ X(). The point P [(1,-4) - (-1,4)] ∈ J_0(37)() is non-torsion, since the 3-adic Coleman integral of a holomorphic differential along this point is nonzero: ∫_(-1,4)^(1,-4)x dx/y = 3^2 + 2 · 3^3 + 3^4 + 2 · 3^5 + 3^7 + O(3^9). Moreover, ∫_(-1,4)^(1,-4)dx/y = O(3^9). Thus we may take dx/y as our annihilating differential. The curve X over _3 has the following rational points: (0,1), (0,2), (1,1), (1,2), (2,1), (2,2) ∈ X__3(_3), which correspond to the residue disks over which we carry out our computation. Fixing as our basepoint (-1,4) ∈ X(), we start in the residue disk corresponding to (0,1). We take the following point in the residue disk S_0 = (0, 1 + 2· 3^2 + 3^4 + 2 · 3^5 + 3^7 + 2 · 3^8 + 2 · 3^9 + O(3^10)), at which we compute our local coordinate, producing S_t = (t, -3788 + (2159 + O(3^10))t^2 - (15737 + O(3^10))t^4 + - (23833 + O(3^10))t^6 + (746· 3^3 + O(3^10))t^8 + O(t^10)) =: (x(t),y(t)). We wish to compute the zeros of the power series I(3T), where I(T) = ∫_(-1,4)^S_0dx/y + ∫_S_0^S_Tdx(t) dt/y(t). We find I(3T) = (3 + 3^3 + 2 · 3^4 + 2 · 3^5 + 3^6 + 3^7 + 2 · 3^8 + 3^9 + 3^10 + O(3^11))T + (3^2 + 2 · 3^4 + 2 · 3^5 + 3^7 + 2 · 3^8 + 2 · 3^9 + 3^10 + O(3^12))T^3 + (3^6 + 3^7 + 2 · 3^8 + 3^9 + 3^10 + 3^11 + 2 · 3^13 + 2 · 3^14 + O(3^15))T^5 + (3^8 + 2 · 3^9 + 3^10 + 2 · 3^11 + 2 · 3^12 + 2 · 3^13 + 2 · 3^15 + O(3^17))T^7 + (3^7 + 2 · 3^8 + 2 · 3^10 + 2 · 3^11 + 3^12 + 3^14 + 2 · 3^16 + O(3^17))T^9 + O(T^10), which has precisely one zero at T = 0, corresponding to S_0, which we can identify, after fixing a choice of √(37)∈_3, as (0, √(37)). Continuing in this way, parametrizing each residue disk by a local coordinate and computing the zeros of the corresponding I(3T) in each residue disk, we find that X(_3)_1 = {(0, ±√(37)), (± 1, ± 4)}, from which we immediately produce X() = (± 1, ± 4). It was fairly lucky that X(_3)_1 = {(0, ±√(37)), (± 1, ± 4)} and was not much larger. Finding a small good prime p such that there are no mock-rational Selmer points—or where the mock-rational points are easily-recognized algebraic points—may be an issue. By the Weil bound, we know that #X(_p) grows linearly as p grows. So if we had used a larger prime p in the classical Chabauty–Coleman method, we would expect more p-adic points in X(_p)_1, and we may not be able to immediately recognize these extra points. 
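As a small supplement to the computation just described (our own plain-Python check, using only integer arithmetic; the genuine Coleman-integral computation is of course not reproduced here), one can confirm the residue disks and the rational points used above:

# A small numerical supplement (ours): reduce the model mod 3 and list the
# residue disks, confirming the six F_3-points used in the computation above.
p = 3

def f(x):
    return -x**6 - 9 * x**4 - 11 * x**2 + 37

# the four known rational points (±1, ±4) really lie on the curve
assert all(y * y == f(x) for x in (1, -1) for y in (4, -4))

# affine F_3-points of the reduction of y^2 = f(x); these index the residue disks
disks = [(x, y) for x in range(p) for y in range(p) if (y * y - f(x)) % p == 0]
print(disks)   # [(0, 1), (0, 2), (1, 1), (1, 2), (2, 1), (2, 2)]

# The leading coefficient -1 of f is not a square mod 3, so for this model the two
# points at infinity of the smooth projective curve are not F_3-rational, which is
# why only these six affine disks occur.

# f(0) = 37 is a unit and a square mod 3 (Euler's criterion below), so by Hensel's
# lemma sqrt(37) lies in Z_3 and (0, ±sqrt(37)) are genuine Q_3-points, as found above.
assert f(0) == 37 and pow(37, (p - 1) // 2, p) == 1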
30pt § QUADRATIC POINTS ON BIELLIPTIC CURVES OF GENUS 2 USING QUADRATIC CHABAUTY In the previous section, we considered the problem of determining the finitely many rational points on X_0(37). We could also study the finite sets X_0(37)(K) for various other number fields K, one number field at a time. Or, we could further study ^d(X_0(37))(), as described in Section <ref>, which would tell us about all degree d points on X_0(37). We start by considering X_0(37)(K) for a fixed quadratic field K. If the rank of J_0(37)(K) is now 2, and if this is because the rank of E_0(K) increases to 1—recall from Section <ref> that the rank of E_0() is 0—then the Chabauty–Coleman method no longer applies. However, since X_0(37) is bielliptic and genus 2, we can use the method of <cit.>, which gives a particularly explicit description of quadratic Chabauty functions using p-adic height functions and double Coleman integrals on elliptic curves, for bielliptic genus 2 curves. We describe this below in some generality, and then use it to study rational points on X_0(37) over K=(i). Let K = or a quadratic imaginary extension, and let X/K be a genus 2 bielliptic curve y^2 = x^6 + a_4x^4 + a_2x^2 + a_0, with a_i ∈ K. Let C_1 and C_2 be the elliptic curves over K defined by the equations C_1: y^2 = x^3 + a_4 x^2 + a_2 x + a_0, C_2: y^2 = x^3 + a_2x^2 + a_4a_0 x + a_0^2, and let f_1: X → C_1 be the map that sends (x,y) to (x^2,y) and f_2: X → C_2 be the map that sends (x,y) → (a_0 x^-2, a_0 yx^-3). We will be considering the case where the Mordell-Weil ranks of C_1 and C_2 over K are equal to 1. Letting J denote the jacobian of X we have that the rank of J over K is 2. The natural mapping defined over K ^2(X) → J (i.e., setting p=2 in Equation <ref> in Section <ref>) is * an isomorphism if X is not hyperelliptic, or is * an isomorphism in the complement of an `exceptional fiber' ℰ⊂^2(X) isomorphic to ℙ^1 over K if X is hyperelliptic. The rank two group J(K) and—if X is hyperelliptic J(K) together with ℰ≃ℙ^1(K) `parametrize' —in the appropriate sense—all quadratic points of X over K.These parametrization are neat, and explicit, but they still leave untouched the question: for a given quadratic field K what—exactly—is the finite set X(K)? We want to use quadratic Chabauty to answer such questions. Fix some auxiliary choices, including an idèle class character χ: G_K^ab→_p. When K = fix a prime = (p) to be a prime of good ordinary reduction. When K is imaginary quadratic, take p to be a rational prime that splits as where both and are primes of good ordinary reduction. Let h_C_1 and h_C_2 denote the global -adic height functions associated to the choices made above and h_C_i, the respective local height at , with the global height written as the sum of local heights h_C_i = ∑_v h_C_i, v. Suppose C_1(K) and C_2(K) each have Mordell–Weil rank 1, and let P_i ∈ C_i(K) be points of infinite order. Let α_i = h_C_i(P_i)/[K:]log_C_i(P_i)^2. Let Ω denote the finite set of values taken by -∑_v∤ p (h_C_1, v(f_1(z_v)) - h_C_2,v(f_2(z_v)) - 2χ_v(x(z_v))), for (z_v) ∈∏_v∤ p X(K_v). Then X(K) is contained in the finite set of z ∈ X(K_) cut out by the quadratic Chabauty function h_C_1,(f_1(z)) - h_C_2, (f_2(z)) - 2χ_(x(z)) - α_1log_C_1(f_1(z))^2 + α_2log_C_2(f_2(z))^2 ∈Ω, where log_C_i(Q) = ∫_∞^Q dx/2y, the single Coleman integral we saw in the Chabauty–Coleman method (with ∞ denoting the point at infinity on the corresponding elliptic curve) and h_C_i,(z) is a double Coleman integral. 
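Before specializing to X_0(37) over ℚ(i), here is a small consistency check of ours on the displayed bielliptic structure (plain Python with exact rational arithmetic; the coefficient triples below are arbitrary sample values, and the function name check_bielliptic_maps is our own). It verifies, at random rational points, that the maps f_1 and f_2 written above really do carry y^2 = x^6 + a_4x^4 + a_2x^2 + a_0 to C_1 and C_2; the genuine quadratic Chabauty computation (p-adic heights and double Coleman integrals) is carried out in the SageMath code cited in the bibliography.

from fractions import Fraction as F
import random

def check_bielliptic_maps(a4, a2, a0, trials=20):
    # Check, at random rational x, that the maps displayed above,
    #   f1(x, y) = (x^2, y)  and  f2(x, y) = (a0/x^2, a0*y/x^3),
    # send y^2 = x^6 + a4*x^4 + a2*x^2 + a0 to
    #   C1: y^2 = x^3 + a4*x^2 + a2*x + a0   and
    #   C2: y^2 = x^3 + a2*x^2 + a4*a0*x + a0^2.
    # Only y^2 appears in the target equations, so we may substitute the sextic for y^2.
    for _ in range(trials):
        x = F(random.randint(1, 50), random.randint(1, 50))   # a random nonzero rational
        ysq = x**6 + a4 * x**4 + a2 * x**2 + a0                # = y^2 on X
        u1 = x**2                                              # x-coordinate of f1(x, y)
        assert ysq == u1**3 + a4 * u1**2 + a2 * u1 + a0
        u2 = a0 / x**2                                         # x-coordinate of f2(x, y)
        vsq = a0**2 * ysq / x**6                               # (a0*y/x^3)^2
        assert vsq == u2**3 + a2 * u2**2 + a4 * a0 * u2 + a0**2
    return True

# a few arbitrary sample coefficient choices (purely illustrative)
for a4, a2, a0 in [(F(1), F(-2), F(5)), (F(-3), F(7), F(2)), (F(9), F(11), F(-4))]:
    print((a4, a2, a0), check_bielliptic_maps(a4, a2, a0))

This only checks the algebraic shape of the two quotient maps; the content of the theorem above lies in the height and Coleman-integral computations, not in this identity.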
Over K = (i), the elliptic curves https://www.lmfdb.org/EllipticCurve/Q/37/a/137.a1 and https://www.lmfdb.org/EllipticCurve/Q/37/b/237.b2 each have rank 1. The computation in <cit.>, applies quadratic Chabauty as described above at the primes p = 41, 73, 101 to produce, for each prime p, a finite superset of p-adic points containing X(K). This is then combined with another method, the Mordell–Weil sieve, to give X_0(37)(K)= {(± 2i, ± 1),(± 1, ± 4), ∞, 0}. §.§ Explicitly determining quadratic points Quadratic Chabauty for bielliptic curves over was subsequently refined by Bianchi <cit.> using p-adic sigma functions in place of double Coleman integrals. This was recently extended by Bianchi and Padurariu <cit.>, where an implementation was given to study rational points on all rank 2 genus 2 bielliptic curves in the LMFDB, including the Atkin–Lehner quotient curve X_0(166)^* X_0(166)/⟨ w_2, w_83⟩ (with LMFDB label https://www.lmfdb.org/Genus2Curve/Q/13778/a/27556/113778.a.27556.1), as well as the Shimura curve X_0(10,19)/⟨ w_190⟩. Using a slight extension of their work to K = (i), as done in <cit.>, one can use a smaller prime to carry out the computation of a finite set containing the depth 2 Selmer set for X_0(37). (Recall Definition <ref> in Section <ref>.) We carried out this computation for p=13 and recovered the points (± 2i, ± 1),(± 1, ± 4), and ∞, 0. But lurking within the set of depth 2 Selmer points, we also found the algebraic points (±√(-3), ± 4), these being initially observed 73-adically in <cit.>. We also found several other mock-rational Selmer points, such as (5 + 8 · 13 + 12 · 13^2 + 4 · 13^3 + 2 · 13^4 + 3 · 13^5 + O(13^6), 1 + 3 · 13 + 3 · 13^2 + 9 · 13^3 + 12 · 13^4 + 5 · 13^5 + O(13^6)). . See Banwait–Najman–Padurariu <cit.> for an extensive discussion—and for results—regarding quadratic points on X_0(N). In particular they show that X_0(37)(ℚ(√(d))) = X_0(37)() for d= -6846, -2289, 213, 834, 1545, 1885, 1923, 2517, 2847, 4569, 6537, 7131, 7302, 7319, 7635, 7890, 8383, 9563, 9903. We could continue by varying the quadratic fields K, and in principle, if the rank is not too large, apply Chabauty–Coleman, quadratic Chabauty or variations thereof—possibly combining with other Diophantine techniques—to determine the K-rational points on X_0(37). But eventually the ranks outpace our current collection of Diophantine tools. For instance, over K = (√(-139)), a computation reveals that the elliptic curve E_0 has rank 3, as does E_1, and so J_0(37)(K) here altogether has rank 6, making it a challenge for existing methods. Now indeed, since X_0(37) is hyperelliptic, it has infinitely many quadratic points. Nevertheless, one can describe all quadratic points on X_0(37), using the ^2 perspective and the maps to the various quotients of X_0(37) in the diagram (<ref>), as was done by Box <cit.>. The hyperelliptic covering map x: X_0(37) →ℙ^1 is one source of infinitely many rational points, and the rank 1 elliptic curve quotient E_1 is another source of infinitely many rational points. Finally, the elliptic curve quotient E_0 gives three rational points, and Box pieced together these three sources of rational points to describe ^2(X_0(37))(), as below. The x-map gives us all points {(x_i,√(g(x_i))), (x_i,-√(g(x_i)))}∈^2(X_0(37))(), where x_i ranges through all rational numbers. We can find P_1 ∈ X_0(37)((√(-3))) such that [P_1 + P_1 - ∞- 0] generates the free part of the Mordell–Weil group of J_0(37)(), and we have the points 𝒫_1,0{P_1, P_1} and 𝒫_0,1{∞, w( 0)}. 
Finally, for any (a,b) ∈×/3∖{(0,0)}, there is a point 𝒫_a,b∈^2(X_0(37)()) defined by the unique effective degree 2 divisor P such that P - ∞ - 0∼ a𝒫_1,0 + b𝒫_0,1 - (a+b)(∞ + 0) for any lift of b to . § THANKS This paper expands the 45-minute talk that B.M. gave at the conference at the IAS (Talks Celebrating the Ogg Professorship in Mathematics - October 13, 2022). We are grateful to Barinder Banwait, Francesca Bianchi, Maarten Derickx, Netan Dogra, Minhyong Kim, Steffen Müller, Filip Najman, Ken Ribet, and Preston Wake for their illuminating comments. Thanks also to Netan Dogra for providing the appendix on Bring's curve. Thanks to the organizers in the IAS for organizing and hosting the conference in Andrew Ogg's honor, and thanks to Andrew for inspiring all of us. The research for this paper was partially supported by NSF grant DMS-1945452, the Clare Boothe Luce Professorship (Henry Luce Foundation), Simons Foundation grant no. 550023 (J.S.B.), and NSF grant DMS-2152149 (B.M.). § QUADRATIC POINTS ON BRING'S CURVE, BY NETAN DOGRA We consider Bring's curve, the smooth projective genus 4 curve X in ℙ^4 given as the common zeros of the following system of equations: x_1 + x_2 + x_3 + x_4 + x_5 = 0 x_1^2 + x_2^2+ x_3^2 + x_4^2 + x_5^2 = 0 x_1^3 + x_2^3+ x_3^3 + x_4^3 + x_5^3 = 0. From the quadratic defining equation of Bring's curve, we see that X() = ∅, so we have that X() = ∅. However, considering the curve instead over K = (i), we see several K-rational points: for instance, all permutations of the coordinates of the points (1: ± i: -1: ∓ i: 0) are in X((i)). Could there possibly be more points? The only quadratic points on Bring's curve are over (i), and up to permutation of coordinates, they are (1: ± i: -1: ∓ i: 0). The automorphism group of X is the symmetric group S_5, given by permutation of the five coordinates. Using the action of S_5 on X, one can see that the Jacobian J of X is isogenous to E^4 <cit.>, where E is the rank zero elliptic curve with LMFDB label https://www.lmfdb.org/EllipticCurve/Q/50/a/350.a3. Since Bring's curve is not hyperelliptic, the map ^2(X) ↪^0(X) is injective, and since ^0 (X)(ℚ) is finite it follows that there are only finitely many quadratic points on Bring's curve. There is also a simple description of a map ^2 (X)→ E^4 with finite fibers. The quotient of Bring's curve by the involution swapping two coordinates is isomorphic to the curve E': x^3 +y^3 +1 + x^2 y +y^2 x + x^2 +y^2 +xy+x+y=0 by projecting the three non-permuted coordinates to ℙ^2. This is isomorphic to the elliptic curve E:y^2 +5x^3+5x^2+4 = 0 (LMFDB label https://www.lmfdb.org/EllipticCurve/Q/50/a/350.a3) via (x,y)↦( 2/1+2x+2y,4(y-x)/1+2x+2y) . We have E() = {∞, (-2,± 4)}. The S_3-action on E' corresponds to the action of E( ) and -1 on E. Now fix a quadratic point P=(x_0 :x_1 :x_2 :x_3 :1 ) on Bring's curve. Up to an S_3 permutation, we may assume it maps to ∞ in E after quotienting by the involution switching x_0 and x_1. Suppose σ generates the Galois group of the field of definition of P. Let x=x_2 and y=x_3. Then y-x/1+2x+2y=-σ y-σ x/1+2σ x+2σ y. This reduces to the equation y+4 y = x +4 x. Thus quadratic points on Bring's curve are 5-tuples (x_1 :x_2 :x_3 :x_4 :x_5 ) of quadratic points in ℙ^5 satisfying, for all i_1 ,i_2 ,i_3 ⊂{1 ,2,3,4,5}, ∏ _σ∈ S_3 ( (x_i_σ (1)/x_i_σ (3))+4 (x_i_σ (1)/x_i_σ (3))- (x_i_σ (2)/x_i_σ (3))-4 (x_i_σ (2)/x_i_σ (3)))=0. 
Up to the S_5-action, we may reduce to finding tuples (x_1, x_2, x_3, x_4) defining a quadratic point (x_1 : x_2 : x_3 : x_4 : 1) on Bring's curve and satisfying Tr(x_1) + 4 Nm(x_1) = Tr(x_2) + 4 Nm(x_2), and either Tr(x_1) + 4 Nm(x_1) = Tr(x_3) + 4 Nm(x_3), Tr(1/x_1) + 4 Nm(1/x_1) = Tr(x_3/x_1) + 4 Nm(x_3/x_1), or Tr(1/x_3) + 4 Nm(1/x_3) = Tr(x_1/x_3) + 4 Nm(x_1/x_3). Writing each quadratic point x_i = u_i + w_i, where u_i and w_i lie in the plus and minus eigenspaces for the Galois involution, these equations define a finite scheme over ℚ, and one may check that its rational points correspond exactly to the quadratic points in the statement of the proposition. bib 1 D. Abramovich and J. Harris, Abelian varieties and curves in W_d (C), Compositio Math. 78 227-238 (1991) 1.1 N. Adžaga, V. Arul, L. Beneish, M. Chen, S. Chidambaram, T. Keller, B. Wen, Quadratic Chabauty for Atkin–Lehner Quotients of Modular Curves of Prime Level and Genus 4, 5, 6 <https://arxiv.org/abs/2105.04811>, to appear, Acta Arithmetica. 1.1a N. Adžaga, T. Keller, P. Michaud-Jacobs, F. Najman, E. Ozman, B. Vukorepa, Computing Quadratic Points on Modular Curves X_0(N), <https://arxiv.org/abs/2303.12566>, 2023. 1.2 V. Arul, S. Müller, Rational points on X_0^+(125), <https://arxiv.org/pdf/2205.14744.pdf>, to appear, Edixhoven memorial volume of Expositiones Mathematicae. 1.3 J.S. Balakrishnan, A. J. Best, F. Bianchi, B. Lawrence, J.S. Müller, N. Triantafillou, J. Vonk, Two recent p-adic approaches towards the (effective) Mordell conjecture. in Arithmetic L-functions and differential geometric methods, 2021, 31–74. 4.2 J.S. Balakrishnan, N. Dogra, J.S. Müller, J. Tuitman, and J. Vonk. Explicit Chabauty-Kim for the split Cartan modular curve of level 13. Ann. of Math. (2), 189(3), 2019. 2 J.S. Balakrishnan, N. Dogra, J.S. Müller, J. Tuitman, J. Vonk, Quadratic Chabauty for modular curves: Algorithms and examples <https://arxiv.org/abs/2101.01862>, to appear, Compositio Mathematica. 3 J.S. Balakrishnan, A. Besser, F. Bianchi, J.S. Müller, Explicit quadratic Chabauty over number fields, Israel Journal of Mathematics, (2021), 1–48. 4 J.S. Balakrishnan, N. Dogra (with an appendix by J.S. Müller), Quadratic Chabauty and rational points I: p-adic heights, Duke Mathematical Journal, 167, no. 11 (2018), 1981-2038. BMcode J.S. Balakrishnan, B. Mazur, SageMath code, <https://github.com/jbalakrishnan/QC_bielliptic>, 2023. bmAWS J.S. Balakrishnan, J.S. Müller, Computational tools for quadratic Chabauty, Arizona Winter School Lecture notes 2020. 5 B.S. Banwait, Explicit isogenies of prime degree over quadratic fields, International Mathematics Research Notices, rnac134, <https://doi.org/10.1093/imrn/rnac134> (2022) <https://arxiv.org/abs/2101.02673> BM B.S. Banwait, M. Derickx, Explicit isogenies of prime degree over number fields (2022), <https://arxiv.org/abs/2203.06009> BNP B.S. Banwait, F. Najman, O. Padurariu, Cyclic isogenies of elliptic curves over fixed quadratic fields (2022), <https://arxiv.org/abs/2206.08891> 6 F. Bars, Bielliptic Modular Curves, Journal of Number Theory 76, 154-165 (1999) 6.1 A. Besser, J.S. Müller, P. Srinivasan, p-adic adelic metrics and quadratic Chabauty I, <arXiv:2112.03873>, 2021. bianchi F. Bianchi, Quadratic Chabauty for (bi)elliptic curves and Kim's conjecture, Algebra & Number Theory 14(9): 2369-2416 (2020). bianchi-padurariu F. Bianchi, O. Padurariu, Rational points on rank 2 genus 2 bielliptic curves in the LMFDB, <arXiv:2212.11635>, 2022. 7 J. Box, Quadratic points on modular curves with infinite Mordell-Weil group, Mathematics of Computation 90 (2021), 321–343. 
bgg J. Box, S. Gajović, P. Goodman, Cubic and quartic points on modular curves using generalised symmetric Chabauty, International Mathematics Research Notices, Volume 2023, Issue 7, March 2023, 5604-5659, <https://doi.org/10.1093/imrn/rnab358> 8 P. Bruin, F. Najman, Hyperelliptic modular curves and isogenies of elliptic curves over quadratic fields, LMS Journal of Computation and Mathematics (2015) CHM L. Caporaso, J. Harris, B. Mazur Corrections to Uniformity of rational points and further comments, <https://arxiv.org/abs/2012.14461> CHM1 L. Caporaso, J. Harris, and B. Mazur, Uniformity of rational points. J. Amer. Math. Soc., 10 1-5 (1997) Col82 R.F. Coleman, Dilogarithms, regulators, and p-adic L-functions. Invent. Math., 69(2):171 – 208 (1982). Col85a R.F. Coleman, Torsion points on curves and p-adic abelian integrals. Ann. of Math. (2), 121(1):111–168 (1985). Col85b R.F. Coleman, Effective Chabauty. Duke Math. J., 52(3): 765–770 (1985). colemangross R.F. Coleman and B.H. Gross. p-adic heights on curves. In Algebraic number theory, volume 17 of Adv. Stud. Pure Math., pages 73–81. Academic Press, Boston, MA, 1989. CES B. Conrad, B. Edixhoven, and W. Stein, J_1(p) has connected fibers. Doc. Math. 8 331-408 (2003). Cor19 David Corwin. From Chabauty's method to Kim's non-abelian Chabauty's method. 2019. <https://math.berkeley.edu/ dcorwin/files/ChabautytoKim.pdf> D M. Derickx, Torsion points on elliptic curves over number fields of small degree. Several variations of Kamienny's criterion, <https://wstein.org/wiki/attachments/seminar(2f)nt(2f)20110318/slides.pdf> DEHMZ M. Derickx, A. Etropolski, M. van Hoeij, J. S. Morrow, D. Zureick-Brown, Sporadic Cubic Torsion, Algebra & Number Theory 15 (7) 1837 – 1864 (2021). EL B. Edixhoven and G. Lido. Geometric quadratic Chabauty. Journal of the Institute of Mathematics of Jussieu, 2021, 1–55. G1 S.D. Galbraith. Rational points on X^+_0(p) Experiment. Math.,8 (4) 311-318 (1999) G2 S.D. Galbraith. Rational points on X^+_0(N) and quadratic -curves. J. Théor. Nombres Bordeaux, 14(1) 205-219 (2002) GLQ J. Gonzalez, J-C. Lario, and J. Quer, Arithmetic of ℚ-curves, Progress in Mathematics, 224, 125-139 (2004) 9 S. Kamienny, Torsion points on elliptic curves over all quadratic fields, Duke Mathematical Journal, 53 157-162 (1986) 10 S. Kamienny, Torsion points on Elliptic Curves over all quadratic fields II, Bull. Soc. Math. de France. bf 114 (1986) 119-122 11 S. Kamienny, Torsion points on elliptic curves. Proceedings of the Conference on Number Theory, (1991) 12-15, GH Essen Preprint Series (G. Frey, ed.) (1991). 12 S. Kamienny, B.Mazur, Rational torsion of prime order in elliptic curves over number fields Astérisque, 228 (1995) 81-98 <http://www.numdam.org/item?id=AST_1995__228__81_0> kzb E. Katz, D. Zureick-Brown, The Chabauty-Coleman bound at a prime of bad reduction and Clifford bounds for geometric rank functions. Compos. Math. 149 (2013), no. 11 1818–1838 19 M. A. Kenku, The modular curve X_0(39) and rational isogeny, Mathematical Proceedings of the Cambridge Philosophical Society, 85, Cambridge University Press, (1979) 21-23. 20 M. A. Kenku, The modular curve X_0(169) and rational isogeny, Journal of the London Mathematical Society 2 (1980), no. 2, 239-244. 21 M. A. Kenku, The modular curves X_0(65) and X_0(91) and rational isogeny, Mathematical Proceedings of the Cambridge Philosophical Society, 87 Cambridge University Press (1980) 15-20. 22 M. A. 
Kenku, On the modular curves X_0(125), X_0(25), and X_0(49), Journal of the London Mathematical Society 2 (1981), no. 3, 415-427 12.5 M. A. Kenku, F. Momose, Torsion points on elliptic curves defined over quadratic fields, Nagoya Math. J. 109 (1988) 125-149 kim M. Kim, The unipotent Albanese map and Selmer varieties for curves. Publ. RIMS, 45:89 – 133, 2009. kimp1 M. Kim, The motivic fundamental group of 𝐏^1∖{0, 1,∞} and the theorem of Siegel. Invent. Math., 161:629 – 656, 2005. kimmassey M. Kim, Massey products for elliptic curves of rank 1. J. Amer. Math. Soc., 23(3):725 – 747, 2010. Kubert-Lang D. S. Kubert, S. Lang, Modular Units (Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], Springer-Verlag, New York, NY, 1981), vol. 244. L D. Lorenzini, Torsion points on the modular Jacobian J_0(N). Compos. Math. 96, 149-172 (1995). MT B. Mazur, J. Tate, Points of order 13 on elliptic curves, Invent. Math. 22 41-49 1973. MSw B. Mazur, P. Swinnerton-Dyer, Arithmetic of Weil Curves, lnventiones math. 25 1-61 (1974) M0 B. Mazur, Rational points on modular curves. Proceedings of a conference on modular functions held in Bonn 1976. Lecture Notes in Math. 601 Berlin-Heidelberg-New York: Springer (1977) M1 B. Mazur, Modular curves and the Eisenstein ideal, Publ. Math. IHES 47 33-186 (1977) M2 B. Mazur, Rational Isogenies of Prime Degree, Inventiones math. 44 129-162 (1978) MR1 B. Mazur, K. Rubin (With an appendix by Michael Larsen) Diophantine stability Amer. J. Math. 140 571- 616 (2018) <https://doi-org.jproxy.lib.ecu.edu/10.1353/ajm.2018.0014> Mer L. Merel, Bornes pour la torsion des courbes elliptiques sur les corps de nombres, Inventiones Mathematicae 124 437-449 (1996), 13 P. Michaud-Rodgers, Quadratic points on non-split Cartan modular curves, International Journal of Number Theory 18 (2022), no. 2, 245-26. NV F. Najman, B. Vukorepa, Quadratic points on bielliptic modular curves, Mathematics of Computation, to appear. nekovar J. Nekovář, On p-adic height pairings. In Séminaire de Théorie des Nombres, Paris 1990-1991, pages 127– 202. Birkhäuser, 1993. 14 A. P. Ogg, Hyperelliptic modular curves, Bulletin de la S. M. F., 102 449-462 (1974) 14.1 A. P. Ogg, Diophantine equations and modular forms. Bull. Amer. Math. Soc. 81 14-27 (1975) 14.2 A. P. Ogg, On the cusps of Γ_0(N), Proceedings of the Number Theory Conference (Univ. Colorado, Boulder, Colo., 173-177 (1972) 14.3 A. P. Ogg, Rational points of finite order on elliptic curves. Invent. Math. 12 105-111 (1971) O1 M. Ohta, Eisenstein ideals and the rational torsion subgroups of modular Jacobian varieties J. Math. Soc. Japan 65 733-772 (2013). O2 M. Ohta, Eisenstein ideals and the rational torsion subgroups of modular Jacobian varieties II.Tokyo J. Math. 37 273-318 (2014). 15 E. Ozman, S. Siksek, Quadratic Points on Modular Curves, Math. Comp. 88 (2019), 2461 – 2484. P P. Parent, Bornes effectives pour la torsion des courbes elliptiques sur les corps de nombres, Journal für die reine und angewandte Mathematik (Crelle's Journal) (1996) R1 Y. Ren, Rational torsion subgroups of modular Jacobian varieties. J. Number Theory 190, 169-186 R2 Y. Ren, Quadratic torsion subgroups of modular Jacobian varieties, Israel Journal of mathematics, 245 675-710 (2021), RW K. Ribet, P. Wake, Another look at rational torsion of modular Jacobians, PNAS 2022 119 No. 41 <https://doi.org/10.1073/pnas.2210032119> 17 N. Schappacher and R. Schoof, B. 
Levi and the Arithmetic of Elliptic Curves, The Mathematical Intelligencer 18, 57-69 (1996) <https://irma.math.unistra.fr/ schappa/NSch/Publications_files/1996_RSchNSch.pdf> serre-galois J.-P. Serre. Topics in Galois theory, Volume 1 of Research Notes in Mathematics. Jones and Bartlett Publishers, Boston, MA, 1992. Lecture notes prepared by Henri Damon, With a foreword by Darmon and the author. sikseksymm S. Siksek, Chabauty for symmetric powers of curves, Algebra & Number Theory 3 (2009), no. 2, 209-236. siksek S. Siksek, Quadratic Chabauty for modular curves, <https://arxiv.org/pdf/1704.00473.pdf>, 2017. Si J. Silverman, The Arithmetic of Elliptic Curves, Springer-Verlag (1986) 16 A. V. Sutherland, Torsion subgroups of elliptic curves over number fields, <https://math.mit.edu/ drew/MazursTheoremSubsequentResults.pdf> takagi T. Takagi, The cuspidal class number formula for the modular curves X_0(M) with M square-free. J. Algebra 193, 180-213 (1997). Tr A. Trbović, Torsion groups of elliptic curves over Quadratic fields ℚ(√(d)), 0 < d < 100 (2018) <https://arxiv.org/pdf/1806.05993> Y H. Yoo, Rational torsion points on Jacobians of modular curves. Acta Arith. 172 299-304 (2016). Zan U. Zannier, Torsion in algebraic groups and problems which arise, To appear. zhang S. Zhang, Small points and adelic metrics. J. Algebraic Geom., 4(2):281-300, 1995.
http://arxiv.org/abs/2307.04801v1
20230710235330
Metastability exchange optical pumping of $^3$He at low pressure and high magnetic field
[ "X. Li", "J. D. Maxwell", "D. Nguyen", "J. Brock", "C. D. Keith", "R. G. Milner", "X. Wei" ]
physics.ins-det
[ "physics.ins-det", "nucl-ex" ]
MIT]X. [email protected] JLab]J. D. Maxwell JLab]D. Nguyen JLab]J. Brock JLab]C. D. Keith MIT]R. G. Milner JLab]X. Wei [MIT]organization=Laboratory for Nuclear Science, Massachusetts Institute of Technology, city=Cambridge, state=MA 02139, country=USA [JLab]organization=Thomas Jefferson National Accelerator Facility, city=Newport News, state=VA 23606, country=USA [cor1]Corresponding author. Systematic studies on metastability exchange optical pumping of ^3He nuclei have been performed at Jefferson Lab using a 1-torr sealed cell at magnetic fields from 2 to 4 T. The effects of the discharge intensity, pump laser power, and pumping transition schemes on achievable nuclear polarization and pumping rate have been investigated. A maximum steady-state nuclear polarization of about 75% has been obtained. This work provides a baseline for the development of the novel polarized ^3He target for CLAS12 at Jefferson Lab. Polarized helium-3 Metastability exchange optical pumping High magnetic field Metastability exchange optical pumping of ^3He at low pressure and high magnetic field [ October 2023 ====================================================================================== § INTRODUCTION Nuclear spin-polarized ^3He is a powerful effective polarized neutron target which plays a significant role in the studies on neutron spin structure. Spin-polarized ^3He gas targets have been successfully implemented in scattering experiments at MIT-Bates <cit.>, SLAC <cit.>, DESY <cit.>, Mainz <cit.>, HIGS <cit.> and JLab <cit.> using either the metastability exchange optical pumping (MEOP) <cit.> or the spin exchange optical pumping (SEOP) <cit.> technique. The MEOP approach utilizes 1083-nm circularly polarized laser light to produce nuclear polarization in metastable-state ^3He atoms via optical pumping and hyperfine coupling. The polarization is then transferred to the ground-state ^3He nuclei through metastability-exchange collisions. As MEOP is performed at mbar-scale pressure and room temperature, usually the low-temperature <cit.> or compression <cit.> technique is used to increase the target thickness. The SEOP method involves a mixture of alkali-metal atoms with ^3He gas where the alkali atoms are optically pumped with 795-nm laser light and the electronic polarization is passed onto ^3He nuclei via spin-exchange collisions. SEOP operates at higher pressures, typically bar scale, and therefore is more favorable in high-luminosity scattering experiments. See comprehensive reviews on optical pumping techniques of polarized ^3He in <cit.>. Both aforementioned techniques are normally performed in low magnetic fields (on the order of 10^-3 T). At higher fields, SEOP fails for the increased wall relaxation and MEOP was considered to be less efficient due to weakened hyperfine coupling, largely limiting the implementation of polarized ^3He in high-field experimental apparatus. The recent development of high-field MEOP technique in the last two decades <cit.> has opened up opportunities for new physics projects using nuclear spin-polarized ^3He to study the fundamental quark and gluon dynamics inside the nucleon and nucleus. At Brookhaven National Lab (BNL), development on polarized ^3He ion source within the 5-T solenoid at the Electron Beam Ion Source for the future electron-ion collider is currently underway at the Relativistic Heavy Ion Collider <cit.>. 
At Jefferson Lab (JLab), a new physics project of spin-dependent electron scattering from polarized ^3He using the CLAS12 spectrometer has been approved in Hall B <cit.>. A conceptual design for a novel polarized ^3He target has been proposed <cit.>, aiming to produce polarized ^3He inside the 5 T solenoid of CLAS12. Recently, a new high-field MEOP system for polarized ^3He has been established at JLab to systematically study the effect of the discharge intensity, pump laser power, and optical-pumping-transition schemes to the key parameters for ^3He polarization. In this work, we will report the major findings from such systematic studies. § HIGH-FIELD MEOP The production of ^3He nuclear polarization using MEOP involves optical pumping of ^3He in the metastable state and metastability-exchange collisions. A radio-frequency (RF) signal is employed to induce electrical plasma discharge in ^3He gas and excite a small population of ^3He atoms from the ground state to the 2^3S_1 metastable state. The 2^3S_1 – 2^3P optical pumping transition is then driven by circularly polarized 1083-nm laser light. Atoms in the 2^3P state are brought back to the 2^3S_1 state by spontaneous or stimulated emissions. The optical pumping process gives rise to the electronic polarization in the metastable-state ^3He atoms, which is then partially passed to the ^3He nuclei by hyperfine interaction. Finally the nuclear polarization in metastable-state ^3He is transferred to the ground-state ^3He via metastability-exchange collisions. The Zeeman sublevels of the 2^3S_1 and 2^3P states of ^3He significantly differ between low and high magnetic fields, and thence the 1083-nm optical pumping transitions. This results in different optical-pumping and polarimetry approaches for low- and high-field MEOP. In a low field, the C_8 and C_9 transition lines are adopted to promote the metastable-state ^3He to the 2^3P state (see Fig. 14 in Ref. <cit.>) and the nuclear polarization of ^3He is measured by observing the circular polarization of the 668-nm light emitted by the discharge <cit.>. In a high magnetic field (B ≳1.5 T), four pumping schemes can be used for the 2^3S_1 – 2^3P transitions (see Fig. 1 in Ref. <cit.>), in this paper denoted as f_2^± and f_4^± where the subscript indicates the number of unresolved transition lines of the pumping scheme and + (-) represents the σ^+ right-handed (σ^- left-handed) circular polarization of the 1083-nm pump light. For each pumping scheme, a separate pair of well resolved transition lines (probe doublet) of which the 2^3S_1 sublevels are not addressed by the pumping lines can be used for optical polarimetry. In such polarimetry approach, a probe laser is directed to the ^3He with periodical sweeping frequency over the probe doublet. The nuclear polarization of ^3He M is inferred by measuring the absorption coefficients a_1 and a_2 (a_1^0 and a_2^0) for the probe doublet as the ^3He is polarized (unpolarized, M=0) and a_2/a_1/a_2^0/a_1^0=1+M/1-M, the derivation of which can be found in Section 2 of  <cit.>. Fig. <ref> shows the measured absorption spectra for the σ^+ and σ^- 1083-nm light at magnetic fields from 2 to 4 T. The pump and probe peaks are subjected to Doppler broadening at room temperature and 1-torr pressure. Note that the degree of circular polarization of the 1083-nm light is not highly critical for the high-field MEOP as the σ^+ and σ^- lines are well resolved due to the enhanced Zeeman splitting in high fields. 
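For reference, the polarimetry relation above inverts to an explicit formula for M: writing r = (a_2/a_1)/(a_2^0/a_1^0), one gets M = (r-1)/(r+1). A minimal sketch follows (the function name and the example numbers are ours, purely illustrative).

```python
# Minimal sketch of the optical polarimetry relation quoted above:
# (a2/a1) / (a2_0/a1_0) = (1 + M) / (1 - M)  =>  M = (r - 1) / (r + 1).
def nuclear_polarization(a1, a2, a1_0, a2_0):
    r = (a2 / a1) / (a2_0 / a1_0)   # ratio of the probe-doublet absorption ratios
    return (r - 1.0) / (r + 1.0)    # nuclear polarization M

# Example: a doublet ratio 7x its M = 0 calibration value corresponds to M = 0.75.
print(nuclear_polarization(a1=1.0, a2=7.0, a1_0=1.0, a2_0=1.0))  # 0.75
```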
§ EXPERIMENTAL SETUP AND METHOD The schematic layout of the experimental apparatus is shown in Fig. <ref>. The ^3He gas cell and all optical components are enclosed in a laser-tight enclosure which is geometrically a box (59 cm in length, 43 cm in width, and 33 cm in height) attached to a cylindrical volume (62 cm in length and 10 cm in diameter). The inner walls of the enclosure as well as all surfaces of the optical parts are darkened to minimize light reflection. The ^3He glass cell is located near the end of the cylindrical volume, which is inserted into the warm bore of a superconducting magnet. The setup includes i) the optical pumping system consisting of the magnetic field, the ^3He gas cell, the RF electrodes to generate discharge plasma in ^3He gas, and the pump laser and related optics, and ii) the optical polarimeter consisting of the probe laser and the photodiode. §.§ Optical pumping system The superconducting magnet (FROST) provides a homogeneous magnetic field up to 5 T within the central area of its 76-cm-long and 13-cm-diameter warm bore. In this work, the magnet is operated at 2, 3, and 4 T for the high-field MEOP tests of polarized ^3He. Pure ^3He gas of 1-torr pressure is sealed in a cylindrical borosilicate glass cell 5 cm in length and 5 cm in diameter. Electrical plasma discharge is induced by the electrodes spirally wound around the outer surface of the cell wall. A 41-MHz RF signal is generated by an SRS generator (SG382), amplified by an RF amplifier, and tuned with a radio transformer before being sent to the electrodes. A Keopsys continuous-wave ytterbium-fiber laser system provides linearly polarized laser light with a tunable frequency range of about 100 GHz, a nominal bandwidth of 2 GHz, and an output power range of 3 – 10 W. The pumping light is delivered to the laser enclosure with an optical fiber and then passes through a linearly polarizing beamsplitter cube followed by a quarter-wave plate to ensure circular polarization, and finally guided by a lens to illuminate the full volume of the ^3He cell. A broadband mirror (750 – 1100 nm) is mounted downstream of the cell to enhance the pumping power with the reflected laser light back to the cell. §.§ Optical polarimeter Taking advantage of the light absorption technique introduced in <cit.>, the optical polarimetry adopts the design as in <cit.> with slight modification. The probe laser light is produced by a Toptica laser system (DFB pro L-33508) with a tuning wavelength range of 1080.6 – 1084.2 nm and an output power of 70 mW. The frequency of the laser can be tuned by changing either the diode temperature or the operating current. The full range of probe laser frequency is explored by scanning the temperature to obtain the absorption spectrum for all pumping and probe peaks as shown in Fig. <ref>. Then the current is swept for a smaller frequency range to map the two absorption peaks of the probe doublet. An iris aperture is installed in front of the probe entrance inside the laser enclosure to adjust the probe laser power delivered to the cell. The probe laser beam is incident on the cell at a small angle (∼5) with respect to the pumping light propagating direction, then reflected from the mirror downstream of the cell and finally detected by a photodiode (Thorlabs DET36A2) which is collimated to reduce the signal background from the reflected pumping light. 
To better isolate the probe signal received in the photodiode, the RF discharge is amplitude modulated from the SRS signal generator at 1 kHz by 50% modulation depth, which is taken as the reference for the lock-in amplifier (SRS SR860). The lock-in amplifier signal is read to the computer using a Python program where the measured spectrum for the probe doublet is fitted with two side-by-side Gaussian peaks on top of a linear function accounting for the background from the pump laser light as well as the linear shift in probe laser power caused by frequency sweeps. The absorption coefficients a_1 and a_2 for the two probe peaks are extracted as the fitted amplitudes of the two Gaussian functions. The calibration measurement is taken at the beginning of each measurement cycle before the pump laser is turned on to obtain the absorption coefficients a_1^0 and a_2^0 for null polarization. The nuclear polarization M is determined from the measured value of a_1, a_2, a_1^0, and a_2^0 using Eq. <ref>. § RESULTS A typical optical pumping and relaxation cycle for the polarization measurement at 2 T using the f_4^- pumping scheme is shown in Fig. <ref>. The sweeps for the probe laser frequency over the probe doublet periodically and continuously proceed during the measurement cycle. Each polarization data point in Fig. <ref> is obtained from one full period of the sweep which typically takes about 14 s. The pump laser is turned on at 0 s and nuclear polarization of ^3He starts to build up as an exponential function of time, M(t) = M_s(1-e^-t/T_b), where M_s is the steady-state polarization and T_b is the build-up time constant. Following the convention in <cit.>, the build-up rate, or effective pumping rate (pumping rate for short throughout this paper), R is defined as R = NM_s/T_b, where N is the total number of atoms in the cell. Then the pump laser is turned off at 770 s to measure the relaxation process with the discharge on. The relaxation time T_r is determined by fitting the relaxation data with an exponential decay function. M_s, T_b, and T_r were measured with B-field magnitudes of 2, 3, and 4 T to study the effects of the discharge intensity, pump laser power, and different optical pumping transition schemes on high-field MEOP performance. The results will be presented and discussed in the following subsections. §.§ Discharge intensity The influence of the discharge condition on obtainable ^3He nuclear polarization is two fold. On one hand, MEOP relies on the existence of the metastable-state ^3He atoms which are produced by the RF discharge. The intensity of the discharge and its spatial distribution relative to the pump laser light directly determine the pumping rate. On the other hand, discharge can lead to spin depolarization which is the major relaxation mechanism competing against the optical pumping process and hence affects the steady-state polarization of ^3He nuclei. Generally, more intense discharge results in a stronger depolarization effect. The distribution and intensity of the discharge are correlatively determined by the frequency and amplitude of the RF signal, the electrode configuration, and the magnetic holding field. The overall intensity of the discharge can be quantitatively controlled by varying the voltage amplitude of the RF signal and can be characterized by the relaxation time constant determined from discharge-on relaxation measurements. Longer relaxation time indicates weaker discharge within the cell. 
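To make the build-up analysis above concrete, the following is an illustrative sketch (ours, using synthetic data; it is not the analysis code used for the measurements) of fitting M(t) = M_s(1 - e^{-t/T_b}) and forming the pumping rate R = N M_s / T_b.

```python
# Illustrative fit of the polarization build-up model to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def build_up(t, M_s, T_b):
    return M_s * (1.0 - np.exp(-t / T_b))

t = np.arange(0.0, 770.0, 14.0)        # pump on at 0 s, off at ~770 s; one probe sweep ~ 14 s
rng = np.random.default_rng(0)
M_meas = build_up(t, 0.75, 120.0) + rng.normal(0.0, 0.01, t.size)  # fake data; T_b value arbitrary

(M_s, T_b), _ = curve_fit(build_up, t, M_meas, p0=(0.5, 100.0))

# N: total number of 3He atoms in the sealed cell (1 torr, 5 cm long x 5 cm diameter, room temp).
P, V, T_gas = 133.3, np.pi * 0.025**2 * 0.05, 293.0     # SI units
N = P * V / (1.380649e-23 * T_gas)                      # ~3e18 atoms
print(f"M_s = {M_s:.3f}, T_b = {T_b:.1f} s, R = {N * M_s / T_b:.2e} atoms/s")
```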
In this work, the discharge intensity was varied by fine tuning the output voltage of the RF generator and the transmitter. Figure <ref> shows the steady-state nuclear polarization and the pumping rate measured at different discharge intensity levels, represented by the relaxation time, in magnetic fields of 2, 3, and 4 T. The optical pumping were performed with the f_4^± transition schemes and the output pump laser power was 3 W. A saturation in the steady-state polarization is observed as the relaxation time prolongs. A trend of decreasing pumping rate and hence suppressed nuclear polarization with increasing magnetic field is obvious. The results are in reasonable agreement with those in <cit.>, which were obtained with the same ^3He cell. The discrepancy in saturation level could be rooted in the use of different superconducting magnets with different transverse B field gradients and adoption of different electrode schemes which would in turn affect the discharge condition. To benchmark the depolarization effect resulted from the transverse field gradient, we measured the relaxation times of 2800 – 3800 s with the discharge turned off. §.§ Pump laser power The influence of the pump laser power to the steady-state nuclear polarization and the pumping rate was evaluated. The setting range for the power output of the Keopsys laser is 3 – 10 W. The laser light is attenuated by the linearly polarizing beamsplitter cube and the attenuation factor depends on the relative angle between the linear polarization plane of the output laser and that of the beamsplitter cube. By rotating the fiber mount around the propagation direction of the pump laser light, the on-cell laser power can be tuned between 0 and 3 W. The actual laser power coming out of the quarter-wave plate was measured with a power meter and was recorded accordingly to the setting value of the power output and the rotating angle of the fiber mount. Fig. <ref> shows the dependence of the steady-state nuclear polarization and the pumping rate on laser power for the f_4^- transition scheme at 2 T. An on-cell power as low as about 2.5 W is sufficient to reach the saturation of the attainable polarization. §.§ Pumping transition scheme Figure <ref> shows the maximum steady-state polarization achieved with the four optical pumping transition schemes described in Section <ref>. The measurements were performed at 2, 3, and 4 T with a pump laser power output of 3 W. The results for all three magnetic fields consistently show that the f_4^± schemes yield considerably higher nuclear polarization than the f_2^± schemes. The σ^+ and σ^- of the pumping light for either f_2 or f_4 schemes does not give apparent difference in the steady-state polarization taking into account the measurement uncertainties. The full results including the extracted pumping rate and relaxation time are tabulated in Table <ref>. §.§ Uncertainties The systematic uncertainties in the measurements for nuclear polarization mainly come from the following three aspects. i) The unsteadiness of the discharge light, which is dependent on the discharge level and the holding field, causes noise in the photodiode signal. ii) The background light from the pump laser contributing to the photodiode signal which might not be completely addressed by the linear function in the fitting process. Both i) and ii) lead to uncertainties in the extraction for the probe-absorption coefficients. 
iii) Residual non-zero nuclear polarization might exist in the calibration runs for a_1^0 and a_2^0 which may result in a baseline offset in the measured nuclear polarization. These three factors give total uncertainties of 2 – 4% in the measured nuclear polarization M. In addition, the selection of the data range for the exponential fitting introduces uncertainties in the extraction of M_s, T_b and T_r, particularly prominent for T_b. The total uncertainties assigned for M_s, T_b and T_r are about 4%, 5%, and 4%, respectively. The resultant total uncertainty for R is about 6% given that the uncertainty in the total number of atoms in the cell is negligible. § CONCLUSION We present the first series of tests on MEOP at JLab for polarized ^3He in high magnetic fields. The experiments have studied the dependence of discharge intensity and pump laser power for the attainable steady-state nuclear polarization and pumping rate, and indicated the optimal optical-pumping scheme for the 1-torr gas. This work has reproduced and extended the earlier high-field MEOP results at BNL for the polarized ^3He ion source for the EIC and serves as the baseline for the development for a novel polarized ^3He gas target for CLAS12 at JLab. Ongoing investigations on the B-field uniformity of the FROST magnet, the adoption of the electrode scheme, and the pressure dependence at JLab will provide further input to the upcoming prototyping for the new double-cell cryogenic ^3He target. § ACKNOWLEDGMENTS We thank the JLab target group for mechanical support. We are grateful for valuable discussions and support from Pierre-Jean Nacher and his colleagues at Laboratoire Kastler Brossel, Paris, France, and from Thomas Gentile at the National Institute of Standards and Technology, Gaithersburg, Maryland. We acknowledge the support of Nathan Isgur Fellowship. This research is supported by the U.S. Department of Energy Office of Nuclear Physics to the Massachusetts Institute of Technology under grant number DE-FG02-94ER40818 and to the Jefferson Lab under grant number DE-AC05-06OR23177. Jones:1993hg C. E. Jones, E. J. Beise, J. E. Belz, R. W. Carr, B. W. Filippone, W. Lorenzon, R. D. McKeown, B. A. Mueller, T. G. O'Neill and G. W. Dodson, et al. ^3He (e, e') quasielastic asymmetry, Phys. Rev. C 47, 110-130 (1993), <https://doi.org/10.1103/PhysRevC.47.110>. Johnson:1994cq J. R. Johnson, A. K. Thompson, T. E. Chupp, T. B. Smith, G. D. Cates, B. Driehuys, H. Middleton, N. R. Newbury, E. W. Hughes and W. Meyer, The SLAC high density gaseous polarized He-3 target, Nucl. Instrum. Meth. A 356, 148-152 (1995), <https://doi.org/10.1016/0168-9002(94)01465-5>. DeSchepper:1998gc D. DeSchepper, L. H. Kramer, S. F. Pate, K. Ackerstaff, R. W. Carr, G. R. Court, A. Dvoredsky, H. Gao, A. Golendoukhin and J. O. Hansen, et al. The HERMES polarized He-3 internal gas target, Nucl. Instrum. Meth. A 419, 16-44 (1998), <https://doi.org/10.1016/S0168-9002(98)00901-2>. Krimmer:2009zz J. Krimmer, M. Distler, W. Heil, S. Karpuk, D. Kiselev, Z. Salhi and E. W. Otten, A highly polarized He-3 target for the electron beam at MAMI, Nucl. Instrum. Meth. A 611, 18-24 (2009), <https://doi.org/10.1016/j.nima.2009.09.064>. Kramer:2007zzb K. Kramer, X. Zong, D. Dutta, H. Gao, X. Qian, Q. Ye, X. Zhu, R. Lu, T. Averett and S. Fuchs, A high-pressure polarized He-3 gas target for the High Intensity Gamma Source (HIγS) facility at Duke Free Electron Laser Laboratory, Nucl. Instrum. Meth. A 582, 318-325 (2007), <https://doi.org/10.1016/j.nima.2007.08.243>. 
Singh:2010 J. Singh, Alkali-Hybrid Spin-Exchange Optically-Pumped Polarized ^3He Targets Used For Studying Neutron Structure, Ph.D. thesis, University of Virginia (2010), <http://galileo.phys.virginia.edu/research/groups/spinphysics/thesis/singh_thesis_2010.pdf>. Colegrove:1960 F. D. Colegrove, L. D. Schearer and G. K. Walters, Polarization of He^3 Gas by Optical Pumping, Phys. Rev. 132, 2561 (1963), <https://doi.org/10.1103/PhysRev.132.2561>. Bouchiat:1960dsd M. A. Bouchiat, T. R. Carver and C. M. Varnum, Nuclear Polarization in He3 Gas Induced by Optical Pumping and Dipolar Exchange, Phys. Rev. Lett. 5, no.8, 373 (1960), <https://doi.org/10.1103/PhysRevLett.5.373>. Milner:1989 R. G. Milner, R. D. McKeown and C. E. Woodward, A polarized ^3He target for nuclear physics, Nucl. Instrum. Meth. A 274, 56-63 (1989), <https://doi.org/10.1016/0168-9002(89)90365-3>. Eckert:1992 G. Eckert, W. Heil, M. Meyerhoff, E.W. Otten, R. Surkau, M. Werner, M. Leduc, P.J. Nacher and L.D. Schearer, A dense polarized ^3He target based on compression of optically pumped gas, Nucl. Instrum. Meth. A 320, 53-65 (1992), <https://doi.org/10.1016/0168-9002(92)90769-Z>. Walker:1997zzc T. G. Walker and W. Happer, Spin-exchange optical pumping of noble-gas nuclei, Rev. Mod. Phys. 69, 629-642 (1997), <https://doi.org/10.1103/RevModPhys.69.629>. Batz:2011 M. Batz, P.-J. Nacher and G Tastevin, Fundamentals of metastability exchange optical pumping in helium, J. Phys. Conf. Ser. 294, 012002 (2011), <https://dx.doi.org/10.1088/1742-6596/294/1/012002>. Gentile:2016uud T. R. Gentile, P. J. Nacher, B. Saam and T. G. Walker, Optically Polarized ^3He, Rev. Mod. Phys. 89, no.4, 045004 (2017), <https://doi.org/10.1103/RevModPhys.89.045004>. Courtade:2000 E. Courtade, F. Marion, P. Nacher, G. Tastevin, T. Dohnalik and K. Kiersnowski, Spectroscopy of the helium 2 ^3S–2 ^3P transition above 0.01 tesla – application to optical pumping studies, Hyperfine Interact. 127 (1) 451–454 (2000), <https://doi.org/10.1023/A:1012673902661>. Courtade:2002 E. Courtade, F. Marion, P.-J. Nacher, G. Tastevin, K. Kiersnowski and T. Dohnalik, Magnetic field effects on the 1 083 nm atomic line of helium, Eur. Phys. J. D 21, 25–55 (2002), <https://doi.org/10.1140/epjd/e2002-00176-1>. Abboud:2004 M. Abboud, A. Sinatra, X. Maître, G. Tastevin and P.-J. Nacher, High nuclear polarization of ^3He at low and high pressure by metastability exchange optical pumping at 1.5 tesla, Europhys. Lett. 68 (4) 480–486 (2004), <https://doi.org/10.1209/epl/i2004-10237-y>. Abboud:2005 M. Abboud, A. Sinatra, G. Tastevin, P.-J. Nacher and X. Maître, Metastability Exchange Optical Pumping of Helium-3 at High Pressures and 1.5 T: Comparison of two Optical Pumping Transitions, https://doi.org/10.48550/arXiv.physics/0506044arXiv:physics/0506044. Nikiel:2007 A. Nikiel, T. Palasz, M. Suchanek, M. Abboud, A. Sinatra, Z. Olejniczak, T. Dohnalik, G. Tastevin and P.-J. Nacher, Metastability exchange optical pumping of ^3He at high pressure and high magnetic field for medical applications, Eur. Phys. J. Spec. Top. 144, 255–263 (2007), <https://doi.org/10.1140/epjst/e2007-00138-3>. Suchanek:2007 K. Suchanek, M. Suchanek, A. Nikiel, T. Pałasz, M. Abboud, A. Sinatra, P.-J. Nacher, G. Tastevin, Z. Olejniczak and T. Dohnalik, Optical measurement of ^3He nuclear polarization for metastable exchange optical pumping studies at high magnetic field, Eur. Phys. J. Spec. Top. 144 (1) 67–74 (2007), <https://doi.org/10.1140/epjst/e2007-00109-8>. Nikiel-Osuchowska:2013 A. Nikiel-Osuchowska, G. Collier, B. 
Głowacz, T. Pałasz, Z. Olejniczak, W. P. Wglarz, G. Tastevin, P.-J. Nacher and T. Dohnalik, Metastability exchange optical pumping of ^3He gas up to hundreds of millibars at 4.7 Tesla, Eur. Phys. J. D 67, 200 (2013), <https://doi.org/10.1140/epjd/e2013-40153-y>. Maxwell:2018dyf J. D. Maxwell, J. Alessi, G. Atoian, E. Beebe, C. S. Epstein, R. G. Milner, M. Musgrave, A. Pikin, J. Ritter and A. Zelenski, Enhanced polarization of low pressure ^3He through metastability exchange optical pumping at high field, Nucl. Instrum. Meth. A 959, 161892 (2020), <https://doi.org/10.1016/j.nima.2019.02.019>. Zelenski:2023kof A. Zelenski, G. Atoian, E. Beebe, S. Ikeda, T. Kanesue, S. Kondrashev, J. Maxwell, R. Milner, M. Musgrave, M. Okamura, A. A. Poblaguev, D. Raparia, J. Ritter, A. Sukhanov and S. Trabocchi, Optically Pumped Polarized ^3He^++ Ion Source Development for RHIC/EIC, https://doi.org/10.48550/arXiv.2303.10409arXiv:2303.10409. PAC:2020 J.P. Committee, https://www.jlab.org/exp_prog/PACpage/PAC48/PAC48_PrelimReportPlus_FINAL.pdf48th program advisory committee report, 2020. Maxwell:2021ytu J. Maxwell and R. Milner, A concept for polarized ^3He targets for high luminosity scattering experiments in high magnetic field environments, Nucl. Instrum. Meth. A 1012, 165590 (2021), <https://doi.org/10.1016/j.nima.2021.165590>. Pavlovic:1970 M. Pavlović and F. Laloë, Study of a new method for orienting excited atomic levels by optical pumping. Application to the measurement of the hyperfine structure of 1D levels of ^3He, J. Phys. France 31, 173-194 (1970), <http://dx.doi.org/10.1051/jphys:01970003102-3017300>. Gentile:1993 T. R. Gentile and R. D. McKeown, Spin-polarizing ^3He nuclei with an arc-lamp-pumped neodymium-doped lanthanum magnesium hexaluminate laser, Phys. Rev. A 47 456–467 (1993), <http://dx.doi.org/10.1103/PhysRevA.47.456>.
http://arxiv.org/abs/2307.07264v2
20230714103830
On Interpolating Experts and Multi-Armed Bandits
[ "Houshuang Chen", "Yuchen He", "Chihao Zhang" ]
cs.LG
[ "cs.LG", "cs.DS", "stat.ML" ]
Learning with expert advice and multi-armed bandit are two classic online decision problems which differ on how the information is observed in each round of the game. We study a family of problems interpolating the two. For a vector m=(m_1,…,m_K)∈ N^K, an instance of m-MAB indicates that the arms are partitioned into K groups and the i-th group contains m_i arms. Once an arm is pulled, the losses of all arms in the same group are observed. We prove tight minimax regret bounds for m-MAB and design an optimal PAC algorithm for its pure exploration version, m-BAI, where the goal is to identify the arm with minimum loss with as few rounds as possible. We show that the minimax regret of m-MAB is Θ√(T∑_k=1^Klog (m_k+1)) and the minimum number of pulls for an (ε,0.05)-PAC algorithm of m-BAI is Θ(1/ε^2·∑_k=1^Klog (m_k+1)). Both our upper bounds and lower bounds for m-MAB can be extended to a more general setting, namely the bandit with graph feedback, in terms of the clique cover and related graph parameters. As consequences, we obtain tight minimax regret bounds for several families of feedback graphs. § INTRODUCTION A typical family of online decision problems is as follows: In each round of the game, the player chooses one of N arms to pull. At the same time, the player will incur a loss of the pulled arm. The objective is to minimize the expected regret, defined as the difference between the cumulative losses of the player and that of the single best arm over T rounds. The minimax regret, denoted as R^*(T), represents the minimum expected regret achievable by any algorithm against the worst loss sequence. There are variants of the problem according to the amount of information the player can observe in each round. In the problem of multi-armed bandit (MAB), the player can only observe the loss of the arm just pulled. The minimax regret is Θ√(NT) (<cit.>). Another important problem is when the player can observe the losses of all arms in each round, often referred to as learning with expert advice. The minimax regret is Θ√(Tlog N) (<cit.>). Bandit with graph feedback generalizes and interpolates both models. In this model, a directed graph G, called the feedback graph, is given. The vertex set of G is the set of arms and a directed edge from i to j indicates that pulling the arm i can observe the loss of arm j. As a result, MAB corresponds to the case when G consists of singletons with self-loops, and learning with expert advice corresponds to the case when G is a clique. A number of recent works are devoted to understanding how the structure of G affects the minimax regret (<cit.>). In this paper, we consider a natural interpolation between learning with expert advice and multi-armed bandit. Let m=(m_1,m_2,…,m_K)∈ N^K be a vector with each m_i≥ 1. An instance of m-MAB is one in which all N arms are partitioned into K groups and the pull of each arm can observe the losses of all arms in the same group. In the language of bandit with graph feedback, the feedback graph G is the disjoint union of K cliques of sizes m_1,m_2,…,m_K respectively. We show that the minimax regret for m-MAB is Θ√(T·∑_k∈[K]log (m_k+1)). 
As a result, this generalizes the optimal regret bounds for both MAB and learning with expert advice. A closely related problem is the so-called “pure exploration” version of bandit, often referred to as the best arm identification (BAI) problem, where the loss of each arm follows some (unknown) distribution. The goal of the problem is to identify the arm with minimum mean loss with as few rounds as possible. Similarly, we introduce the problem of m-BAI with the same feedback pattern as m-MAB. We design an (ε,0.05)-PAC algorithm for m-BAI which terminates in T=O(1/ε^2·∑_k∈ [K]log (m_k+1)) rounds for every ε<1/8. This means that after T rounds of the game, with probability at least 0.95, the algorithm can output an arm whose mean loss is less than ε plus the mean of the best one. We show that our algorithm is optimal by proving a matching lower bound Ω(1/ε^2·∑_k∈ [K]log (m_k+1)) for any (ε,0.05)-PAC algorithm. Both our upper bounds and lower bounds for the minimax regret of m-MAB can be generalized to bandit with graph feedback. To capture the underlying structure necessary for our proofs, we introduce some new graph parameters which yield optimal bounds for several families of feedback graphs. The main results are summarized in <Ref>. Our algorithm deviates from the standard online stochastic mirror descent (OSMD) algorithm for bandit problems. We employ the two-stage OSMD developed in <cit.> and give a novel analysis which yields the optimal regret bound. For the lower bound, we prove certain new “instance-specific” lower bounds for the best arm identification problem. These lower bounds may find applications in other problems. We will give an overview of our techniques in <Ref>. §.§ Main Results We summarize our main results in this section. Formal definitions of m-MAB, m-BAI and bandit with graph feedback are in <Ref>. There exists an algorithm such that for any instance of (m_1,…,m_K)-MAB, any T>0 and any loss sequence ℓ^(0),ℓ^(1),…,ℓ^(T-1)∈ [0,1]^N, its regret is at most c·√(T·∑_k=1^Klog (m_k+1)), where c>0 is a universal constant. Given an instance of m-BAI, for ε,δ∈ (0,1), an (ε,δ)-PAC algorithm can output an arm whose mean loss is less than ε plus the mean of the optimal one with probability at least 1-δ. Using a reduction from m-BAI to m-MAB (<Ref>), we obtain a PAC algorithm for m-BAI: There exists an (ε,0.05)-PAC algorithm for (m_1,…,m_K)-BAI which pulls T≤ c·∑_k=1^Klog (m_k+1)/ε^2 arms, where c>0 is a universal constant. Let Ber(p) denote the Bernoulli distribution with mean p. We complement the above algorithm with the following lower bound: There exists an instance H such that for every (ε, 0.05)-PAC algorithm A of (m_1,…,m_K)-BAI with ε∈(0,1/8), the expected number of pulls T of A on H satisfies T≥ c'·∑_k=1^Klog (m_k+1)/ε^2, where c'>0 is a universal constant. Moreover, we can pick H as the one in which each arm follows Ber(1/2). Using the reduction from m-BAI to m-MAB (<Ref>) again, we obtain the lower bound for m-MAB. For any algorithm A of (m_1,…,m_K)-MAB, for any sufficiently large T>0, there exists a loss sequence ℓ^(0),ℓ^(1),…,ℓ^(T-1) such that the regret of A in T rounds is at least c'·√(T·∑_k=1^K log (m_k+1)), where c'>0 is a universal constant. Our results generalize to the setting of bandit with graph feedback. Let G=(V,E) be a directed graph with a self-loop on each vertex. Let V_1,…,V_K ⊆ V be subsets of vertices. We say that they form a (V_1,…,V_K)-clique cover of G if each induced subgraph G[V_k] for k∈ [K] is a clique and ⋃_k∈ [K] V_k = V. Let G be a feedback graph with a self-loop on each vertex. 
If G contains a (V_1,…,V_K)-clique cover where V_k=m_k for every k∈ [K], then the minimax regret of bandit with graph feedback G is at most c·√(T·∑_k=1^Klog (m_k+1)) for some universal constant c>0. Our lower bounds generalize to bandit with graph feedback as well. The terms “strongly observable feedback graphs” and “weakly observable feedback graphs” are defined in <Ref>. Let G=(V,E) be the feedback graph. Assume that there exist K disjoint sets S_1,… ,S_K⊆ V such that * each G[S_k] is a strongly observable graph with a self-loop on each vertex; * there is no edge between S_i and S_j for any i j. Then for any algorithm A and any sufficiently large time horizon T>0, there exists some loss sequence on which the regret of A is at least c'·√(T·∑_k=1^Klog(S_k+1)) for some universal constant c'>0. The following lower bound for weakly observable feedback graphs confirms a conjecture in <cit.> and implies the optimality of several regret bounds established there, e.g., when the feedback graph is the disjoint union of loopless complete bipartite graphs. The notion of t-packing independent set is defined in <Ref>. Let G=(V,E) be the feedback graph. Assume that V can be partitioned into K disjoint sets V=V_1 ∪ V_2 ∪…∪ V_K such that * for every k∈ [K], each G[V_k] is observable; * for every k∈ [K], there exists a t_k-packing independent set S_k in G[V_k] such that every vertex in S_k does not have a self-loop; * there is no edge from V_i to S_j for any i j in G. Then for any algorithm A and any sufficiently large time horizon T>0, there exists some loss sequence on which the regret of A with feedback graph G is at least c'· T^2/3·∑_k=1^KmaxlogS_k, S_k/t_k^1/3 for some universal constant c'>0. <Ref> implies tight regret lower bounds for several weakly observable graphs. We summarize the minimax regret for some feedback graphs, weakly or strongly observable, in <Ref>. §.§ Overview of Technique We note that a simple reduction (<Ref>) implies that any algorithm for m-can be turned into a PAC algorithm for m-. As a result, <Ref> follow from a minimax regret upper bound for m-and a lower bound for m-. §.§.§ Upper bounds for  - We design a new two-stage algorithm (Algorithm <ref>) to establish an upper bound for m-. The algorithm is similar to the one used in <cit.> to study weakly observable graphs with a few tweaks to incorporate our new analysis. The algorithm maintains a distribution over K groups and for each group, it maintains a distribution for arms in that group. In each round of the game, the algorithm pulls an arm in a two-stage manner: First pick the group according to the distribution over groups and then pick the arm in that group following the distribution in the group. At the end of each round, all distributions are updated in the manner similar to online stochastic mirror descent (OSMD) with carefully designed loss vectors and various potential functions. Our main technical contribution is a novel analysis of this two-stage algorithm. We design auxiliary two-stage piecewise continuous processes whose regret is relatively easy to analyze. Then we view our algorithm as a discretization of the process and bound the accumulated discretization errors. Since the notion of m-generalizes both learning with expert advice and multi-armed bandit, we remark that our analysis of Algorithm <ref> can specialize to an analysis of both ordinary mirror descent (MD) algorithm and OSMD algorithm. 
We believe that the viewpoint of discretizing a piecewise continuous process is more intuitive than the textbook analysis of OSMD and may be of independent pedagogical interest. §.§.§ Lower bounds for  - Our lower bound for the number of rounds in an (,0.05)-PAC algorithm for m-where m=(m_1,…,m_K) is Ω(∑_k=1^K log (m_k+1)/^2), which is the sum of lower bounds on each (m_k)-instance. To achieve this, we show that the instance where all arms are Ber(1/2) is in fact a universal hard instance in the sense that every (,0.05)-PAC algorithm requires Ω(∑_k=1^K log (m_k+1)/^2) to identify. Via a reduction of “direct-sum” flavor, we show that every (,0.05)-PAC algorithm, when applied to this instance, must successfully identify that each group consists of Ber(1/2) arms. As a result, the lower bound is the sum of the lower bounds for each “all Ber(1/2)” (m_k)-instance. We then prove the lower bound for “all Ber(1/2)” (m)-instance for every m≥ 2. We use H_0^(m) to denote this instance. The H_0^(m) specified lower bound is obtained by constructing another m instances H_1^(m),…,H_m^(m) and compare the distribution of losses generated by H_0^(m) and the distribution of losses generated by a mixture of H_1^(m),…,H_m^(m). For technical reasons, we first prove the lower bound when all arms are Gaussian and reduce the Gaussian arms to Bernoulli arms. §.§ Organization of the Paper In this paper, we focus on the m-MAB and the m-and provide a fine-grained analysis to achieve tight bounds for both problems. The paper is organized in the following way. We outline our main results in <Ref> and introduce the preliminaries in <Ref>. A two-stage optimal algorithm for m-is given in <Ref>, along with continuous-time and discretized analysis. We then generalize this result to bandit with strongly observable graphs in <Ref>. We also construct an (,0.05)-algorithm for m-which terminates in bounded rounds in <Ref> via a reduction to m-problems. In <Ref>, we derive a corresponding lower bound for m-. Based on the results in <Ref>, we provide a regret lower bound for m-in <Ref> which matches the upper bound in <Ref>. We also prove the lower bounds for bandit with strongly and weakly observable feedback graphs in <Ref> and <Ref> respectively. The result on weakly observable graphs solves an open problem in <cit.>. §.§ Related Works The bandit feedback setting as an online decision problem has received considerable attention. The work of <cit.> first provided a tight bound for the bandit feedback setting, while the full information feedback case has been well studied in <cit.>. Building upon these works, <cit.> introduced an interpolation between these two extremes and generalized the feedback of the classic bandit problem to a graph structure. Several prior studies, such as <cit.>, have proposed various graph parameters to characterize the factors that influence regret. However, the algorithms proposed in these works for more general graphs do not yield a tight bound in our specific setting. The pure exploration version of the bandit problem, known as the best arm identification () problem, has also received significant attention in the literature (<cit.>). While the problem may appear deceptively simple, determining the precise bound for under the bandit feedback setting remains an open question. However, for the problem of identifying an -optimal arm with high probability, <cit.> established a tight bound for the bandit feedback setting, while the bound for the full feedback model is relatively straightforward (see e.g. 
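The gap in the example just given can be made numeric. The following sketch (ours; constants and logarithm bases are dropped, so only relative sizes are meaningful) evaluates the two bounds on the instance K = ⌊log N⌋, m = (1,…,1, N-K+1):

```python
# Compare sqrt(T * sum_k log(m_k + 1)) with sqrt(T * alpha * (1 + log(N / alpha)))
# on the instance K = floor(log N), m = (1, ..., 1, N - K + 1), where alpha = K.
import math

T, N = 10**6, 2**20
K = int(math.log(N))                       # alpha = K for a disjoint union of K cliques
m = [1] * (K - 1) + [N - K + 1]

ours = math.sqrt(T * sum(math.log(mk + 1) for mk in m))     # ~ sqrt(T log N)
alpha_bound = math.sqrt(T * K * (1 + math.log(N / K)))      # ~ sqrt(T) log N up to constants
print(f"{ours:.3e}  vs  {alpha_bound:.3e}")
```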
<cit.>). §.§.§ Comparison with <cit.> The very recent work of <cit.> studied interpolation of learning with experts and multi-armed bandit as well from a different perspective. They proved an O√(Tα(1+logN/α)) upper bound for the minimax regret of bandit with strongly feedback graph G where α is the independence number of G. The parameter is in general not comparable with clique covers used in this work for feedback graphs. Particularly on an m-instance where m=(m_1,…,m_K), the independence number is K and therefore their upper bound becomes to O√(TKlog(N/K)) while our results showed that the minimax regret is indeed Θ√(T∑_k=1^Klog (m_k+1)). To see the difference, assume K=⌊log N⌋ and m=(1,1,…,1,N-K+1), then the minimax regret is Θ√(Tlog N) while the upper bound in <cit.> is O√(T)log N. § PRELIMINARIES In this section, we formally define the notations used and introduce some preparatory knowledge that will help in understanding this work. §.§ Mathematical Notations Let n be a non-negative integer. We use [n] to denote the set 1,2,…,n and Δ_n-1=x∈ R_≥ 0^n:∑_i=1^n x(i)=1 to denote the n-1 dimensional standard simplex where R_≥ 0 is the set of all non-negative real numbers. For a real vector x∈ R^n, the i-th entry of x is denoted as x(i) for every i∈ [n]. We define e^[n]_i as the indicator vector of the i-th coordinate such that e^[n]_i(i)=1 and e^[n]_i(j)=0 for all j≠ i and j∈[n]. We may write e^[n]_i as e_i if the information on n is clear from the context. Given two vectors x,y∈ R^n, we define their inner product as xy=∑_i=1^n x(i)y(i). For any a,b∈ R, let [a,b]=c∈ R|mina,b≤ c≤maxa,b be the interval between a and b. For any x,y∈ R^n, we say y≥x if y(i)≥x(i) for every i∈[n]. Then we can define the rectangle formed by x and y: (x,y)=z∈ R^n:y≥z≥x. For any positive semi-definite matrix M∈ R^n× n, let x_M=√(x^TMx) be the norm of x with respect to M. Specifically, we abbreviate x_∇^2 ψ^-1 as x_∇^-2ψ where ∇^2 ψ is the Hessian matrix of a convex function ψ. Let F: R^n→ R be a convex function which is differentiable in its domain (F). Given x,y ∈(F), the Bregman divergence with respect to F is defined as B_F(𝐱, 𝐲)=F(𝐱)-F(𝐲)-x-y∇ F(y). Given two measures P_1 and P_2 on the same measurable space (Ω, F), the KL-divergence between P_1 and P_2 is defined as P_1, P_2=∑_ω∈Ω[1]ωlog[1]ω/[2]ω if Ω is discrete or P_1, P_2=∫_Ωlog[1]ω/[2]ω[1]ω if Ω is continuous provided P_1 is absolutely continuous with respect to P_2. §.§ Graph Theory Let G=(V,E) be a directed graph where V=N. We use (u,v) to denote the directed edge from vertex u to vertex v. For any U⊆ V, we denote the subgraph induced by U as G[U]. For v∈ V, let (v)u∈ V (u,v)∈ E be the set of in-neighbors of v and (v)u∈ V (v,u)∈ E be the set of out-neighbors. If the graph is undirected, we have (v)=(v), and we use ­N(v) to denote the neighbors for brevity. We say S⊆ V is an independent set of G if for every v∈ S, u∈ S| u≠ v,u∈(v)∪(v)=∅. The maximum independence number of G is denoted as α(G) and abbreviated as α when G is clear from the context. Furthermore, we say an independent set S is a t-packing independent set if and only if for any v∈ V, there are at most t out-neighbors of v in S, i.e., (v)∩ S≤ t. We say the subsets V_1,…,V_K ⊆ V form a (V_1,…,V_K)-clique cover of G if each induced subgraph G[V_k] for k∈ [K] is a clique and ⋃_k∈ [K] V_k = V. §.§  -and  - Let K>0 be an integer. Given a vector m=m_1,m_2,…,m_K∈ Z_≥ 1^K with ∑_k∈ [K] m_k = N, we now define problems m-and m-respectively. §.§.§  - In the problem of m-, there are N arms. 
The arms are partitioned into K groups and the k-th group contains m_k arms. Let T∈ N be the time horizon. Then m-is the following online decision game. The game proceeds in T rounds. At round t=0,1,…,T-1: * The player pulls an arm A_t∈ [N]; * The adversary chooses a loss function ℓ^(t)∈ [0,1]^N; * The player incurs loss ℓ^(t)(A_t) and observes the losses of all arms in the group containing A_t. Clearly the vector m encodes the amount of information the player can observe in each round. Two extremes are the problem of learning with expert advice and multi-armed bandit, which correspond to (N)-and (1,…,1)-respectively. We assume the player knows m and T in advance and use A to denote the player's algorithm (which can be viewed as a function from previous observed information and the value of its own random seeds to the arm pulled at each round). The performance of the algorithm A is measured by the notion of regret. Fix a loss sequence L⃗=ℓ^(0),…,ℓ^(T-1). Let a^*= _a∈ [N]∑_t=1^Tℓ^(t)(a) be the arm with minimum accumulated losses. The regret of the algorithm A and time horizon T on L⃗ with respect to the arm a is defined as R_a(T,A,L⃗)=∑_t=0^T-1ℓ^(t)(A_t)-∑_t=0^T-1ℓ^(t)(a). If there is no ambiguity, we abbreviate R_a(T,A,L⃗) as R_a(T). We also use R(T) to denote R_a^*(T). We are interested in the regret of the best algorithm against the worst adversary, namely the quantity R^*_a(T)=inf_Asup_L⃗R_a(T,A,L⃗). We call R^*_a^*(T) the minimax regret of m-and usually write it as R^*(T). We may use the following two ways to name an arm in m-: * use the pair (k,j) where k∈ [K] and j∈ [m_k] to denote “the j-th arm in the k-th group”; * use a global index i∈ [N] to denote the i-th arm. Following this convention, we use ℓ^(t)(i) and ℓ^(t)_k(j) to denote the loss of arm i and arm (k,j) at round t respectively. §.§.§ Best Arm Identification and  - The best arm identification () problem asks the player to identify the best arm among N given arms with as few pulls as possible. To be specific, each arm i is associated with a parameter p_i and each pull of arm i gives an observation of its random loss, which is drawn from a fixed distribution with mean p_i independently. The loss of each arm is restricted to be in [0,1]. The one with smallest p_i, indexed by i^*, is regarded as the best arm. An arm j is called an -optimal arm if its mean is less than the mean of the best arm plus for some ∈ (0,1), namely p_j< p_i^*+. With fixed ,δ>0, an (, δ)-probably approximately correct algorithm, or (,δ)-PAC algorithm for short, can find an -optimal arm with probability at least 1-δ. In most parts of this paper, we choose δ=0.05. For an algorithm A of , we usually use T to denote the number of arms A pulled before termination. Similarly for any arm i, we use T_i to denote the number of times that the arm i has been pulled by A before its termination. We also use N_i to denote the number of times that the arm i has been observed by A. Let m=m_1,m_2,⋯,m_K∈ Z_≥ 1^K be a vector. Similar to m-, the arms are partitioned into K groups and the k-th group consists of m_k arms. Each pull of an arm can observe the losses of all arms in the group. As usual, the goal is to identify the best arm (the one with minimum p_i) with as few rounds as possible. Similar to m-, we use i∈ [N] or (k,j) where k∈ [K] and j∈ [m_k] to name an arm. For a fixed algorithm, we use T_i or T_(k,j) to denote the number of times the respective arm has been pulled and use N_i or N_(k,j) to denote the number of times it has been observed. 
For every k∈ [K] we use T^(k) to denote the number of times the arms in the k-th group have been pulled, namely T^(k) = ∑_j∈ [m_k] T_(k,j). By definition, it holds that T=∑_k∈ [K] T^(k) and N_(k,j)=T^(k) for every j∈ [m_k]. §.§ Bandit with Graph Feedback A more general way to encode the observability of arms is to use feedback graphs. In this problem, a directed graph G=(V,E) is given. The vertex set V=[N] is the collection of all arms. The game proceeds in the way similar to m-. The only difference is that when an arm A_t is pulled by the player at a certain round, all arms in (A_t) can be observed. As a result, given a vector m=m_1,m_2,⋯,m_K∈ Z_≥ 1^K, the m-problem is identical to bandit with graph feedback G=(V,E) where G is the disjoint union of K cliques G_1=(V_1,E_1),G_2=(V_2,E_2),…,G_K=(V_K,E_K) with m_k=V_k and E_k=V_k^2 for every k∈ [K]. According to <cit.>, we measure the observability of each vertex in terms of its in-neighbors. If a vertex has no in-neighbor, we call it a non-observable vertex, otherwise it is observable. If a vertex v has a self-loop or (v) exactly equals to V∖v, then v is strongly observable. If an observable vertex is not strongly observable, then it is weakly observable. In this work, we assume each vertex is observable. If all the vertices are strongly observable, the graph G is called a strongly observable graph. If G contains weakly observable vertices (and does not have non-observable ones), we say G is a weakly observable graph. We can also define the notion of regret for bandit with graph feedback. Assume notations before, the regret of an algorithm A with feedback graph G and time horizon T on a loss sequence L⃗ with respect to the arm a is defined as R_a(G,T,A,L⃗)=∑_t=0^T-1ℓ^(t)(A_t)-∑_t=0^T-1ℓ^(t)(a). If there is no ambiguity, we abbreviate R_a(G,T,A,L⃗) as R_a(G,T) or R_a(T). We also use R(T) to denote R_a^*(T). Then minimax regret is again R^*_a^*(G,T)=inf_Asup_L⃗R_a^*(G, T,A,L⃗). When G is clear from the context, we write it as R^*(T). § THE UPPER BOUNDS In this section, we prove <Ref> and <Ref>. We describe the algorithm for m-in <Ref> and analyze it in <Ref>. The algorithm for m-is obtained by a reduction to m-described in <Ref>. Finally we discuss how to extend the algorithm to bandit with strongly observable feedback graphs and prove <Ref> in <Ref>. §.§ The Algorithm As discussed in the introduction, our algorithm basically follows the framework of the two-stage online stochastic mirror descent developed in <cit.>. However, our updating rules is slightly different from the one in <cit.> in order to incorporate with our new analysis. Given a K-dimensional vector m=(m_1,…,m_K) as input, in each round t, the algorithm proceeds in the following two-stage manner: * A distribution Y^(t) over [K] is maintained, indicating which group of arms the algorithm is going to pick. * For each k∈ [K], a distribution X^(t)_k is maintained, indicating which arm in the k-th group the algorithm will pick conditioned on that the k-th group is picked in the first stage. * The algorithm then picks the j-th arm in the k-group with probability Y^(t)(k)· X^(t)_k(j). The algorithm is described in Algorithm <ref> and we give an explanation for each step below. Assuming Y^(0) and X_k^(0) for all k∈ [K] are well initialized, in each time step t= 0,1,… ,T-1, the player will repeat the following operations: Sampling: For each arm (k,j), the algorithm pulls it with probability Z^(t)(k,j)=Y^(t)(k)· X_k^(t)(j). The arm pulled at this round is denoted by A_t=(k_t,j_t). 
Our algorithm can guarantee that Z^(t) is a distribution over all arms. Observing: Observe partial losses ℓ^(t)_k_t(j) for all j∈ [m_k_t]. Estimating: For each arm (k,j), define the unbiased estimator ℓ̂_k^(t)(j) = k=k_t/k=k_t·ℓ_k^(t)(j). It is clear that ℓ̂_k^(t)(j) = ℓ_k^(t)(j). Updating: * For each k∈[K], update X^(t)_k in the manner of standard OSMD: ∇ϕ_k(X_k^(t+1))=∇ϕ_k(X_k^(t))-ℓ̂_k^(t); X_k^(t+1)= _x∈Δ_m_k-1 B_ϕ_k(x,X_k^(t+1)), where ϕ_k(x)=η_k^-1∑_i x(i)log x(i) is the negative entropy scaled by the learning rate η_k. * Define Y^(t) in the way that 1/√(Y^(t+1)(k))=1/√(Y^(t)(k))+∑_j∈[m_k]η/η_kX_k^(t)(j)1-exp-η_k·ℓ̂_k^(t)(j), ∀ k∈ [K] where η is the learning rate. Then let Y^(t+1) be the projection of Y^(t+1) on Δ_K-1: Y^(t+1)=_y∈Δ_K-1 B_ψ(y,Y^(t+1)), where ψ(y)=-2∑_i √(y(i)) for any y=(y(1),…,y(K))∈ R^K, referred to as Tsallis entropy in literature. Note that when x is small, 1-exp-x≈ x. So when η_k is small (and it is so), the updating rule is approximately 1/√(Y^(t+1)(k))=1/√(Y^(t)(k))+η∑_j∈[m_k]X_k^(t)(j)·ℓ̂_k^(t)(j), ∀ k∈ [K], which is equivalent to ψ(Y^(t+1)) = ψ(Y^(t)) - η· L^(t), where L^(t)=( L^(t)(1),…, L^(t)(K))∈ R^K satisfying L^(t)(k)=∑_j∈ [m_k] X^(t)_k(j)·ℓ̂_k^(t)(j). One can think of L^(t)(k) as the “average loss” of the arms in the k-th group at round t. Nevertheless, we use rule  (<ref>) in the algorithm since it is convenient for our analysis later. In the realization of Algorithm <ref>, we will choose η=1/√(T) and η_k=logm_k+1/√(T∑_k=1^Klog (m_k+1)). §.§ Analysis We prove the following theorem, which implies <Ref>. For every T>0 and every loss sequence ℓ^(0),…,ℓ^(T-1)∈ [0,1]^N, the regret of Algorithm <ref> satisfies R(T)≤ O√(T∑_k=1^Klog (m_k+1)). Instead of directly bounding the regret of the sequence of the action distributions Z^(t)_0≤ t≤ T-1, we study an auxiliary piecewise continuous process Z^(s)_s∈ [0,T). We define and bound the regret of Z^(s)_s∈ [0,T) in <Ref>, and compare it with the regret of Z^(t)_0≤ t≤ T-1 in <Ref>. Finally, we prove <Ref> in <Ref> §.§.§ The piecewise continuous process Assuming notations in Algorithm <ref>, the process Z^(s)_s∈ [0,T) is defined as Z^(s)(k,j) = Y^(s)(k)·X^(s)_k(j), ∀ k∈ [K], j∈ [m_k], where Y^(s)_s∈ [0,T) and X^(s)_k_s∈ [0,T) for every k∈ [K] are piecewise continuous processes defined in the following way. * For every integer t∈0,1,…,T-1, we let Y^(t) = Y^(t) and X^(t)_k = X^(t)_k for every k∈ [K]. * For every integer t∈0,1,…,T-1 and every k∈ [K], the trajectory of X^(s)_k_s∈ [t,t+1) is a continuous path in R^m_k governed by the ordinary differential equation ϕ_k(X^(s)_k)s = -ℓ̂^(t)_k. * For every integer t∈0,1,…,T-1, the trajectory of Y^(s)_s∈[t,t+1) is a continuous path in R^K governed by the ordinary differential equation ψ(Y^(s))s = - L^(s), where L^(s)= L^(s)(1),…, L^(s)(K)∈ R^K satisfies L^(s)(k) = ∑_j∈ [m_k]X_k^(s)(j)·ℓ̂_k^(t)(j). Clearly the trajectories of Z^(s), Y^(s) and X^(s)_k for every k∈ [K] are piecewise continuous paths in the time interval s∈ [0,T). An important property is that the end of each piece of the trajectories of Y^(s) and X^(s)_k coincides with its discrete counterpart before performing projection to the probability simplex. Formally, for every t∈ [T] and k∈ [K], define X^(t)^-_klim_s→ t^-X^(s)_k and Y^(t)^-lim_s→ t^-Y^(s). We have the following lemma. For every t∈ [T] and k∈ [K], it holds that X^(t)^-_k = X^(t)_k and Y^(t)^- = Y^(t). To ease the notation, for any fixed t∈0,1,…,T-1 and fixed k∈[K], we now prove that X^(t+1)^-_k = X^(t+1)_k and Y^(t+1)^- = Y^(t+1) respectively. 
In fact, X^(t+1)^-_k = X^(t+1)_k immediately follows by integrating both sides of (<ref>) from t to t+1 and noting that X^(t)_k=X^(t)_k. More efforts are needed to prove the identity for Y^(t). Recall ϕ_k(x) = η_k^-1∑_j x(j)log x(j) for every x=(x(1),…,x(m_k)). It follows from (<ref>) that for every s∈ [t,t+1) every k∈ [K] and every j∈ [m_k], X^(s)_k(j) = X^(t)_k(j)·exp-(s-t)η_kℓ̂_k^(t)(j). As a result, we know that L^(s)(k) = ∑_j∈ [m_k]X_k^(t)(j)·exp-(s-t)η_kℓ̂_k^(t)(j)·ℓ̂^(t)_k(j). Integrating (<ref>) from t to s, plugging in above and noting that Y^(t)=Y^(t), we obtain 1/√(Y^(s)(k)) = 1/√(Y^(t)(k)) + η/η_k∑_j∈[m_k]X_k^(t)(j)1-exp-η_k· (s-t)·ℓ̂_k^(t)(j), which is exactly our rule to define Y^(t+1) in Line <ref> of Algorithm <ref> (take s=t+1). We define the regret for the piecewise continuous process as follows. The continuous regret contributed by the process Z^(s)_s∈ [0,T) with respect to a fixed arm a∈ [N] is defined as R_a(T) ∑_t=0^T-1∫_t^t+1Z^(s)-e_a^[N]ℓ^(t)s. Then we are ready to bound R_a(T). Recall that we may write e_a^[N] as e_a if the information on N is clear from the context. For any time horizon T>0, any loss sequence ℓ^(0),ℓ^(1),…,ℓ^(T-1)∈ [0,1]^N, and any arm a=(k,j), it holds that R_a(T) ≤ B_ψ(e_k^[K], Y^(0)) + B_ϕ_k(e_j^[m_k],X^(0)_k). Assume a=(k,j). For every t∈0,1,…,T-1, we compute the decreasing rate of the Bregman divergence caused by the evolution of Y^(s) and X^(s)_k respectively. First consider the change of B_ψ(e_k,Y^(s)) over time: sB_ψ(e_k,Y^(s)) =sψ(e_k)-ψ(Y^(s))-e_k-Y^(s)ψ(Y^(s)) =ψ(Y^(s))sY^(s)-e_k =- L^(s)Y^(s)-e_k. Integrating above from t to t+1, we have ∫_t^t+1 L^(s)Y^(s)-e_ks=B_ψ(e_k, Y^(t)) - B_ψ(e_k,Y^(t+1)^-) = B_ψ(e_k, Y^(t)) - B_ψ(e_k,Y^(t+1)), where the last equality follows from <Ref>. Note that projection never increases Bregman divergence; that is, we have =B_ψ(e_k,Y^(t+1))- B_ψ(e_k,Y^(t+1)) =ψ(Y^(t+1))-ψ(Y^(t+1))+ψ(Y^(t+1))e_k-Y^(t+1)-ψ( Y^(t+1))e_k- Y^(t+1) =ψ(Y^(t+1)) - ψ( Y^(t+1))-ψ( Y^(t+1))Y^(t+1)- Y^(t+1)_A +ψ( Y^(t+1))-ψ(Y^(t+1))Y^(t+1)-e_k_B. Since ψ is convex, we have A≥ 0. By the definition of Y^(t+1), Y^(t+1)=_y∈Δ_K-1 B_ψ(y, Y^(t+1)) = _y∈Δ_K-1ψ(y) - yψ( Y^(t+1)). The first-order optimality condition (see Section 26.5 in <cit.>) implies that B≥ 0. As a result, B_ψ(e_k,Y^(t+1)) ≥ B_ψ(e_k,Y^(t+1)) and it follows from <Ref> that ∫_t^t+1 L^(s)Y^(s)-e_ks≤ B_ψ(e_k, Y^(t)) - B_ψ(e_k,Y^(t+1)). Then we consider the change of B_ϕ_k(e_j,X^(s)_k) over time. Likewise we have sB_ϕ_k(e_j,X^(s)_k) =ϕ_k(X^(s)_k)sX^(s)_k-e_j = -ℓ̂^(t)_kX^(s)_k-e_j. By an argument similar to the one for Y^(s) above, we can obtain ∫_t^t+1ℓ̂^(t)_kX^(s)_k-e_js≤ B_ϕ_k(e_j,X^(t)_k)-B_ϕ_k(e_j,X^(t+1)_k). On the other hand, we have for every s∈ [t,t+1) and any arm a^*=(k^*,j^*), Z^(s)-e_a^*ℓ^(t) = Z^(s)-e_a^*ℓ̂^(t) =∑_k∈ [K]∑_j∈ [m_k]Y^(s)(k)·X^(s)_k(j)·ℓ̂^(t)_k(j)-ℓ̂^(t)(a^*). Recall that for every k∈ [K], it holds that L^(s)(k) = ∑_j∈ [m_k]X^(s)_k(j)·ℓ̂^(t)_k(j). Rearranging above yields Z^(s)-e_a^*ℓ^(t) =∑_k∈ [K]Y^(s)(k)· L^(s)(k) - ℓ̂^(t)(a^*) =Y^(s) L^(s) - ℓ̂^(t)(a^*) =Y^(s)-e_k^* L^(s)+ L^(s)(k^*) -ℓ̂^(t)_k^*(j^*) =Y^(s)-e_k^* L^(s) +X^(s)_k^*-e_j^*ℓ̂^(t)_k^*. Integrating above from t to t+1 and plugging in <Ref>, we obtain ∫_t^t+1Z^(s)-e_a^*ℓ^(t)s = ∫_t^t+1Y^(s)-e_k^* L^(s) s +∫_t^t+1X^(s)_k-e_j^*ℓ̂^(t)_k^*s ≤ B_ψ(e_k, Y^(t)) - B_ψ(e_k,Y^(t+1)) + B_ϕ_k(e_j,X^(t)_k)-B_ϕ_k(e_j,X^(t+1)_k). Summing above over t from 0 to T-1 finishes the proof. 
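Before comparing the continuous and discrete regrets, it may help to see the discrete updates of the algorithm spelled out in code. The following NumPy sketch is a schematic rendering of the two-stage procedure described above — multiplicative-weights (negative-entropy OSMD) updates inside each group and Tsallis-entropy updates across groups, with the stated choices η=1/√T and η_k=log(m_k+1)/√(T∑_k log(m_k+1)). The loss-generation interface loss_fn and the bisection used for the outer Bregman projection are our own illustrative choices rather than part of the paper.

```python
import numpy as np

def tsallis_project(y_tilde):
    """Bregman projection onto the simplex for psi(y) = -2 * sum(sqrt(y)).

    First-order optimality gives y_k = (1/sqrt(y_tilde_k) + nu)^(-2) with nu
    chosen so that sum_k y_k = 1; nu is found here by bisection.
    """
    inv_sqrt = 1.0 / np.sqrt(y_tilde)
    lo, hi = -inv_sqrt.min() + 1e-12, 1.0
    while np.sum((inv_sqrt + hi) ** -2) > 1.0:
        hi *= 2.0
    for _ in range(100):
        nu = 0.5 * (lo + hi)
        if np.sum((inv_sqrt + nu) ** -2) > 1.0:
            lo = nu
        else:
            hi = nu
    y = (inv_sqrt + hi) ** -2
    return y / y.sum()  # tiny renormalisation for numerical safety

def two_stage_osmd(m, T, loss_fn, seed=0):
    """Schematic two-stage OSMD for m-MAB with group sizes m = (m_1, ..., m_K).

    loss_fn(t) must return a list of K arrays with entries in [0, 1]; only the
    pulled group's losses are used, mimicking the feedback model.
    """
    rng = np.random.default_rng(seed)
    K = len(m)
    log_m = np.log(np.array(m) + 1.0)
    eta = 1.0 / np.sqrt(T)                      # outer learning rate
    eta_k = log_m / np.sqrt(T * log_m.sum())    # per-group inner learning rates
    Y = np.full(K, 1.0 / K)                     # distribution over groups
    X = [np.full(mk, 1.0 / mk) for mk in m]     # distributions within groups
    pulled = []
    for t in range(T):
        # Sampling: draw a group from Y, then an arm from X_k.
        k = rng.choice(K, p=Y)
        j = rng.choice(m[k], p=X[k])
        pulled.append((k, j))
        # Observing: the losses of every arm in group k.
        ell_k = np.asarray(loss_fn(t)[k], dtype=float)
        # Estimating: importance-weighted losses; zero for all other groups.
        ell_hat = ell_k / Y[k]
        # Updating the group distribution on the 1/sqrt scale, then projecting.
        inv_sqrt_Y = 1.0 / np.sqrt(Y)
        inv_sqrt_Y[k] += (eta / eta_k[k]) * np.sum(X[k] * (1.0 - np.exp(-eta_k[k] * ell_hat)))
        Y = tsallis_project(inv_sqrt_Y ** -2)
        # Updating the within-group distribution: multiplicative weights,
        # i.e. negative-entropy OSMD followed by normalisation.
        Xk = X[k] * np.exp(-eta_k[k] * ell_hat)
        X[k] = Xk / Xk.sum()
    return pulled
```

In this rendering the inner projection reduces to a normalisation because the mirror map is the negative entropy, while the outer projection amounts to solving a one-dimensional equation in the dual variable.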
§.§.§ Comparison of   and   For any fixed loss sequence ℓ^(0), ℓ^(1),…,ℓ^(T-1), we bound the difference between the regret R_a(T) of Algorithm <ref> and the continuous regret R_a(T) for any arm a. Formally, we establish the following lemma: R_a(T)-R_a(T) ≤1/2∑_t=0^T-1sup_ξ∈(Y^(t), Y^(t+1)) L^(t)^2_^-2ψ(ξ) + ∑_k∈ [K] Y^(t)(k)·sup_ζ_k∈(X^(t)_k, X^(t+1)_k)ℓ̂^(t)_k^2_^-2ϕ_k(ζ_k). By the definition of the regret, we have R_a(T) = ∑_t=0^T-1Z^(t)-e_aℓ̂^(t) = ∑_t=0^T-1Z^(t)-e_aℓ̂^(t) = ∑_t=0^T-1∫_t^t+1Z^(s)-e_aℓ̂^(t) s + ∫_t^t+1Z^(t)-Z^(s)ℓ̂^(t) s =R_a(T) + ∑_t=0^T-1∫_t^t+1Z^(t)-Z^(s)ℓ̂^(t) s, where the first equality holds due to Fubini's theorem. Therefore, we only need to bound the term ∑_t=0^T-1∫_t^t+1Z^(t)-Z^(s)ℓ̂^(t) s. Fix t∈0,1,…,T-1. We have shown in the proof of <Ref> that X_k^(s)(j)=X_k^(t)(j)·exp-(s-t)η_kℓ̂_k^(t)(j)≤ X_k^(t)(j) for any s∈ [t,t+1) and any j∈ [m_k]. Recall that L^(s)(k) = ∑_j∈ [m_k]X^(s)_k(j)·ℓ̂^(t)_k (j) for every k∈ [K]. Then by the discussion above, we have L^(s)≤ L^(t) for any s∈ [t,t+1). As a result, it follows from (<ref>) that for any s∈ [t,t+1), ψ (Y^(s))-ψ(Y^(t))=∫_t^s -L^(w) w≥ -(s-t)·L^(t). Recall that for any two vectors x,y of the same dimension, (x,y) is the rectangle between x and y. Since our ψ is a separable function (and therefore ^2ψ is diagonal), we can apply the mean value theorem entrywise and obtain ψ (Y^(s))-ψ(Y^(t))=^2ψ(ξ^(s))(Y^(s)-Y^(t)) for some ξ^(s)∈(Y^(s), Y^(t)). By our choice of ψ, it holds that ^2ψ(ξ^(s)) 0 for any ξ^(s)∈(Y^(s), Y^(t)). Therefore, combining <Ref>, we have Y^(s)≥ Y^(t)-(s-t)·^-2ψ(ξ^(s))·L^(t). Similar argument yields that X_k^(s)≥ X_k^(t)-(s-t)·^-2ϕ_k(ζ_k^(s))·ℓ̂_k^(t) for some ζ_k^(s)∈(X_k^(s), X_k^(t)). Therefore for any k∈ [K], j∈ [m_k] and any s∈ [t,t+1), we can bound the difference between Z^(t)(k,j) and Z^(s)(k,j): =Z^(t)(k,j) - Z^(s)(k,j) =Y^(t)(k)· X^(t)_k(j) - Y^(s)(k)·X^(s)_k(j) ≤ Y^(t)(k)· X^(t)_k(j) - Y^(t)(k)-(s-t)·[^-2ψ(ξ^(s))·L^(t)](k)·X_k^(t)(j)-(s-t)·[^-2ϕ_k(ζ_k^(s))·ℓ̂_k^(t)](j) =-(s-t)^2·[^-2ψ(ξ^(s))· L^(t)](k)·[^-2ϕ_k(ζ^(s)_k)·ℓ̂^(t)_k](j) +(s-t)· X^(t)_k(j)·[^-2ψ(ξ^(s))· L^(t)](k) =+(s-t)· Y^(t)(k)·[^-2ϕ_k(ζ^(s)_k)·ℓ̂^(t)_k] (j) ≤ (s-t)· X^(t)_k(j)·[^-2ψ(ξ^(s))· L^(t)](k) +(s-t)· Y^(t)(k)·[^-2ϕ_k(ζ^(s)_k)·ℓ̂^(t)_k] (j) for some ξ^(s)∈(Y^(s), Y^(t)) and ζ_k^(s)∈(X_k^(s), X_k^(t)). We are now ready to bound the gap between R_a(T) and R_a(T): R_a(T)-R_a(T) =∑_t=0^T-1∫_t^t+1Z^(t)-Z^(s)ℓ̂^(t) ≤∑_t=0^T-1∫_t^t+1 (s-t)∑_k∈ [K]∑_j∈ [m_k] X^(t)_k(j)·sup_ξ∈(Y^(t), Y^(t+1))[^-2ψ(ξ)· L^(t)](k)·ℓ̂^(t)_k(j) s_(A) +∑_t=0^T-1∫_t^t+1 (s-t)∑_k∈ [K]∑_j∈ [m_k]Y^(t)(k)·sup_ζ_k∈(X^(t)_k, X^(t+1)_k)[^-2ϕ_k(ζ_k)·ℓ̂^(t)_k](j)·ℓ̂^(t)_k(j) s_(B). Note that in both expressions (A) and (B) above, only the term (s-t) depend on s. So we can integrate and obtain: (A) =1/2∑_t=0^T-1∑_k∈ [K]∑_j∈ [m_k] X^(t)_k(j)·sup_ξ∈(Y^(t), Y^(t+1))[^-2ψ(ξ)· L^(t)](k)·ℓ̂^(t)_k(j) =1/2∑_t=0^T-1∑_k∈ [K]sup_ξ∈(Y^(t), Y^(t+1))[^-2ψ(ξ)· L^(t)](k) ·∑_j∈[m_k] X^(t)_k(j)·ℓ̂^(t)_k(j) =1/2∑_t=0^T-1∑_k∈ [K]sup_ξ∈(Y^(t), Y^(t+1))[^-2ψ(ξ)· L^(t)](k) · L^(t)(k) =1/2∑_t=0^T-1sup_ξ∈(Y^(t), Y^(t+1)) L^(t)^2_^-2ψ(ξ). Similarly, (B) =1/2∑_t=0^T-1∑_k∈ [K]∑_j∈ [m_k]Y^(t)(k)·sup_ζ_k∈(X^(t)_k, X^(t+1)_k)[^-2ϕ_k(ζ_k)·ℓ̂^(t)_k](j)·ℓ̂^(t)_k(j) =1/2∑_t=0^T-1∑_k∈ [K] Y^(t)(k)·sup_ζ_k∈(X^(t)_k, X^(t+1)_k)ℓ̂^(t)_k^2_^-2ϕ_k(ζ_k). Combining <Ref>, we have R_a(T)-R_a(T) ≤1/2∑_t=0^T-1sup_ξ∈(Y^(t), Y^(t+1)) L^(t)^2_^-2ψ(ξ) + ∑_k∈ [K] Y^(t)(k)·sup_ζ_k∈(X^(t)_k, X^(t+1)_k)ℓ̂^(t)_k^2_^-2ϕ_k(ζ_k). 
If we apply the “regret decomposition theorem” in <cit.> and use the standard OSMD bound for each stage, we will get the term sup_ζ_k^*∈(X^(t)_k^*, X^(t+1)_k^*)ℓ̂^(t)_k^*^2_^-2ϕ_k^*(ζ_k^*) where k^* is the index of the group containing the optimal arm instead of the term ∑_k∈ [K] Y^(t)(k)·sup_ζ_k∈(X^(t)_k, X^(t+1)_k)ℓ̂^(t)_k^2_^-2ϕ_k(ζ_k) in <ref>. The new Y^(t)(k) term is crucial to our optimal regret bound since it cancels a Y^(t)(k) term hidden in the denominator of ℓ̂^(t)_k^2_^-2ϕ_k(ζ_k). This will be clear in <Ref>. §.§.§ The Regret of Algorithm <ref> Note that the regret of Algorithm <ref> is composed of the two parts in <Ref> and <Ref>. In this section, we will prove <Ref> by providing more specific bounds for the terms in these two lemmas. By definition of Bregman divergence, B_ψ(e_k, Y^(0)) =ψ(e_k)-ψ(Y^(0)) -∇ψ(Y^(0))e_k-Y^(0). Since we initialize Y^(0)=_b∈Δ_K-1ψ(b), Y^(0)(k)=1/K for k∈ [K] and ∇ψ(Y^(0))e_k-Y^(0)≥ 0 follows the first-order optimality condition for Y^(0). Thus B_ψ(e_k, Y^(0)) ≤ψ(e_k)-ψ(Y^(0)) =-2+ 2√(K)/η≤2√(K)/η. Similarly we have X_k^(0)(j)=1/m_k for j∈ [m_k] and B_ϕ_k(e_j,X^(0)_k)≤ϕ_k(e_j)-ϕ_k(X^(0)_k)=log m_k/η_k. Therefore R_a(T)≤2√(K)/η+log m_k/η_k. Recall that A_t=(k_t,j_t) is the arm pulled by the algorithm at round t. Now we plug our estimator ℓ̂_k^(t)(j)=k_t=k/Y^(t)(k)ℓ_k^(t)(j) and ∇^2ψ(ξ)=diag1/2ηξ(1)^3/2,1/2ηξ(2)^3/2,⋯, 1/2ηξ(K)^3/2 into the first term on the RHS of <Ref>. sup_ξ∈(Y^(t), Y^(t+1)) L^(t)^2_^-2ψ(ξ) = 2ηsup_ξ∈(Y^(t), Y^(t+1)) ∑_k∈ [K]ξ(k)^3/2·k_t = k/Y^(t)(k)∑_j∈[m_k]ℓ_k^(t)(j) X_k^(t)(j) ^2 (a)≤2η∑_k∈ [K]Y^(t)(k)^3/2·k_t=k/Y^(t)(k)∑_j∈[m_k]ℓ_k^(t)(j) X_k^(t)(j) ^2 (b)≤ 2η∑_k∈ [K]k_t=k/√(Y^(t)(k)) | Y^(t) = 2 η∑_k=1^K√(Y^(t)(k))(c)≤ 2η∑_k=1^K√(Y^(t)(k))≤ 2η√(K). In the calculation above: (a) follows from Y^(t+1)(k)≤ Y^(t)(k), (b) is due to ∑_j∈[m_k]ℓ_k^(t)(j) X_k^(t)(j)∈[0,1], and (c) is due to Jensen's inequality. Similarly we have for the second term with ∇^2ϕ_k(ζ_k)=diag1/η_k ζ_k(1),1/η_k ζ_k(2),⋯, 1/η_k ζ_k(m_k) =∑_k∈ [K] Y^(t)(k)·sup_ζ_k∈(X^(t)_k, X^(t+1)_k)ℓ̂^(t)_k^2_^-2ϕ_k(ζ_k) =∑_k∈[K]η_kY^(t)(k)·sup_ζ_k∈(X_k^(t), X_k^(t+1))∑_j∈[m_k]ζ_k(j)·k_t = k/Y^(t)(k)ℓ_k^(t)(j)^2 (d)≤∑_k∈[K]η_kY^(t)(k)·∑_j∈[m_k] X^(t)_k(j)·k_t=k/Y^(t)(k)ℓ_k^(t)(j)^2 (e)≤∑_k∈[K]η_k·∑_j∈[m_k] X^(t)_k(j)·k_t= k/Y^(t)(k) | Y^(t)(k) = ∑_k∈[K]η_k ∑_j∈[m_k]X_k^(t)(j) = ∑_k∈[K]η_k. In the calculation above: (d) follows from X_k^(t+1)(j)≤ X_k^(t)(j) and (e) is due to ℓ_k^(t)(j)∈[0,1]. Hence, summing up above two terms from 0 to T-1, we obtain R_a(T)-R_a(T)≤η√(K)T +1/2T ∑_k∈[K]η_k. Combining <Ref> and choosing η=1/√(T) and η_k=logm_k+1/√(T∑_k=1^Klog (m_k+1)), we obtain for any fixed arm a, R_a(T)≤2√(K)/η+log m_k/η_k + T/2∑_k∈ [K]η_k+η T√(K)≤ O√(T∑_k=1^Klog (m_k+1)). §.§ A Reduction from to In this section, we prove an upper bound of O∑_k=1^Klog (m_k+1)/^2 for m-. We achieve this by constructing a PAC algorithm for m-from an algorithm for m-through the following lemma. Let r(T,L⃗) be a real valued function with the time horizon T and loss sequence L⃗=ℓ^(1),…, ℓ^(T) as its input. Let H be a instance. With fixed T>0, we use [H]r(T,L⃗) to denote the expectation of r(T,L⃗) where ℓ^(t) in L⃗ is drawn from H independently for every t∈ [T]. Let H be a set of instances. Let A be an algorithm for m-with regret R_a^*(T,A,L⃗)≤ r(T,L⃗) for every time horizon T and every loss sequence L⃗. Then there exists an (,0.05)-PAC algorithm A' for m-that terminates in T^* rounds where T^* is the solution of the equation T^* = 2500·max_L⃗r(T^*,L⃗)/. 
Moreover, if we only care about identifying an -optimal arm with probability 0.95 when the input is chosen from a known family H, we can construct an algorithm solving this problem that terminates in T^*_H rounds where T^*_H is the solution of the equation T^*_H = 2500·max_H∈H[H]r(T^*_H,L⃗)/. Given an instance H of m-, we run A for T^* rounds. Let T_i be the number of times that the arm i has been pulled, i.e., T_i=∑_t=0^T^*-11[A_t=i]. Let Z=Z_1,Z_2,…,Z_N = T_1/T^*, T_2/T^*, …, T_N/T^* be a distribution on N arms. We construct A' by simply sampling from Z=T_1/T^*, T_2/T^*, …, T_N/T^* and outputting the result. Recall that p_i is the mean of the i-th arm in H and arm a^* is the one with the minimum mean. Define the gap vector Δ=(p_1-p_a^*,⋯,p_N-p_a^*). Note that Z is a random vector and define conditional expected regret R(Z)= ΔZ· T^* given Z. Thus the expected regret [Z]R(Z)≤max_L⃗r(T^*,L⃗). By Markov's inequality, R(Z)≤ 100max_L⃗r(T^*,L⃗) with probability at least 0.99. Now we only consider Z conditioned on R(Z)≤ 100max_L⃗r(T^*,L⃗). Let B⊆ [N] denote the “bad set” which contains arms that are not -optimal. Then T^*∑_i∈ BZ_i≤ 100 max_L⃗r(T^*,L⃗). Note that T^* = 2500·max_L⃗r(T^*,L⃗)/. Therefore ∑_i∈ BZ_i≤ 0.04. In total, this algorithm will make a mistake with probability no more than 0.05 by the union bound. When we only care about the input instances chosen from H, we run A for T^*_H rounds and similarly, we output an arm drawn from T_1/T^*_H, T_2/T^*_H, …, T_N/T^*_H. It is easy to verify via the same arguments that this algorithm can output an -optimal arm with probability 0.95 when the input is chosen from H. Then we can use the Algorithm <ref> and <Ref> to give an upper bound for m-. We use Algorithm <ref> to construct an ,0.05-PAC algorithm for m-as described in Lemma <ref>. Since the regret satisfies R(T)≤ c√(T∑_k=1^Klog (1+m_k)) for some constant c on every loss sequence by <Ref>, running Algorithm <ref> with T^*=(2500c)^2∑_k=1^Klog (1+m_k)/^2, we can get an (,0.05)-PAC algorithm which always terminates in O∑_k=1^Klog (m_k+1)/^2 rounds. §.§ The Strongly Observable Graph with Self-loops We can generalize our results to any strongly observable graph G=(V,E) with each vertex owning a self-loop. Assume G contains a (V_1,…,V_K)-clique cover. We construct a new graph G^'=(V,E^') by ignoring the edges between any two distinct cliques. It is clear that R^*(G,T)≤ R^*(G^',T). Then we can prove <Ref> by directly applying Algorithm <ref> with feedback graph G'. This proves <Ref>, which asserts that R^*(G,T) = O√(T·∑_k=1^Klog(m_k+1)). Although we assume that each vertex contains a self-loop for the sake of simplicity, we note that our algorithm can still be applied to strongly observable graphs that have some vertices without self-loops. In such cases, we can incorporate an additional exploration term into our algorithm, and a similar analysis to that in <Ref> still works. There have been several works using the clique cover as the parameter to bound the minimax regret of graph bandit. For example, <cit.> applies FTRL algorithm with a carefully designed potential function which combines the Tsallis entropy with negative entropy. It achieves a regret of log T^O(1)· O√(KT). Our new bound takes into account the size of each clique and is always superior. § LOWER BOUNDS FOR  - Let A be an algorithm for m-where m=(m_1,…,m_K) is a vector. Given an instance of m-, we use T to denote the number of rounds the algorithm A proceeds. 
Recall that for every group k∈ [K] and j∈ [m_k], we use T_(k,j) to denote the number of times that the arm (k,j) has been pulled. For every k∈ [K], let T^(k) = ∑_j∈ [m_k] T_(k,j) be the number of rounds the arms in the k-th group have been pulled. We also use N_(k,j) to denote the number of times the arm (k,j) has been observed. Clearly N_(k,j) = T^(k). In the following part, we only consider stochastic environment. That is, ℓ^(t) is independently drawn from the same distribution for each t∈ N. Therefore, we omit the superscript (t) and only use ℓ(i) or ℓ_k(j) to denote the one-round loss of arm i or arm (k,j) respectively when the information is clear from the context. In <Ref>, we lower bound the number of rounds for a PAC algorithm on a specific m-instance with m=(m) and then prove the result for m-in <Ref>. We then use these results to prove a regret lower bound for m-and bandit problems with general feedback graphs in <Ref>. §.§ An Instance-Specific Lower Bound for  - In this section, we study the number of rounds required for (m)-in an (,0.05)-PAC algorithm. In this setting, the pull of any arm can observe the losses of all arms. We will establish a lower bound for a specified instance, namely the one where all arms follow Ber(1/2). This is key to our lower bound later. We focus on instances of (m)-where each arm is Bernoulli. As a result, each instance can be specified by a vector p_1,…,p_m-1,p_m∈ R^m meaning the loss of arm i follows Ber(p_i) in each round independently. Let ∈0,1/2. In the following context, when we denote an instance as H^m, the superscript m indicates that it is an m-instance. Consider the following m+1 (m)-instances H_j^(m)_j∈[m]∪0: * The instance H_0^(m) is 1/2, 1/2,1/2, ⋯, 1/2. That is, p_i=1/2 for every i∈[m] in H_0^(m); * For j∈[m], H_j^(m) = 1/2,1/2,⋯,1/2, 1/2-_↑ j, 1/2, ⋯, 1/2; that is, the instance satisfies p_j=1/2- and p_i=1/2 for every i≠ j. We say an algorithm A distinguishes H_j^(m)_j∈[m]∪0 with probability p if A j|H^(m)_j≥ p, and the output can be arbitrary among 0,1,… m when the input is not in H_j^(m)_j∈[m]∪0. The main result of this section is Let A be an (,0.05)-PAC algorithm. Assume m≥ 2. There exists a universal constant c_1>0 such that A terminates on H^(m)_0 after at least c_1/^2log (m+1) rounds in expectation. We will prove the lemma in <Ref> via a reduction from a lower bound for Gaussian arms established in <Ref>. §.§.§ The Gaussian Arms In this section, we relax the constraint on the range of each arm's loss and allow the losses to be arbitrary real numbers. Let ∈0,1/2 and σ∈1/2√(2π),1/√(2π). We construct m+1 instances N_j_j∈0∪ [m] with Gaussian distributions: * In the instance N_0, for each i∈[m], ℓ(i) is independently drawn from a Gaussian distribution N(0,σ^2); * In the instance N_j for j∈[m], ℓ(j)∼N(-,σ^2) and ℓ(i)∼N(0,σ^2) for each i≠ j and i∈[m] independently. Let P_1 and P_2 be two probability measures on the same measurable space Ω,F, and let E∈F be an arbitrary event. Then P_1[E]+P_2[ E]≥1/2e^-P_1, P_2 Let N_mix be the mixture of N_j_j∈[m] meaning that the environment chooses k from [m] uniformly at random and generates losses according to N_k in the following game. Let A be an algorithm distinguishing N_j_j∈[m]∪0. Let Ω be the set of all possible outcomes during the first t^* rounds, including the samples according to the input distribution and the output of A (if A does not terminate after the t^*-th round, we assume its output is -1). 
Note that if the algorithm terminates in t'<t^* rounds, we can always add t^*-t' virtual rounds so that it still produces a certain loss sequence in R^m× t^*. As a result, each outcome ω∈Ω can be viewed as a pair ω = (w, x) where w∈ R^m× t^* is the loss sequence and x∈-1,0,1,…,m indicates the output of A. Thus Ω = W×-1,0,1,…,m where W= R^m× t^*. To ease the proof below, we slightly change A's output: if the original output is x∈-1,0,…,m, we instead output a uniform real in [x,x+1). Therefore, we can let Ω=W× X where W= R^m× t^* and X= R. The benefit of doing so is that we can let F be the Borel sets in Ω which is convenient to work with. Clearly it is sufficient to establish lower bounds for the algorithms after the change. For any instance H^(m), let P_H^(m) be the measure of outcomes of A in t^* rounds with input instance H^(m) and p_H^(m) be the corresponding probability density function (PDF). Then P_N_0 and P_N_mix are two probability measures on (Ω,F) and p_N_mix(ω)=1/m∑_j∈[m]p_N_j(ω) for any ω=(w,x)∈Ω= R^m× t^*+1. We also let p^W_H^(m) be the PDF of the samples during the first t^* rounds according to the input H^(m) and p^X_H^(m) be the PDF of A's output. Furthermore, we let p^X|W_H^(m) to be the conditional density function of X given W. By definition, we have p^X|W_H^(m)(x|w) = p_H^(m)(ω)/p^W_H^(m)(w). P_N_mix,P_N_0≤logm-1+exp^2t^*/σ^2/m. For any ω=(w,x) ∈Ω, let w_j,t denote the (j,t)^th entry of the matrix w for every j∈[m] and t∈[t^*]. That is, w_j,t=ℓ^(t)(j), which is the loss of arm j in the t-th round. Then for each i∈[m], p^W_N_i(w)=2πσ^2^-mt^*/2exp-∑_t∈ [t^*](w_i,t+)^2+∑_j≠ iw_j,t^2/2σ^2 and p^W_N_0(w)=2πσ^2^-mt^*/2exp-∑_t∈ [t^*],j∈[m]w_j,t^2/2σ^2. Therefore we have p_N_i(ω)/p_N_0(ω)= p^W_N_i(w)/p^W_N_0(w) =2πσ^2^-mt^*/2exp-∑_t∈ [t^*](w_i,t+)^2+∑_j≠ iw_j,t^2/2σ^2/2πσ^2^-mt^*/2exp-∑_t∈ [t^*],j∈[m]w_j,t^2/2σ^2 =exp-^2t^*+2∑_t∈[t^*]w_i,t/2σ^2. From Jensen's inequality, we have P_N_mix,P_N_0 = ∫_Ωlogp_N_mix(ω)/p_N_0(ω)P̣_̣Ṇ_̣ṃịx̣(̣ω̣)̣≤log∫_Ωp_N_mix(ω)/p_N_0(ω)P̣_̣Ṇ_̣ṃịx̣(̣ω̣)̣ = log∫_Ω1/m∑_j∈[m]p_N_j(ω)1/m∑_i∈[m]p_N_j(ω)/p_N_0(ω)ω̣. Note that for ω=(w,x), For i,j∈[m] and i≠ j, ∫_Ωp_N_i(ω)p_N_j(ω)/p_N_0(ω)ω̣ =∫_W∫_Xp^W_N_i(w)·p^X|W_N_i(x|w) p^W_N_j(w)/p^W_N_0(w) x w = ∫_Wp^W_N_i(w) p^W_N_j(w)/p^W_N_0(w) w = 2πσ^2^-mt^*/2·∫_Ωexp-∑_t∈[t^*](w_i,t+)^2+(w_j,t+)^2+∑_j'≠ i j'≠ jw_j',t^2/2σ^2ẉ = 1. For i∈[m], ∫_Ωp_N_i(ω)p_N_i(ω)/p_N_0(ω)ω̣ = ∫_W∫_Xp^W_N_i(w)·p^X|W_N_i(x|w) p^W_N_i(w)/p^W_N_0(w) x w = ∫_Wp^W_N_i(w) p^W_N_i(w)/p^W_N_0(w) w = 2πσ^2^-mt^*/2·∫_Ωexp -∑_t∈[t^*](w_i,t+2)^2+∑_j^'≠ iw_j',t^2-2^2t^*/2σ^2ẉ = exp^2t^*/σ^2. Therefore, combining the equations above, we get ∫_Ω1/m∑_j∈[m]p_N_j(ω)1/m∑_i∈[m]p_N_i(ω)/p_N_0(ω)ω̣ = 1/m^2∑_i,j∈[m]∫_Ωp_N_i(ω)p_N_j(ω)/p_N_0(ω)ω̣ = m(m-1) + m·exp^2t^*/σ^2/m^2 = m-1+exp^2t^*/σ^2/m, where the first equality follows from Fubini's theorem. This indicates that P_N_mix,P_N_0≤logm-1+exp^2t^*/σ^2/m. Let t^*=c_0log (m+1)/^2, where c_0≤σ^2 is a universal constant. We have the following lemma to bound [N_0]T≥ t^*. Here the randomness comes from the algorithm and environment when the input instance is N_0. For any algorithm distinguishing N_j_j∈[m]∪0 with probability 0.925, we have [N_0]T≥ t^*≥ 0.1. Let A be an algorithm that can distinguish N_j_j∈[m]∪0 with probability 0.925. Let E be the event that A terminates within t^* rounds and gives answer N_0. Recall that T is a random variable which represents the rounds that A runs. Assume [N_0]T≥ t^*< 0.1. Then we have [N_0] E< 0.075+0.1 from the union bound. 
Combining <Ref> and <Ref>, we get [N_mix]E≥m/2m-1+exp^2t^*/σ^2-[N_0] E> m/2m-1+m+1-0.1-0.075 ≥ 0.075 for every m≥ 1. This indicates the existence of some j∈[m] such that [N_j]E>0.075, which is in contradiction to the promised success probability of A. Therefore A satisfies [N_0]T≥ t^*≥ 0.1. §.§.§ From Gaussian to Bernoulli We then show a reduction from Gaussian arms to Bernoulli arms which implies lower bounds for instances H_j^(m)_j∈[m]∪0. Given an input instance from N_j_j∈[m]∪0, we can map it to a corresponding instance among H_j^(m)_j∈[m]∪0 by the following rules. In each round, if an arm receives a loss ℓ∈ R, let ℓ= 0, ℓ<0; 1, ℓ≥ 0. Obviously, losses drawn from Gaussian distribution N(0, σ^2) are mapped to Ber1/2 losses. For a biased Gaussian N-,σ^2, as <Ref> shows, it holds that ℓ< 0 =∫_-∞^-1/√(2π)σe^-(x+)^2/2σ^2x + ∫_-^01/√(2π)σe^-(x+)^2/2σ^2x =1/2+∫_-^01/√(2π)σe^-(x+)^2/2σ^2x. Let f(σ)=∫_-^01/√(2π)σe^-(x+)^2/2σ^2x denote the shadowed area in <Ref>. Note that f is continuous with regard to σ and f(σ)∈/√(2π)σe^-^2/2σ^2, /√(2π)σ. Assume that <1/8. Therefore, there exists σ_0∈1/2√(2π),1/√(2π) such that f(σ_0)=. Choose σ=σ_0. Then we map N(-,σ^2) to Ber1/2- and transform the sample space from R^m× t^* to 0,1^m× t^*. Let be a number in 0,1/8. For any algorithm distinguishing H^(m)_j_j∈[m]∪0 with probability 0.925, we have [H^(m)_0]T≥ t^*≥ 0.1. Assume that there exists such an algorithm A with [H^(m)_0]T≥ t^*< 0.1. We then construct an algorithm A' to distinguish N_j_j∈[m]∪0. The algorithm A' proceeds as follows: When A' receives a loss ℓ, it first calculates ℓ as <Ref> and treats ℓ as the loss to apply A. If A outputs H^(m)_j, A' output N_j. Therefore, A' also succeeds with probability 0.925 while satisfying [N_0]T≥ t^*< 0.1. This violates <Ref>. We remark that we cannot replace H^(m)_0 by H^(m)_j for any j∈ [m] in <Ref>, since an “H^(m)_j favourite” algorithm exists for every j∈ [m]. For example, an “H^(m)_1 favourite” algorithm is as follows: one first sample the arms for 2log1/0.03/^2 rounds. If the empirical mean p_1< 1/2-/2, terminate and output H^(m)_1. Otherwise apply an algorithm which can distinguish H_j^(m)_j∈[m]∪0 with probability 0.96. By the Hoeffding's inequality, the error probability in the first stage is at most 0.03. Therefore, this “H^(m)_1 favourite” algorithm has success probability 0.925 and with high probability, it only needs to play 2log1/0.03/^2 rounds when the input instance is H^(m)_1. Then we are ready to prove <Ref>, which is a direct corollary of the following lemma. Let be a number in 0,1/8 and assume m≥ 2. There exists a constant c_1>0 such that for any algorithm A which can output an -optimal arm on any instance among H^(m)_j_j∈[m]∪0 with probability at least 0.95, we have [H^(m)_0]T≥c_1log (m+1)/^2. We first consider the case c_0log (m+1)> 4log 40 where c_0 is the universal constant in the definition of t^*. We reduce from the hypothesis testing lower bound in <Ref>. Assume A satisfying [H^(m)_0]T≥c_0log (m+1)/2^2< 0.1. Then we construct an algorithm A' to distinguish H^(m)_j_j∈[m]∪0. Given an instance among H^(m)_j_j∈[m]∪0, we first apply A to get an output arm i. Then we sample 2log1/0.025/^2 rounds and check whether the empirical mean p_i≤1/2-/2. If so, output H^(m)_i. Otherwise, output H^(m)_0. The success probability of at least 0.925 is guaranteed by Hoeffding's inequality and the union bound. According to our assumption, with probability larger than 0.9, A' terminates in c_0log (m+1)/2^2 + 2log1/0.025/^2< c_0log (m+1)/^2 rounds. 
This violates <Ref>. Then we consider the case c_0log (m+1)≤ 4log 40; that is, when m is bounded by some constant. It then follows from <Ref> that A satisfies [H^(m)_0]T≥c_s/^2≥ 0.1 for a universal constant c_s when m≥ 2. Then choosing c_1=minc_0/20, c_s/10log (m_0+1) where m_0=⌊ e^4log 40/c_0-1 ⌋, we have [H^(m)_0]T≥c_1log (m+1)/^2 for any algorithms that can output an -optimal arm on any instance among H^(m)_j_j∈[m]∪0 with probability at least 0.95 when m≥ 2. §.§ The Lower Bound for  - Recall that in m-, the N arms are partitioned into K groups with size m_1,m_2,…, m_K respectively. Each pull of an arm results in an observation of all the arms in its group. Consider an m-instance H^m_0 which consists of all fair coins. Recall that we use T^(k) to denote the number of rounds in which the pulled arm belongs to the k-th group. We then prove the following lemma, which indicates the result of <Ref> directly. Let be a number in 0,1/8. For every ,0.05-PAC algorithm of m-, we have [H^m_0]T^(k)≥c_1log (m_k+1)/^2 for every k∈[K] with m_k≥ 2 and [H^m_0]T≥∑_k=1^K c_1log (m_k+1)/2^2 if the total number of arms ∑_k=1^K m_k≥ 2, where c_1 is the constant in <Ref>. Moreover, these lower bounds still hold even the algorithm can identify the -optimal arm with probability 0.95 only when the input arms have losses drawn from either Ber1/2 or Ber1/2-. We only prove the latter case which is stronger. Let H be the set of all m-instances where the input arms have losses drawn from either Ber1/2 or Ber1/2-. Let A be an algorithm that identifies the -optimal arm with probability 0.95 when the input instance is in H. Assume A satisfies [H^m_0]T^(k)<c_1log (m_k+1)/^2 for some k∈[K]. In the following, we construct an algorithm A' to find an -optimal arm given instances in H^(m_k)_j_j∈[m]∪0. Given any (m_k)-instance H^(m_k)∈H^(m_k)_j_j∈[m]∪0 , we construct an m-instance: set H^(m_k) to be the k-th group and all remaining arms are fair ones. Then we apply A on this instance. The output of A' is as follows: A'= j, A (k,j); , . Clearly, the correct probability of A' is at least 0.95. However, A' satisfies [H_0^(m_k)]T< c_1log (m_k+1)/^2, which violates <Ref>. Therefore, we have [H^m_0]T^(k)≥c_1log (m_k+1)/^2 for every k∈[K] with m_k≥ 2 and thus have proved [H^m_0]T≥∑_k=1^K c_1log (m_k+1)/^2 as long as each m_k≥ 2. For those groups of size one, we can pair and merge them so that each group contains at least two arms (in case there are odd number of singleton groups, we merge the remaining one to any other groups). Notice that this operation only makes the problem easier (since one can observe more arms in each round) and only affects the lower bound by a factor of at most 2. Therefore, we still have [H^m_0]T≥∑_k=1^K c_1log (m_k+1)/2^2. § REGRET LOWER BOUNDS In this section we prove lower bounds for minimax regrets in various settings. All lower bounds for regrets in the section are based on the lower bounds for m-established in <Ref>. §.§ Regret Lower Bound for  - Let us fix m=(m_1,…,m_K). We then derive a regret lower bound for m-MAB and thus prove <Ref>. Let T be the time horizon and c_1 be the constant in <Ref>. Consider a set of m-BAI instances where each arm has losses drawn from either Ber1/2 or Ber1/2- where =√(c_1∑_k=1^K log (m_k+1)/8T). Denote this set by H. For any algorithm A of (m_1,…,m_k)-, for any sufficiently large T>0, there exists H∈H such that the expected regret of A satisfies [H]R(T)≥ c'·√(T·∑_k=1^K log (m_k+1)) where c'>0 is a universal constant. 
Here the expectation is taken over the randomness of losses which are drawn from H independently in each round. Assume A satisfies [H]R(T) < √(T·1/2∑_k=1^K c_1log (m_k+1) )/5000 for every H∈H where c_1 is the constant in <Ref>. <Ref> shows that A implies an algorithm to identify the -optimal arm for m-instances in H with probability 0.95 which terminates in c_1·∑_k=1^K log (m_k+1)/8^2 rounds. We can assume <1/8 since T is sufficiently large. However, according to <Ref>, for any such algorithms, there exists some instances in H that need at least c_1∑_k=1^K log (m_k+1)/2^2 rounds. This violates <Ref> and thus indicates a regret lower bound of Ω√(T·∑_k=1^K log (m_k+1)). <Ref> is a direct corollary of <Ref>. §.§ Regret Lower Bounds for Strongly Observable Graphs Let G=(V,E) be a strongly observable graph with a self-loop on each vertex. Let N=V. Assume that there exist K disjoint sets S_1,… ,S_K⊆ V such that there is no edge between S_i and S_j for any i j. For every k∈ [K], let m_k=S_k. Let S=⋃_k∈[K] S_k. We present a reduction from m-to bandit with feedback graph G where m=(m_1,…,m_K). Let A be an algorithm for bandit with feedback graph G. Consider a set of instances where the loss of each arm is drawn from either Ber1/2 or Ber1/2- where =√(c_1∑_k=1^K log (m_k+1)/8T) (here c_1 is the constant in <Ref>). Denote this set by H. When we say the input of is an instance in H, we mean that the loss sequence is drawn from this instance independently in each round. Then we design an algorithm A' for m-to deal with instances in H as follows. For an m-instance H^m in H, we construct a bandit instance with feedback graph G: the losses of arms in S_k correspond to the losses of arms in the k-th group of H^m in the m-game and the losses of arms in V∖ S are always equal to 1. The algorithm A' actually makes decisions according to A. If A pulls an arm in S, A' pulls the corresponding arm in the m-game. Otherwise, when A requests to pull an arm A_t∈ V∖ S, we replace this action by letting A' pull the first arm in each group once and then feed the information that A_t should have observed back to A (Note that all arms outside S have fixed loss 1). We force A' to terminate after pulling exactly T arms. Note that ≪1/K since T is sufficiently large. If we use R(T) and R'(T) to denote the regret of A and A' respectively, then by our choice of , we have R(T)≥R'(T) where the expectation is taken over the randomness of loss sequences specified above. <Ref> shows that there exists H∈H such that [H]R'(T)≥ c'√(T·∑_k=1^K log (m_k+1)) Therefore, there exist some loss sequences on which A needs to suffer a regret of Ω√(T·∑_k=1^K log (m_k+1)). Although we assume each vertex has a self-loop in <Ref>, it is easy to verify that this result also holds for strongly observable graphs which contain some vertices without self-loops, as long as we can find legal S_k_k∈[K]. For example, for the loopless clique, we can also apply <Ref> with K=1 and S_1=V. It gives a minimax regret lower bound of Ω√(Tlog N), which matches the previous best upper bound in <cit.>. <Ref> gives a general regret lower bound for bandit with arbitrary feedback graphs. Intuitively, it allows us to partition the graph and consider the hardness of each single part respectively. For example, consider the graph shown in <Ref>: The feedback graph is the disjoint union of K_1 cliques and K_2=K-K_1 cycles where each clique contains m_1 vertices and each cycle contains m_2 vertices. 
Note that the clique cover of this graph contains K_1 cliques of size m_1 and ⌈K_2m_2/2⌉ cliques of constant size. According to <Ref>, our Algorithm <ref> gives a regret upper bound of O√(TK_1log m_1 + K_2m_2), which matches the lower bound given in <Ref>. The previous best lower bound (<cit.>) on this feedback graph is Ω√(K_1+K_2m_2T). When K_1 and m_1 are large, our result wins by a factor of Θ√(log m_1). §.§ Regret Lower Bounds for Weakly Observable Graphs Let G=(V,E) be a weakly observable graph. Assume that V can be partitioned into K disjoint sets V=V_1∪ V_2∪⋯∪ V_K and each G[V_k] contains a t_k-packing independent set S_k such that every vertex in S_k does not have a self-loop. Assume there are no edges from V_j to S_i for any i≠ j. Let m_k=S_k and S=⋃_k∈[K]S_k. Without loss of generality, we assume in the following proof that each m_k≥ 2. When there exists some m_k=1, we can pair and merge them into new sets of size at least 2 (in case there are odd number of singleton sets, we merge the remaining one to any other sets). This merging process only affects the result by at most a constant factor. Let m=(m_1,…,m_K). Our proof idea is to embed a certain m'-instance in G so that the lower bound follows from the lower bound of m'-. Let ξ_k=maxc_1log (m_k+1), c_2m_k/t_k for every k∈ [K] where c_1>0 is the constant in <Ref> and c_2=c_1log 3/4. Assume there exists an algorithm A such that R(T)<1/2· 1250^2/3∑_k=1^K ξ_k^1/3· T^2/3 for every loss sequence. We will construct an m'-game for some m'=m'_1,m'_2,…, m'_K' and reduce this game to the bandit problem with feedback graph G. The vector m' is obtained from m in the following ways. For every k∈ [K], we distinguish between two cases: * Case 1: if c_1log (m_k+1) ≥c_2m_k/t_k, we let the arms in S_k form a group in the m'-instance; * Case 2: if c_1log (m_k+1) < c_2m_k/t_k, we divide S_k into ⌊m_k/2⌋ small sets, each with size at least two. Each small set becomes a group in the m'-instance. In other words, each group in the m'-instance is either one of S_k (Case 1) or is a subset of a certain S_k (Case 2). Given an m'-instance and time horizon T>0, we now define the loss sequence for bandit with feedback graph G: the losses of arms in S in each round are sampled from the distribution of the corresponding arm in the m'-instance independently, and the losses of arms in V∖ S are always equal to 1. We then design an algorithm A' for the m'-game by simulating A on this graph bandit problem. If A pulls an arm in V∖ S and observes arms in S_k, we again consider two cases: * Case 1: if c_1log (m_k+1) ≥c_2m_k/t_k, we let A' pull an arbitrary arm in the corresponding group m'-instance; * Case 2: if c_1log (m_k+1) < c_2m_k/t_k, for each arm in S_k that will be observed, A' pulls the corresponding arm in the m'-instance once. Otherwise if A pulls an arm in S, A' does nothing and just skips this round. Note that A' can always observe more information about the feedback of arms in S than A. So A' can well simulate A just by feeding the information it observed to A and making decisions according to the behavior of A as described above. Let T_i be the number of times that arm i has been pulled by A. At the end of the game, A' samples an arm in V according to the distribution T_1/T, T_2/T,…, T_N/T. If the sampled arm is in V∖ S, A' outputs a random arm. Otherwise A' outputs the sampled arm. Choose =1250^1/3∑_k=1^K ξ_k/T^1/3. We can verify that A' is an , 0.05-PAC algorithm through an argument similar to the one in our proof of <Ref>. 
Let T^(k) be the number of times that the arms in group k have been pulled by A' in the m'-game. According to <Ref>, for each k∈[K'], [H^m'_0]T^(k)≥c_1log (m'_k+1)/^2, where H^m'_0 is the m'-instance with all fair coins. Let I_0 denote the graph bandit instance constructed from above rules based on H^m'_0. Recall that one pull of A corresponds to at most t_k pulls of A' in Case 2. Therefore, when the input is I_0, A must pull the arms in V_k∖ S_k for at least c_1⌊m_k/2⌋log 3/t_k^2≥c_2m_k/t_k^2 times if k is in Case 2 and at least c_1log (m_k+1)/^2 times if k is in Case 1. In other words, A must pull the arms in V_k∖ S_k for at least ξ_k/^2 times for every k∈ [K]. Plugging in our choice of , A needs to pull the arms in V∖ S for more than 1/1250^2/3·∑_k=1^K ξ_k^1/3 T^2/3 times in total on I_0. These pulls contribute a regret of at least 1/2· 1250^2/3∑_k=1^K ξ_k^1/3· T^2/3, which contradicts the assumption in <Ref>. Therefore, there exists some loss sequences such that A satisfies R(T)=ΩT^2/3·∑_k=1^Kmaxlogm_k, m_k/t_k^1/3. <Ref> confirms a conjecture in <cit.>. It can also generalize the previous lower bound for weakly observable graphs ΩT^2/3logS,S/t^1/3 in <cit.> by applying <Ref> with K=1 and V_1=V where S⊆ V is a t-packing independent set of G. As consequences, <Ref> provides tight lower bounds for several feedback graphs. For example, when G is the disjoint union of K complete bipartite graphs of size m_1,m_2,…, m_K respectively, it implies a lower bound of Ω∑_k∈[K]log m_k^1/3T^2/3, which matches the upper bound in <cit.>. alpha § LOWER BOUND FOR  -BAI WITH BOUNDED   In this section, we will lower bound the number of pulls in ,0.05-PAC algorithms of (m)-BAI when m is bounded by a constant. To this end, we first prove a likelihood lemma in <Ref>. §.§ Likelihood Lemma Consider two instances H_a and H_b which only differ at one arm (without loss of generality, assume it is the first arm). In H_a, ℓ(1) is drawn from Ber1/2 and in H_b, ℓ(1) is drawn from Ber1/2- where ∈0,1/2 is a fixed number. Let A be a PAC algorithm for . Let K_j^t=∑_r=1^t ℓ^(r)(j) be the accumulative loss of arm j before the (t+1)-th round and abbreviate K_j^N_j as K_j. Let A_j be the event that N_j<t̂ for a fixed t̂∈ N. Let C^a_j be the event that max_1≤ t≤t̂K_j^t-1/2t < √(t̂· c^2t̂) and C_j^b be the event max_1≤ t≤t̂K_j^t-1/2-t < √(t̂· c^2t̂) where c is a positive constant. If 0≤ x≤1/√(2) and y>0, then (1-x)^y≥ e^-dxy where d=1.78. Let S^a=A_1∩ B∩ C_1^a and S^b=A_1∩ B∩ C_1^b where B is an arbitrary event. Then we have [H_b]S^a≥ e^-8(1+√(c))^2t̂[H_a]S^a and [H_a]S^b≥ e^- 8(1+√(c))^2t̂[H_b]S^b We first prove <Ref>. For each ω∈ S^a (ω is a history of the algorithm, including the behavior of the algorithm and observed result in each round), we have [H_b]ω/[H_a]ω =1/2-^K_11/2+^N_1-K_1/1/2^N_1=1-2^K_11+2^N_1-K_1 =1-4^2^N_1-K_11-2^2K_1-N_1≥1-4^2^N_11-2^2K_1-N_1. From <Ref> and the definition of S^a, we have 1-4^2^N_1≥1-4^2^t̂≥ e^-8^2t̂ and 1-2^2K_1-N_1≥1-2^2√(t̂· c^2t̂)≥ e^-8√(c)^2t̂. Therefore [H_b]ω/[H_a]ω≥ e^-8(1+√(c))^2t̂ and thus [H_b]S^a≥∑_ω∈ S^a[H_b]ω/[H_a]ω·[H_a]ω≥ e^-8(1+√(c))^2t̂[H_a]S^a. The proof of <Ref> is similar. §.§ Lower Bound for  -with Constant   There exists a constant c_s such that for any algorithm A which can output an -optimal arm on any instance among H^(m)_j_j∈[m]∪0 with probability at least 0.95 when m≥ 2 and c_0log (m+1)≤ 4log 40, we have [H^(m)_0]T≥c_s/^2≥ 0.1. Note that there must exist j∈[m] such that [H^(m)_0]A j≤1/m. Let B be the event that the algorithm output any arm except for arm j. 
Apply <Ref> with t̂ = log 3/100^2, c=100, H_b = H^(m)_j and H_a = H^(m)_0. Assume that [H^(m)_0]T≥t̂< 0.1. By the Kolmogorov's inequality, we have [H^(m)_0]max_1≤ t≤t̂K_j^t-1/2t < √(t̂· c^2t̂)≥ 1- 0.25. Therefore, we have [H^(m)_0]S^a≥ 0.9-1/m-0.25≥ 0.15 by the union bound. Then from <Ref>, we have [H^(m)_j]B≥ e^-8(1+√(c))·log 3/100·[H^(m)_0]S^a> 0.15·1/3=0.05. However, this is in contradiction with the success probability of A. Therefore, letting c_s=log 3/100, we have [H^(m)_0]T≥c_s/^2≥ 0.1.
http://arxiv.org/abs/2307.04012v1
20230708164551
Learning Together: Towards foundational models for machine learning interatomic potentials with meta-learning
[ "Alice E. A. Allen", "Nicholas Lubbers", "Sakib Matin", "Justin Smith", "Richard Messerly", "Sergei Tretiak", "Kipton Barros" ]
physics.chem-ph
[ "physics.chem-ph", "physics.comp-ph" ]
Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States Nvidia Corporation, Santa Clara, CA 9505, United States Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States Center for Integrated Nanotechnologies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, United States ]Learning Together: Towards foundational models for machine learning interatomic potentials with meta-learning The development of machine learning models has led to an abundance of datasets containing quantum mechanical (QM) calculations for molecular and material systems. However, traditional training methods for machine learning models are unable to leverage the plethora of data available as they require that each dataset be generated using the same QM method. Taking machine learning interatomic potentials (MLIPs) as an example, we show that meta-learning techniques, a recent advancement from the machine learning community, can be used to fit multiple levels of QM theory in the same training process. Meta-learning changes the training procedure to learn a representation that can be easily re-trained to new tasks with small amounts of data. We then demonstrate that meta-learning enables simultaneously training to multiple large organic molecule datasets. As a proof of concept, we examine the performance of a MLIP refit to a small drug-like molecule and show that pre-training potentials to multiple levels of theory with meta-learning improves performance. This difference in performance can be seen both in the reduced error and in the improved smoothness of the potential energy surface produced. We therefore show that meta-learning can utilize existing datasets with inconsistent QM levels of theory to produce models that are better at specializing to new datasets. This opens new routes for creating pre-trained, foundational models for interatomic potentials. [ Kipton Barros August 12, 2023 =================== § INTRODUCTION Machine learning is fundamentally changing and expanding our capabilities for modeling chemical and materials systems <cit.>. A growing array of properties have been successfully predicted with machine learning models from materials' band gaps and formation energies to molecular energies and bond orders <cit.>. The development of machine learning models for various applications has involved the creation of a large number of datasets containing quantum-mechanical calculations at different fidelities (levels of theory) <cit.>. However, incorporating this multi-fidelity information into machine learning models remains challenging. 
In this work, we show that multiple datasets can be used to fit a machine learning model, even if the datasets were calculated with many varying QM levels of theory. To overcome this challenge, we incorporate meta-learning techniques into the training process and subsequently demonstrate improvements in accuracy for multiple applications. The aim of meta-learning is to use a wide collection of data to train a machine learning model that can then be easily re-trained to specialized tasks and we demonstrate the applicability of the meta-learning method to MLIPs. In the landscape of broader efforts to incorporate machine learning and molecular and material modelling, a particular attention has been paid to MLIPs <cit.>. Accurate atomistic simulations rely on interatomic potentials that closely recreate the interactions present between atoms and molecules <cit.>. Recreating these interactions involves a trade-off between accuracy and computational cost, with quantum mechanical techniques offering highly accurate simulations whilst classical force fields are fast and capable of modelling much larger systems over long timescales <cit.>. Within the last decade, MLIPs have increasingly been seen as a method that could provide a model that is both fast and accurate <cit.>. However, the development of MLIPs that are transferable to unseen organic molecules requires datasets that cover a large fraction of chemical space. This requirement has lead to the production of numerous datasets <cit.>. These datasets contain the quantum mechanical (QM) energies and forces of millions of structures spanning large regions of chemical space. However, the QM methods used to calculate the energies and forces vary considerably. As different QM methods result in different potential energy surfaces, this inconsistency in QM techniques limits the extent that datasets can used together to fit potentials. Numerous organic molecule datasets have been created for training MLIPs <cit.>. However, a consensus on the best QM techniques to employ to create these datasets has never been reached as a compromise between accuracy and computational cost must always be considered when performing QM calculations. This lack of consensus has led to a variety of different software, methods, basis sets and exchange-correlation functionals being used. For example, the QM7-x and ANI-1x datasets both contain energies and forces for millions of small organic molecules. However, QM7-x was calculated using the PBE0 exchange-correlation functional with many body dispersion whilst ANI-1x was calculated with the ωB97x functional and 6-31G* basis set <cit.> and does not include dispersion effects. Therefore, these two datasets describe similar, but slightly different potential energy surfaces. If both datasets were joined together to train a potential then problems would likely arise as contradictory information is present. For example, identical structures at different levels of theory can have different energy and forces. Whilst datasets from different sources have been fit together without further refinement <cit.>, this approach does not account for differences in the interactions described. Techniques exist in the machine learning literature to address the difference in the potential energy surface. Previous work on fitting MLIPs to multiple datasets is limited. In Ref. 
, a transferable molecular potential was first trained to ∼ 5 million density functional theory (DFT) training points before being refit, with frozen parameters, to 0.5 million CCSD(T)* energies. This technique, known as transfer learning has been used in several works <cit.>. The advantage of using transfer learning for training MLIPs is that it requires fewer calculations at a higher, and more expensive, level of theory. However, this kind of transfer learning technique, freezing neural network (NN) parameters, is limited to just two datasets. If we want to use multiple existing datasets, and expand the size and variety of training data, then new methods must be found. Fortunately, this problem is being explored in a branch of machine learning research known as meta-learning <cit.>. Meta-learning seeks to build a model that, although not specialized to any particular task, can be quickly re-trained to many new tasks - where a task is a specific learning problem. Furthermore, this retraining can be effective even if the amount of new data is limited <cit.>. For transferable MLIPs, the concepts of tasks naturally lends itself to quantum mechanical datasets calculated with different methods. By using meta-learning techniques, we will show how information from multiple levels of theory can be incorporated together. We begin by investigating training data with multiple levels of theory for an individual aspirin molecule and for the QM9 dataset (which contains over 100,000 molecules in their equilibrium configuration). With these systems, the problems associated with naively combining datasets together are seen and the benefits of meta-learning are clearly observed in the test set errors. We then move on to combining several large molecule datasets to pre-train an MLIP. Combining large organic datasets to fit MLIPs has never previously been attempted. Subsets, chosen using active learning, of six existing datasets (ANI-1x, GEOM, QMugs, QM7-x, Transition-1x and the QM9 dataset from Ref. ) were used to fit an adaptable potential using meta-learning – see Fig. <ref> for a visualization of the space the datasets cover <cit.>. Figure <ref> demonstrates the increase in chemical space possible when multiple datasets are combined together. The benefits of pre-training are then shown by retraining to the 3BPA molecule and testing various properties. These tests show that pre-training models using meta-learning produces a more accurate and smoother potential. The benefits of pre-training include enhanced accuracy and generalization capabilities in modeling interatomic potentials. Training machine learning models to large amounts of data before re-training to a specific task is related to the concept of foundational models <cit.>. This concept has been used to create large language models, ie. GPT-4, which have been pre-trained to extremely large datasets before being fine-tuned to specific tasks, i.e. ChatGPT which is fine-tuned for conversational usage <cit.>. Creating foundational models allows a wide range of information to be encoded before specialisation. With meta-learning techniques, we can now pre-train interatomic potentials to numerous large datasets and this is a step towards foundational models for MLIPs – MLIPs that could be quickly re-trained to diverse molecular systems. The number of QM datasets has grown rapidly over the last few years. However, a major bottleneck in exploiting this information has been the absence of methods that can effectively combine all of this information. 
In this work, we have overcome this limitation by exploiting techniques which enable the incorporation of datasets with different fidelities. Whilst we focus on MLIPs, these techniques are applicable to the wide range of predictive models that exist for material and molecular property prediction. By showing how meta-learning can be applied, we aim to encourage researchers to fully utilize the vast amount of existing data that the scientific community has already collected. § METHODS §.§ Meta-Learning Algorithm Meta-learning is an area of machine learning concerned with improving the learning process to produce models that can easily adapt to new problems <cit.>. A key component of meta-learning is the concept of different `tasks'. Tasks are datasets with similar properties but slight differences. For example, if we were interested in animal classification of a cat and a dog, a similar task might be to classify a lion and a bear. The task is not the same but we would expect fundamental similarities in the model needed to perform the classification. By using a meta-learning algorithm to learn multiple different tasks, less data will be required when a new learning problem is introduced. The objective of meta-learning algorithms is to train a model that can generalize more easily to new data<cit.>. We will use meta-learning to fit multiple different QM datasets with slightly different properties. To our knowledge, meta-learning for MLIPs has not been previously carried out, although it has been used in other areas of science <cit.>. The meta-learning algorithm we have chosen to fit multiple datasets for MLIPs is called Reptile <cit.>. Reptile works by repeatedly sampling a task (a dataset), performing a limited number of optimization steps on the task and then updating the weights of the machine learning model towards the new weights. Reptile was chosen over other meta-learning algorithms such as MAML <cit.> as Reptile is simpler to implement and therefore more likely to be adopted by the wider community. A comparison of methods such as MAML for interatomic potentials will therefore be left to future work. Reptile is described in Algorithm <ref> with a visual illustration also given. The algorithm works by separating the training data into distinct learning problems (tasks). An individual task is selected and multiple optimization steps are performed. The parameters of the model are then updated. A new task is then selected and the procedure is repeated multiple times. This moves the model to a region of parameter space where it can readily move between the different datasets present. Throughout this work, the k=1 result is used as comparison point. This is because when k=1 the algorithm becomes equivalent to stochastic gradient descent on the expected loss over all the training tasks <cit.>. This is referred to as joint training in Ref.  At k=1, the algorithm is not expected to account for differences in the QM theory but still uses all the information present from the datasets. §.§ Interatomic Potential In this work, we have used the NN architecture implemented in torchANI with the same structure as the ANI-1x model <cit.>. However, the meta-learning techniques described are not specific to this form of model and there is no reason that they could not be applied to other machine learning models that employ similar iterative solvers. The hyperparameters used for the ANI potential are the same as those used for previous training to the ANI-1x and ANI-1ccx datasets, see Ref.  for more details. 
§.§ Datasets §.§.§ Aspirin Aspirin structures were produced by molecular dynamics simulations at 300K, 600K and 900K. Density Functional based Tight Binding (DFTB) was used to perform the MD simulations and a total of 400 structures were created for each temperature. QM calculations of the energies and forces were then performed on these structures with three levels of theory: DFT with the ωB97x exchange-correlation functional and the 6-31G* basis set, DFT with the Becke, 3-parameter, Lee–Yang–Parr (B3LYP) exchange-correlation functional and the def2-TZVP basis set, and Hartree-Fock with the def2-SVP basis set, for the 300K, 600K and 900K structures respectively. These datasets were used to pre-train a molecular potential. The pre-trained potential was then refit to a new dataset of MD configurations at the Møller–Plesset (MP2) level of theory with the def2-SVP basis set (a more accurate level of theory). The training dataset for refitting used 400 MD configurations sampled at 300K, whilst the test set contained structures at 300K, 600K and 900K. A batch size of 8 was used for training. §.§.§ QM9 The QM9 dataset contains over 100,000 equilibrium structures for small organic molecules with up to 9 heavy atoms <cit.>. In Ref. , the QM9 dataset was recalculated with 76 different exchange-correlation functionals and 3 basis sets <cit.>. §.§.§ Multiple Organic Molecules Seven separate datasets were chosen to fit an organic molecule potential that could be easily re-trained to new data. The seven datasets used for meta-learning were chosen to cover both diverse regions of chemical space and multiple levels of theory – including the accurate recreation of dispersion effects. The chemical space covered included reactive paths and biologically and pharmacologically relevant structures. Whilst ANI-1x does cover a large number of conformations for organic molecules, it has limitations. This is demonstrated by Fig. <ref> and Fig. S1. Figure <ref> demonstrates how the additional datasets increase the size of the molecules and the range of energies included. The E_0 energy is calculated using a linear fit and then subtracted from each dataset (a minimal sketch of this fitting step is given below). The minimum energy for each dataset is then shifted to zero. Whilst it is not covered in this work, as we use the ANI potential, including larger molecules in datasets may become increasingly important for newer generations of interatomic potentials that include message passing and describe longer length scales <cit.>. Figure S1 shows the distribution of uncertainty for the ANI-1x potential across the dataset space. Whilst ANI-1x dz, ANI-1x tz, GEOM and QMugs have similar probability distributions, QM7-x and Transition-1x contain larger uncertainties. Transition-1x contains reactive structures that are not contained in the original dataset, and therefore higher uncertainties are expected. For QM7-x, there are also higher uncertainties, which may be due to the different sampling techniques used. A property that is not shown in Table 1 is the software used for the DFT calculations. Even when the same level of theory is used, we can expect different software packages to give slightly different results. This will cause further discrepancies between the datasets, as a variety of codes are employed. For example, although Transition-1x and ANI-1x are calculated at the same level of theory, Transition-1x is calculated with the ORCA program whilst ANI-1x is calculated with Gaussian <cit.>.
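The per-element reference energies E_0 mentioned above are commonly obtained with a linear least-squares fit of the total energies against the element counts of each structure. The numpy sketch below illustrates one such fit and the subsequent shift of the dataset minimum to zero; the function and variable names are illustrative, and the exact fitting procedure used for Fig. <ref> may differ.

```python
import numpy as np

def fit_and_subtract_e0(counts, energies):
    """Fit per-element reference energies E_0 by linear least squares and
    return energies with the E_0 contribution removed and the minimum at zero.

    counts:   (n_structures, n_elements) array of element counts per structure.
    energies: (n_structures,) array of total energies for one dataset.
    """
    e0, *_ = np.linalg.lstsq(counts, energies, rcond=None)  # counts @ e0 ~ energies
    shifted = energies - counts @ e0                         # remove per-element offsets
    return shifted - shifted.min(), e0                       # shift dataset minimum to zero

# Toy example: four structures with H, C, O counts and total energies (arbitrary units).
counts = np.array([[8.0, 9.0, 4.0], [6.0, 2.0, 1.0], [4.0, 1.0, 2.0], [2.0, 6.0, 0.0]])
energies = np.array([-640.2, -155.0, -190.3, -230.1])
print(fit_and_subtract_e0(counts, energies))
```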
The individual description and justification for including each dataset used is as follows: * QM9 - This dataset contains a diverse range of 76 functionals and 3 basis sets for small equilibrium organic molecules <cit.>. * ANI-1x - This is a large dataset of small (up to 8 heavy atoms) organic molecules generated with active learning methods <cit.>. * QMugs - This dataset includes the largest molecules, with up to 100 heavy atoms. It specializes in including drug-like molecules <cit.>. * GEOM - This is the largest dataset and contains both large molecules and drug-like molecules <cit.>. * QM7-x - This is also a large dataset of small (up to 7 heavy atoms) organic molecules, but has dispersion accurately described with many-body dispersion <cit.>. * Transition-1x - This dataset includes minimum energy paths for 12,000 reactions <cit.>. * ANI-1ccx - This dataset contains coupled cluster level theory calculations for a subset of the ANI-1x dataset <cit.>. Other datasets considered for inclusion include SPICE, PubChemQC-PM6 and Tensormol <cit.>. However, the existing datasets already cover a sufficient representation of chemical space. It is also worth noting that retraining to recreate the specific properties of the excluded datasets would also be quickly achievable with the meta-learning potential. §.§ Meta-learning Hyperparameter Optimization There are three parameters in the Reptile algorithm. These control the number of steps (k) taken at each optimization step, how the model parameters are updated (ϵ) from the task's individual NN parameters, and the maximum number of epochs used for retraining. The number of epochs was investigated to see whether restricting the training improved accuracy by ensuring the potential remained close to the meta-learned potential, or if longer retraining improved results. For a detailed discussion of the hyperparameters chosen when fitting to the seven separate datasets, see Section S1.2. The ϵ value used throughout this work is ϵ=1, whilst the k value is changed depending on the problem. The maximum number of epochs used for retraining for the meta-learning algorithm with k>1 is restricted to 150. §.§ Stages of Fitting for the Organic Molecule datasets In the first iteration, 100,000 structures were taken randomly from the ANI-1x, QMugs, GEOM, QM7-x and Transition-1x datasets. For QM9, 10,000 structures were used for each level of theory. This is restricted because 276 levels of theory exist, and each theory level samples different structures in the QM9 dataset. After the first iteration, the highest error structures were added to the next iteration <cit.>. The cutoffs used for adding structures are described in SI 1.6. This process was repeated 3 times. A diagram of the process is shown in Fig. S3. § RESULTS §.§ A Simple Case Study on Aspirin As the initial test case, we investigate the performance of meta-learning on a dataset containing a single aspirin molecule. Aspirin structures were produced by molecular dynamics simulations at 300K, 600K and 900K. The QM energies and forces were then calculated at three different levels of theory: two distinct DFT functionals, and Hartree-Fock. This created three different datasets, with each temperature corresponding to a different level of theory. These three datasets were used to pre-train a molecular potential to the energies and forces of 1,200 structures. The pre-trained potential was then refit to a new dataset of 400 MD configurations at the MP2 level of theory from the 300K simulation.
The change in the RMSE of the forces is shown as a function of the k value used in the meta-learning algorithm in Fig. <ref>. The k parameter controls the number of steps taken towards each dataset. As k is increased the speed of the algorithm also increases, and this is an additional consideration in choosing the optimal value. In the limit of k →∞ the algorithm would correspond to iterative training to each dataset and then transfer learning to a new task. However, while this may work for small problems, this approach is impractical for large datasets. Figure <ref> shows that as the k parameter is increased the error in the test set decreases, with the minimum error at around k=400. There is therefore an improvement in test set error in comparison to both no pre-training (5.35 ± 0.41 kcal/mol/Å) and k=1 (3.38 ± 0.16 kcal/mol/Å). Note that k=1 effectively corresponds to simultaneous training on all tasks. Therefore, when we attempt to combine multiple datasets at different levels of theory, an improvement in performance can be seen when meta-learning is incorporated into the training process. §.§ Meta-learning many levels of theory using QM9 Next, we move on to the QM9 dataset, which contains multiple different small organic molecules in their equilibrium structures. The QM9 dataset has been calculated at 228 different levels of theory and therefore provides an ideal dataset for analysing meta-learning techniques. We can use this dataset to test whether meta-learning can produce a potential which can be refit, with less data, to a new level of theory for the QM9 dataset. In order to do this, a subset of the QM9 dataset was used to train a potential to 10,000 molecules, 50 different exchange-correlation functionals and three different basis sets. The potential was then refit to a new exchange-correlation functional that had not been previously encountered, and the performance of this new model was assessed and compared to no pre-training and k=1 meta-learning. The test set error for the meta-learning potential refit to a new level of theory in the QM9 dataset is shown in Fig. <ref>. Pre-training the potential greatly improves the test set error in this case. In Fig. S9 a comparison between meta-learning and k=1 is shown, and we see that k=1 does not perform as well as k=10. This is because it does not account for the discrepancies between the levels of theory present. These results show that even when the number of levels of theory is relatively large, at 150, and multiple molecules are present, meta-learning improves the test set error over k=1. §.§ Making the most of scarce data at CCSD(T) level We will now move to the datasets used to train transferable interatomic potentials. As a starting example, we will look at pre-training to the multiple levels of theory (ωB97x/6-31G* and ωB97x/def2-TZVPP) contained in the ANI-1x dataset <cit.>. We will then retrain to the ANI-1ccx dataset <cit.>. Figure <ref> shows the distribution of error when pre-training to multiple levels of theory with meta-learning and k=1. The RMSE is 3.30 ± 0.10 kcal/mol and 2.39 ± 0.00 kcal/mol for k=1 and meta-learning respectively. Therefore, we can again see that meta-learning with a higher k value improves results compared to k=1. The comparative results for direct training to ωB97x/6-31G* and ωB97x/def2-TZVPP and then transfer learning to CCSD(T) are 2.20 ± 0.01 kcal/mol and 2.09 ± 0.02 kcal/mol respectively. Therefore, in this case fitting to multiple datasets does not improve results over fitting to just one.
This is in part because both datasets contain the same structures and cover the same chemical and configurational space. The potential trained to multiple organic datasets was also refit to the CCSD(T) dataset, and the benefits of meta-learning over k=1 were again seen, with errors of 2.89± and 3.32± respectively. However, this is notably higher than training to the ANI-1x dataset alone. The CCSD(T) dataset is a subset of the ANI-1x dataset and contains identical structures. For these cases, adding additional data in other areas of chemical space may not improve results. §.§ Training to multiple transferable organic molecule datasets Numerous datasets have been created that contain quantum mechanical calculations for organic molecules. However, as these datasets use different levels of theory and software, combining the information from different datasets requires advanced training techniques. By using meta-learning, a pre-trained model was created that uses information from seven different datasets. This is, to our knowledge, the first instance of combining information from multiple organic molecule datasets in this manner. We have already seen that meta-learning can improve results compared to k=1 when multiple datasets are used. We will now use the pre-trained model to explore the benefits of pre-training with meta-learning in comparison to no pre-training and k=1 when retraining to a single molecular system. The pre-trained model was re-trained to the 3BPA dataset taken from Ref. , and various properties were explored <cit.>. The first properties we analyze are the energy and force RMSE errors. The force errors for a dataset taken from MD at 1200K are shown in Fig. <ref>, with the energy and force learning curves for datasets at 300K, 600K and 1200K given in Fig. S4. From these graphs, the improved performance of pre-training using the meta-learning approach (with three passes through the dataset), compared to both k=1 and no pre-training, can be seen for energies and forces. Therefore, just by adapting the training scheme, with no change in the model architecture or the dataset itself, consistent improvements in accuracy can be seen with meta-learning. The importance of the training method used has previously been seen in Ref. . Here we see how it can improve performance when fitting multiple datasets together. In comparison to when the ANI-1x model is used for pre-training, meta-learning performs slightly better on force errors but slightly worse for energy predictions. Given that the ANI-1x model is fit to the same level of theory as the 3BPA dataset, the performance of the meta-learning potential is encouraging. However, it is known that RMSE errors alone are not enough to verify the performance of a potential <cit.>. We will therefore examine additional properties. The 3BPA molecule has three central dihedral angles, which are illustrated in Fig. <ref>. The energy scans along these dihedral angles are shown in Fig. <ref>, with the model refit to the energies and forces of just 62 3BPA conformations. When no pre-training is used, the surface at β=120 significantly overestimates the high energy point and lacks smoothness. A similar shape is seen for the k=1 potential. However, when meta-learning is used for pre-training, the surface remains noticeably smoother with significantly less overprediction. When k=1 is used, multiple different potential energy surfaces are combined in a nonphysical way, which destroys the smoothness of the underlying potential.
The error in the gradient of the 2D energy surface is shown in Fig. <ref> b) and emphasizes this difference in smoothness. When meta-learning is used, the contradiction in the potential energy surface described is corrected, resulting in a smoother model. When no pre-training or k=1 is used, an additional problem can occur, with the high energy regions at α=0 failing to be recreated for the β=180 and β=150 scans respectively. In contrast, the model pre-trained with meta-learning correctly recreates this behaviour. The results for ANI-1x pre-training are given in Fig. S6. One advantage of pre-training with multiple datasets over ANI-1x or QM7-x is that reactive systems can be added that are not contained in ANI-1x. To test whether this information has been effectively passed to the meta-learning potential, hydrogen bond dissociation for the 3BPA molecule was performed. There is no reactive information contained within the 3BPA training set, so this test relies entirely on the information contained in the pre-training. Figure <ref> shows the change in energy as a hydrogen is removed from the 3BPA molecule. The potential pre-trained with meta-learning recreates the expected smooth dissociation curve. In contrast, when no pre-training, k=1 or ANI-1x is used, the curve lacks smoothness and has an additional barrier present. In Fig. S7, the bond dissociation energy is shown when just 31 structures are used for retraining. Even in this low data limit, the smooth dissociation curves for the meta-learning potential remain. To demonstrate that this is not unique to 3BPA, the hydrogen bond dissociation for ethanol is shown in Fig. S8. Again, k=1 fails to recreate the expected smooth curve, whilst the meta-learning potential captures the correct shape. We have therefore shown how meta-learning can be used to combine multiple datasets, and the resulting improvements in the errors, torsion energy scans and bond dissociation. Joint fitting can improve on no pre-training. However, not accounting for the differences in the QM level of theory causes a reduction in performance that can be seen in the test set errors, the smoothness of the potential and the performance in extrapolation regions. § CONCLUSION The quantum mechanical properties of millions of molecular species and many materials systems have already been calculated and composed into extended datasets <cit.>. However, the varying levels of theory used to perform the QM calculations have previously prevented different datasets from being used together to make machine learning models, for example MLIPs. In this work, we have shown that meta-learning techniques can be used to jointly fit multiple datasets and demonstrated the improvement in performance that results from including a diverse selection of datasets. We show the wide applicability of meta-learning by creating MLIPs for a variety of systems, from a single aspirin molecule to the ANI-1ccx dataset. We show that multiple large organic molecule datasets (QM7-x, QMugs, ANI-1x, Transition-1x and GEOM) can be combined to pre-train a single model. The benefits of using a pre-trained model are then shown for the 3BPA molecule, with a more accurate and smoother potential produced. Meta-learning greatly expands the variety of fitting data available for MLIPs and establishes the possibility of creating readily pre-trained, foundational models for MLIPs. Pre-training machine learning models has been extensively discussed in the machine learning literature in recent years <cit.>.
Whilst pre-training has been carried out for MLIPs, its use has been limited to training from one dataset to another <cit.>. With techniques such as meta-learning, this pre-training does not need to be limited to one specific dataset but can include large numbers of existing datasets. In this work, we added only a single reactive dataset to pre-train a model. However, many different reactive datasets exist, and combining this large amount of information could help build general transferable potentials for reactions in both the condensed and gas phase without the need for millions of new QM calculations. Additionally, datasets have been created for many different combinations of elements. Meta-learning techniques could help build more transferable MLIPs over a wider range of elements with fewer calculations required. However, combining multiple datasets and training with meta-learning will not always improve results. This was seen with the CCSD(T) results, where fitting straight from ANI-1x to CCSD(T) resulted in the lowest error. Therefore, adding more data when there is a specific application in mind is not always the best approach, particularly if the additional data is far from the final application. For specific applications, transfer learning from one dataset to another may yield the best training and test set errors. However, if multiple datasets need to be incorporated together, or a general model is desired which can be specialized to multiple different tasks, meta-learning methods are preferable. With the techniques described in this work, multiple datasets can be fit at once. However, this advancement has exposed a more practical problem with the datasets currently published: there is no standard format for storing the information. Manual manipulation of datasets into a standard format is extremely time-consuming. The need for uniformity in the structure of the datasets produced is therefore becoming increasingly important. The growth of available datasets containing quantum mechanical information for molecular and material structures has given researchers unprecedented levels of QM information. However, combining data from multiple data sources is a major challenge. We have shown how meta-learning can be used to combine information from multiple datasets generated with varying levels of theory. This advancement changes the way that existing datasets should be viewed, and opens up new avenues for MLIP fitting. Beyond this, the results suggest that meta-learning can be seen as a general approach for combining training datasets for the broad array of chemical and materials processes where data science models can benefit. This work was supported by the United States Department of Energy (US DOE), Office of Science, Basic Energy Sciences, Chemical Sciences, Geosciences, and Biosciences Division under Triad National Security, LLC (‘Triad’) contract grant no. 89233218CNA000001 (FWP: LANLE3F2). A. E. A. Allen and S. Matin also acknowledge the Center for Nonlinear Studies. Computer time was provided by the CCS-7 Darwin cluster at LANL.
http://arxiv.org/abs/2307.04418v1
20230710084854
Quantum error correction beyond the toric code: dynamical systems meet encoding
[ "Garima Rajpoot", "Komal Kumari", "Sudhir Ranjan Jain" ]
quant-ph
[ "quant-ph" ]
[ * Received / Accepted ======================== We construct surface codes corresponding to genus greater than one in the context of quantum error correction. The architecture is inspired by the topology of invariant integral surfaces of certain non-integrable classical billiards. Corresponding to the fundamental domains of rhombus and square torus billiard, surface codes of genus two and five are presented here. There is significant improvement in encoding rates and code distance, in addition to immunity against noise. § FROM GEOMETRY TO ENCODING Geometrical representations of algebraic and arithmetic relations <cit.>, and, algebraic representations of geometrical patterns <cit.> are both fascinating themes. In their turns, they have led to a deep understanding in physics and mathematics <cit.>. A one-to-one correspondence between Lie groups and reflection groups whose fundamental regions are simplexes in Euclidean space has been beautifully illustrated in <cit.>. These fundamental regions generate tori for “unit shapes" like a square, equilateral triangle, right isosceles triangle, or a hemi-equilateral triangle <cit.>. Here we bring out an application of geometry of regular polytopes <cit.> to encoding theory in the context of quantum information. The dynamical systems which are most relevant to the present theme are planar billiards wherein a particle moves freely inside a two-dimensional enclosure, reflecting from the boundary in accordance to the Snell's law of reflection. According to the Liouville-Arnol'd theorem <cit.>, for a system with f degrees of freedom, if there are f functionally independent invariants which are in involution, the (invariant) surface on which the trajectory of the system resides is topologically equivalent to an f-torus. Another condition stipulated for the applicability of the Liouville-Arnol'd theorem is that the vector fields in phase space must be smooth everywhere. The integrability of such systems is a fragile property, so much so that even if the vector fields become singular at points of measure zero, the system loses integrability <cit.>. Perhaps the simplest example is when the shape of the enclosure is a square or a rectangle, explained later in some detail, where the invariant surface is a torus. However, an interesting situation arises by deforming the square to a rhombus with an acute angle π /n. The vector fields in phase space become singular at a set of points of measure zero. Corresponding invariant surface is topologically equivalent to a sphere with few handles, the number of handles is related to n. In this work, instead of a lattice of spins, we employ the lattice constructed by stacking fundamental domains in a plane. On this lattice, we show how to place qubits and set up a stabilizer code. Somewhat unrelated but of great significance, a connection between billiard and computation was first realized by Fredkin and Toffoli <cit.>. Although it gave us the Toffoli gate, the connection between topology of invariant surfaces in billiards and surface codes was not relevant for them and has been brought out recently <cit.>. § GENUS-2 CODE Computation requires scalability of logical qubits on planar chips. One way to achieve this is to use “unit shapes" which can fill the plane on successive reflections to encode the information on a surface. 
Our aim is to make use of the fundamental domains of certain geometrical structures, such as squares and rhombi, which upon successive reflections fill the whole plane while maintaining the planarity of the surface. This suitable arrangement allows one to make changes anywhere else in the circuit by only locally changing parameters, inadvertently leading to scalability. For example, if we consider a square tile, upon successive reflections about its sides, four copies form a unit of tessellation - the fundamental domain; identifying the pairs of parallel edges gives a torus, which is characterized by a topological invariant, the genus, being equal to one. Thus, the surface code corresponds to tori, and hence makes the well-known "toric code" <cit.>. The fundamental domain of a π/3-rhombus is another such structure, with genus equal to two, that can be tessellated over the whole surface. Here, we use this to design a new code on a surface of genus two. §.§ "Tessellation" with π/3-rhombus We introduce a new surface code using the fundamental domain equivalent to a genus-two surface (Fig. <ref>), constructed by stitching six copies of the π/3-rhombus. Upon identification of edges as shown in Fig. <ref>, it creates a "double-torus" <cit.>, which is equivalent to a sphere with two handles. This can be tessellated over the whole plane as shown in Fig. <ref>. Hence, encoding on this surface is termed the "genus-two code" or "double-toric code". As per Kitaev's idea, whereby increasing the genus gives a higher encoding capacity, the double-toric code achieves a significantly higher encoding rate compared to the surface code. §.§ Encoding on a plane Let us start with a unit structure of the genus-two code - constructed using n=6 data qubits (represented by circles) and m=4 ancilla qubits (represented by squares), shown in Fig. <ref>. The bold and dashed lines represent the control-X and control-Z operations, respectively, from the ancilla qubit to the data qubits. Stabilizers are operators which belong to the Pauli group and preserve the logical state, i.e. if the logical state is |Ψ⟩_L, then P_i|Ψ⟩_L=(+1)|Ψ⟩_L. The set of stabilizers for this code structure is P={X_1X_2X_3X_4, X_3X_4X_5X_6, Z_1Z_3Z_5, Z_2Z_4Z_6}. These four elements of the stabilizer set are the generators of the stabilizer group 𝒮. For this encoded logical qubit, the logical state |0⟩_L is <cit.>: |0⟩_L =1/𝒩∏_P_i∈⟨P⟩(I^⊗n+P_i)|0^⊗n⟩ =1/𝒩(I^⊗6+X_1X_2X_3X_4)(I^⊗6+X_3X_4X_5X_6)(I^⊗6+Z_1Z_3Z_5)(I^⊗6+Z_2Z_4Z_6)|0^⊗6⟩ =1/𝒩(|000000⟩+|001111⟩+|111100⟩+|110011⟩), where 𝒩 is the normalization factor. The circuit for this encoding is shown in Fig. <ref> (b). All the stabilizers commute with each other ([P_i,P_j]=0 ∀ i,j). To construct the logical state |1⟩_L, we have to look for pairs of logical operators {X_i,Z_i}, analogous to the Pauli matrices, that (i) commute with each of the stabilizers P_j ([X_i,P_j]=0=[Z_i,P_j] ∀ i,j) and (ii) pairwise anti-commute with each other ({X_i,Z_i}=0 and [X_i,Z_j]=0 ∀ i≠ j). To find the logical operators, we first have to identify the edges that specify the boundaries. The filling of the plane using the π/3-rhombus forms periodically arranged branch-cuts, which help identify the boundaries. On these boundaries, the control-X (bold lines) and control-Z (dashed lines) are arranged alternately. We define a path between the boundaries by connecting a data qubit vertex of a rhombus to the corresponding data qubit vertex of a copy with respect to the fundamental domain of the rhombus.
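As a brief aside, the projector construction of |0⟩_L above can be checked numerically. The numpy sketch below builds the four stabilizers of the six-qubit unit as Kronecker products, applies ∏_i(I+P_i) to |000000⟩ and prints the surviving computational-basis states, reproducing the four-term superposition quoted in the equation; it is an illustration, not part of the construction itself.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def pauli_string(ops, n=6):
    """Kronecker product of single-qubit operators, e.g. {1: X, 3: X}, on n qubits."""
    return reduce(np.kron, [ops.get(q, I2) for q in range(1, n + 1)])

stabilizers = [
    pauli_string({1: X, 2: X, 3: X, 4: X}),
    pauli_string({3: X, 4: X, 5: X, 6: X}),
    pauli_string({1: Z, 3: Z, 5: Z}),
    pauli_string({2: Z, 4: Z, 6: Z}),
]

# All stabilizers commute pairwise.
for a in stabilizers:
    for b in stabilizers:
        assert np.allclose(a @ b, b @ a)

# Apply prod_i (I + P_i) to |000000> and normalise.
state = np.zeros(2 ** 6)
state[0] = 1.0
for P in stabilizers:
    state = state + P @ state
state /= np.linalg.norm(state)

for idx in np.nonzero(state)[0]:
    print(format(idx, "06b"), state[idx])   # 000000, 001111, 110011, 111100, each 0.5
```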
Two sets of six paths are found, which form the logical X operator (X̅) and the logical Z operator (Z̅). Thus we find two pairs of logical operators which satisfy the above conditions: {X̅_1=X_1X_3, Z̅_1=Z_1Z_4Z_6} and {X̅_2=X_4X_6, Z̅_2=Z_2Z_4Z_5}. The minimum weight of an error E=E_a^† E_b violating the Knill-Laflamme conditions <cit.> was found to be 2. Thus it is a [[6,2,2]] code. The encoding rate, or the ratio of the number of logical qubits to the number of data qubits, for this code structure is 1/3. To increase the code distance and the encoding rate of the double-toric code, we can stack units of this code (Fig. <ref>) vertically as well as horizontally. Reflecting the unit an equal number of times in the vertical and horizontal directions arranges the unit structures in an equal number of rows and columns. To construct the code with p^2 unit structures, the number of rows and columns will be p, the number of required data qubits is n=2p(2p+1), the number of required ancilla qubits is m=2p(p+1), the number of logical qubits is k=2p^2 and the code distance is d=⌊p+2/2⌋+1, where ⌊·⌋ is the floor function. So the general form of the code is [[2p(2p+1), 2p^2,⌊p+2/2⌋+1]]. The encoding rate of this code is k/n=p/(2p+1). For p→∞, the encoding rate approaches 1/2. §.§ Comparison of code distance in toric and genus-2 codes In the [[5,1,2]] code shown in Fig. <ref>, the code distance is 2. Let us try to make a logical operator of weight 3. The paths D1-A1-D3-A4-D5 and D2-A3-D3-A2-D4 provide such a pair of logical operators, ⟨X̅=X_2X_3X_4, Z̅=Z_1Z_3Z_5⟩. Both operators commute with all the stabilizers of the [[5,1,2]] code and anticommute with each other. In this way we obtain a pair of logical operators of weight 3, so the code distance could be 3, making it a [[5,1,3]] code instead. But for the states corresponding to these operators, the minimum weight of error for which the Knill-Laflamme conditions do not hold is d=2, indicating that this has to be a distance-2 code; hence the code is [[5,1,2]]. This is well expected. It is important to note that we could have chosen all logical operators of weight 2, while maintaining the code distance of two - {X_1X_3, Z_1Z_2} and {X_4X_6, Z_5Z_6}. In this case also, the minimum weight of errors for which the Knill-Laflamme conditions do not hold is two. So we could have chosen either set of logical operators. But it is our aim to maximize the code distance using the reflection property of the structure. This makes the [[2p(2p+1),2p^2,⌊p+2/2⌋+1]] code more suitable for achieving higher encoding rates and distances than a [[2p(2p+1),2p^2,2]] code. Consider now another unit stacked vertically on the single unit, as shown in Fig. <ref>. Here, the number of physical qubits is n=10, while the number of ancilla qubits is m=7. The stabilizers for this code are P={X_1X_2X_3X_4, X_3X_4X_5X_6X_7X_8, X_7X_8X_9X_10, Z_1Z_3Z_5, Z_2Z_4Z_6, Z_5Z_7Z_9, Z_6Z_8Z_10}. Following the arguments presented above for identifying paths between boundaries, we obtain X̅ and Z̅; the complete set of logical operators commuting with the stabilizers and anti-commuting pairwise is thus (i) {X̅_1=X_2X_6X_8, Z̅_1=Z_1Z_4Z_8Z_9}, (ii) {X̅_2=X_2X_6X_10, Z̅_2=Z_5Z_7Z_10}, (iii) {X̅_3=X_4X_6X_8, Z̅_3=Z_2Z_3Z_6}. The Knill-Laflamme conditions are violated for an error of weight three, giving a code distance of three. However, we can again find logical operators of weight two - {X_1X_3,Z_1Z_2}, {X_3X_5X_7,Z_5Z_6} and {X_7X_9,Z_9Z_10}.
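The (anti)commutation relations quoted for the [[6,2,2]] unit can be verified with the standard rule that two Pauli strings commute exactly when they anticommute on an even number of qubits. The short sketch below checks the stabilizers and the chosen logical pairs, and also tabulates the [[2p(2p+1), 2p^2, ⌊(p+2)/2⌋+1]] parameters for a few values of p; the helper names are illustrative.

```python
def commutes(a, b):
    """a, b: dicts qubit -> 'X' or 'Z'. Pauli strings commute iff the number of
    qubits on which they anticommute (X meets Z) is even."""
    return sum(1 for q in a if q in b and a[q] != b[q]) % 2 == 0

def P(kind, qubits):
    return {q: kind for q in qubits}

stabilizers = [P('X', [1, 2, 3, 4]), P('X', [3, 4, 5, 6]),
               P('Z', [1, 3, 5]),    P('Z', [2, 4, 6])]
logicals = [(P('X', [1, 3]), P('Z', [1, 4, 6])),
            (P('X', [4, 6]), P('Z', [2, 4, 5]))]

assert all(commutes(a, b) for a in stabilizers for b in stabilizers)
for i, (lx, lz) in enumerate(logicals):
    assert all(commutes(lx, s) and commutes(lz, s) for s in stabilizers)
    assert not commutes(lx, lz)                 # conjugate pair anticommutes
    for j, (ox, oz) in enumerate(logicals):
        if i != j:
            assert commutes(lx, oz) and commutes(ox, lz)

def genus_two_code(p):
    """[[2p(2p+1), 2p^2, floor((p+2)/2)+1]] for p x p stacked units."""
    n, k, d = 2 * p * (2 * p + 1), 2 * p ** 2, (p + 2) // 2 + 1
    return n, k, d, k / n

for p in (1, 2, 4, 8):
    print(p, genus_two_code(p))   # encoding rate k/n approaches 1/2 as p grows
```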
This should give a distance of two which is also verified using the Knill-Laflamme conditions. Since both the cases are valid, we choose to use the one in which the distance is maximum without violating the stabilizer algebra. § GENUS-5 CODE The motivation to this code stems from another dynamical system, the square torus billiard where the integrable dynamics of a square billiard is interrupted by a square shaped scatterer <cit.>. Following the association discussed above for genus 2, we construct a code with this dynamical system in mind. §.§ Square torus billiard The free motion of a point particle in a square torus billiard (STB) is shown in Figure <ref>. According to the theorem by Zemlyakov and Katok <cit.>, this system is non-integrable albeit non-chaotic with zero Lyapunov exponent. The invariant integral surface is topologically equivalent to a sphere with five handles, as shown in <cit.>. The entire trajectory of the free particle in the STB can be folded in four copies using which we can construct the invariant surface (constant energy). This is explained in Figure <ref>. In statistical mechanics, this model is related to Ehrenfest gas where a beam of particles moving freely in a plane gets scattered by square-shaped scatterers (also called wind-tree model <cit.>). A new finite-time exponent was introduced to describe these systems <cit.> as the long-time average vanishes due to rather pathological behaviour of these systems. We shall now employ these features to our advantage in quantum encoding. §.§ Encoding We start with the fundamental domain of an equivalent genus five surfaces, Fig. <ref>, obtained by tessellating a square with a square-shaped scatterer inside it four times and placing the data and the ancilla qubits alternatively on the vertex of external squares as well as on the vertex of scatterers. The data qubits are represented as D (in the circles) and the ancilla qubits are represented as A (in the squares). As in earlier sections, the bold (dashed) lines represent the control-X(Z) operations from the ancilla qubits to the data qubits. The set of stabilizers is P={X_1X_2X_3X_6X_7, X_3X_4X_5X_12X_13, X_1X_6X_8, X_2X_7X_9, X_3X_10X_12, X_3X_11X_13, Z_1Z_3Z_4Z_8Z_10, Z_2Z_3Z_5Z_9Z_11, Z_3Z_6Z_8, Z_3Z_7Z_9, Z_4Z_10Z_12, Z_5Z_11Z_13}. The logical state |0⟩_L is: |0⟩_L= 1/𝒩∏_P_i∈⟨P⟩(I^⊗n+P_i)|0^⊗n⟩ = 1/𝒩(I^⊗13+X_1X_2X_3X_6X_7)(I^⊗13+X_3X_4X_5X_12X_13)(I^⊗13+X_1X_6X_8)(I^⊗13+X_2X_7X_9) (I^⊗13+X_3X_10X_12)(I^⊗13+X_3X_11X_13)(I^⊗13+Z_1Z_3Z_4Z_8Z_10)(I^⊗13+Z_2Z_3Z_5Z_9Z_11) (I^⊗13+Z_3Z_6Z_8)(I^⊗13+Z_3Z_7Z_9)(I^⊗13+Z_4Z_10Z_12)(I^⊗13+Z_5Z_11Z_13)|0^⊗13⟩. We next look for pairs of logical operators that commute with stabilizers and anti-commute pairwise. For this, we have to specify the boundaries. The filling of the plane using the fundamental domain of the equivalent genus five surfaces, forms periodically arranged branch cuts (edges EF and GH in Fig.<ref>), which are considered as the boundaries. Thus we define a path by connecting the data qubit vertex of one scatterer to the data qubit vertex of the corresponding copy with respect to the fundamental domain. The directed paths for the logical X operator are: X_6X_8X_10X_12, X_6X_8X_4X_12, X_7X_9X_11X_13, and X_7X_9X_5X_13. The directed paths for the logical Z operator are: Z_8Z_6Z_7Z_9, Z_8Z_6Z_2Z_9, Z_8Z_1Z_7Z_9, Z_8Z_1Z_2Z_9, and Z_10Z_12Z_13Z_11. From these paths, we found a pair of logical operators {X=X_6X_8X_4X_12, Z=Z_8Z_1Z_7Z_9}. 
The minimum weight of the error E=E_a^† E_b which violates the Knill-Laflamme conditions is 3, thereby constructing a [[13,1,3]] code. To increase the distance of the code, we can stack the unit structure of the code (Fig. <ref>) vertically as shown in Fig. <ref>. The number of required data qubits is n=24 and the number of required ancillary qubits is m=23. The set of stabilizers is P={X_1X_2X_3X_4X_7, X_1X_3X_5, X_2X_4X_6, X_7X_8X_10, X_7X_9X_11, X_7X_10X_11X_12X_13X_14X_15X_18, X_12X_14X_16, X_13X_15X_17, X_18X_19X_21, X_18X_20X_22, X_18X_21X_22X_23X_24, Z_3Z_5Z_7, Z_4Z_6Z_7, Z_1Z_5Z_7Z_8Z_12, Z_2Z_6Z_7Z_9Z_13, Z_8Z_10Z_12, Z_9Z_11Z_13, Z_14Z_16Z_18, Z_15Z_17Z_18, Z_12Z_16Z_18Z_19Z_23, Z_13Z_17Z_18Z_20Z_24, Z_19Z_21Z_23, Z_20Z_22Z_24}. The pair of logical operators is {X=X_8X_12X_16X_14, Z=Z_8Z_10Z_15Z_17}. The minimum weight that violates the Knill-Laflamme conditions for this code is 4. Hence it is a [[24,1,4]] code. Thus, the distance of the code can be increased by stacking fundamental domains on the plane. §.§ Effect of noise Any logical qubit should be robust against dephasing due to external noise. Recently, it has been shown <cit.> that certain observables formed by the code space population and the logical operators in the code space help determine the dynamical behaviour of logical qubits. We incorporate a time-dependent external fluctuating magnetic field in the z-direction, which acts on the qubits globally, thus leading to global dephasing. To estimate the effect, consider the logical state |1⟩_L: |1⟩_L= X|0⟩_L. Let an initial logical quantum state be written as |ψ⟩_L=cosθ/2|0⟩_L+e^ιϕsinθ/2|1⟩_L, where θ and ϕ are real parameters (θ≤π and 0≤ϕ≤ 2π). The evolution of |ψ⟩_L gives the logical Bloch sphere coordinates X_L, Y_L and Z_L. Assuming the global dephasing process is described by a single fluctuating variable B(t) along the z-direction acting on all data qubits, the Hamiltonian representing the effect of noise may be written as H_G(t) =1/2B(t)∑_i=1^13σ_z_i. In the case of local dephasing, the Hamiltonian reads H_L(t)=1/2∑_i=1^13B_i(t) σ_z_i. The randomly fluctuating variable B(t) obeys the Gaussian distribution P(B), which implies that <cit.>: ⟨exp (±ι∫_0^tB(t^')dt^')⟩ =exp[-1/2⟨(∫_0^tB(t^') dt^')^2⟩]= e^-γt/2, assuming the stationarity of the auto-correlation function of the delta-correlated noise, with γ=⟨[B(0)]^2⟩. Following <cit.>, we analyze the effect of noise on the N-qubit system by grouping the physical states by their magnetization, defined through the number of spins in the state |0⟩, denoted by n^', and the remaining N-n^' spins in the state |1⟩. The magnetisation is m^'=2n^'-N. The logical state |0⟩_L is written as |0⟩_L=∑_m^' ∑_l=1^N_m^'b_l^m^'|b⟩_l^m^'. Dephasing noise changes the state |ψ⟩_L to another state |ψ^'⟩, where |ψ^'⟩=exp[-ι∫_0^tH_L, G(t^')dt^']|ψ⟩_L. The density matrix corresponding to the logical qubit is ρ^'=∫|ψ^'⟩⟨ψ^'| P(B)dB. The Bloch coordinates ℛ≡{R_X, R_Y, R_Z} in the new state are obtained by evaluating the expectation values of the logical operators in the evolved state, given by ⟨ℛ⟩=Tr[ρ^'ℒ̅], where ℒ̅≡{X̅, Y̅, Z̅} represents the logical operators in the initial state |ψ⟩. For the single unit structure (Fig.
<ref>), in the presence of global dephasing noise, the logical Bloch coordinates turn out to be ⟨R_X⟩ = 1/32e^-(2γ t+ιϕ)(1+e^-γ t)^4(1+e^2ιϕ)sinθ, ⟨R_Y⟩ = ι/32e^-(2γ t+ιϕ)(1+e^-γ t)^4(-1+e^2ιϕ)sinθ, ⟨R_Z⟩ = cosθ. In the absence of noise, i.e., γ=0, the Bloch sphere coordinates in the new state |ψ'⟩ are ⟨R_X⟩=sinθcosϕ, ⟨R_Y⟩=sinθsinϕ, and ⟨R_Z⟩=cosθ, the same as in the old state |ψ⟩_L. Even in the presence of noise, ⟨R_Z⟩ remains unaffected. Thus the code is significantly robust against dephasing noise. § CONCLUDING REMARKS The basic idea underlying surface codes for error detection and correction is to arrange the data and ancillary qubits in such a way that X and Z errors can be corrected by making stabilizer measurements through the ancillae. For a scalable architecture, planar structures are desirable. This brings us to the question of tessellation of the plane. In Kitaev's construction, a two-dimensional Ising model is considered where the lattice shape can be anything; however, it should be noted that "anything" holds only under periodic boundary conditions, in which case the unit shapes could be a square, an equilateral triangle, etc. Here we take the essence of Kitaev's construction, use the correspondence between Lie and reflection groups together with ideas from well-known billiards, and present a novel way to realize architectures of higher genus. The encoding rates - the number of logical qubits per physical qubit - surpass the values for all surface codes hitherto known. We believe that these results pave the way to a new direction of research in the field of quantum error correction. The codes presented here are not related to tessellations of hyperbolic surfaces. We have constructed the fundamental domain using replicas of the billiard considered. We then stack the domains, thus taking care of all the symmetries of the system. It is at this point that we endow each vertex with a qubit or ancilla. This enables us to write the stabilizers and construct logical operators. This construction respects the commutation and anticommutation relations expected of a consistent and complete definition of a code. The spectrum of the Hamiltonian built from the generators has been studied. The degeneracy of the ground state increases with the number of qubits. For instance, for the genus-two codes [[n, k, d]], the degeneracy of the ground state is 2^k. The code is not topological. However, the ground state of the codes has a high degeneracy, which is useful for encoding. The code distance increases with the size of the code. The main advantage, however, is that the codes have much higher encoding rates. For genus-two codes of large size, the encoding rate tends to one-half. For the genus-five codes, the code distance increases with size whereas the encoding rate does not. Future investigations along these lines would be useful. In classical dynamical systems, tori as invariant surfaces are synonymous with integrability. Surfaces of higher genus correspond to non-integrability, but not chaos, even when the dynamics is nonlinear. Nonlinearity of the dynamics leads to the appearance of special points in the phase space, which have been shown to play an important role in controlling quantum jumps for error correction <cit.>. In quantum computing technology, almost all paradigms are related in an important way to aspects of nonlinearity, be it the nonlinearity of the Josephson junction, the creation of an EPR pair of photons from a nonlinear crystal, and so on.
Nonlinear resonances in coupled nonlinear quantum circuits with Josephson junctions have been shown to provide criteria for protection of qubits <cit.>. Ideas from nonlinear science would expectedly contribute to the development of quantum information theory and technology. 0.25 truecm Acknowledgements Authors thank the Referee for her(his) critique drawn on our work. They also thank Rhine Samajdar, Princeton University, for several helpful and stimulating discussions. 0.25 truecm Data Availability Statement: No Data associated in the manuscript 99kvant Ed. S. Tabachnikov, Kvant Selecta: Algebra and Analysis, I and II (Universities Press (India) Limited, 2002). weissman M. H. Weissman, An illustrated theory of numbers (American Mathematical Society, 2017). aop2014 R. Samajdar and S. R. Jain, Ann. Phys. 351, 1 (2014). aop2016 N. Manjunath, R. Samajdar, S. R. Jain, Ann. Phys. 372, 68 (2016). rmp2017 S. R. Jain and R. Samajdar, Rev. Mod. Phys. 89, 045005 (2017). nakahara M. Nakahara, Geometry, Topology, and Physics (Taylor and Francis, London, 2003). coxeter H. S. M. Coxeter, Regular Polytopes (Dover, New York, 1973). toffoli E. Fredkin and T. Toffoli (1982), International Journal of Theoretical Physics 21, 219 (1982). krj K. Kumari, G. Rajpoot, and S. R. Jain, A genus-two surface code (arXiv:2211.12695 [quant-ph]). weyl1926nachtrag Hermann Weyl, Mathematische Zeitschrift 24, 789 (1926). cartan1927geometrieÉlie Cartan, Annali di Matematica pura ed applicata 4, 209 (1927). arnold V. I. Arnol'd, Mathematical methods of classical mechanics (Springer, Heidelberg, 1978). jain1992 S. R. Jain and H. D. Parab, J. Phys. A 25, 6669 (1992). Kitaev Alexei Kitaev, Ann. Phys. 303, 2 (2003). eckhardt1984analytically Bruno Eckhardt, Joseph Ford and Franco Vivaldi, Physica D: Nonlinear Phenomena 13, 339–356 (1984). zemlyakov A. Zemlyakov and A. B. Katok, Math. Notes 18, 760 (1976). richens1981pseudointegrable P. J. Richens and M. V. Berry, Physica D: Nonlinear Phenomena 2, 495–512 (1981). Gottesman Daniel Gottesman, Stabilizer codes and quantum error correction, Ph. D. thesis (California Institute of Technology, 1997). aa V. I. Arnold and A. Avez, Ergodic problems of classical mechanics (W. A. Benjamin, Inc., Amsterdam, 1970). bob J. R. Dorfman, An introduction to chaos in nonequilibrium statistical mechanics (Cambridge Univ. Press, Cambridge, 1999). manan M. Jain, Student J. Phys. 5, 55 (2013). mcj S. Moudgalya, S. Chandra, and S. R. Jain, Ann. Phys. 361, 82 (2015). pal Amit Kumar Pal, Philipp Schindler, Alexander Erhard, Ángel Rivas, Miguel A. Martin-Delgado, Rainer Blatt, Thomas Monz and Markus P. Müller, Quantum 6, 632 (2022). krjj K. Kumari, G. Rajpoot, S. Joshi, and S. R. Jain, Ann. Phys. 450, 169222 (2023). ssj R. K. Saini, R. Sehgal, and S. R. Jain, Eur. Phys. J. Plus 137, 356 (2022). We start with the fundamental domain of genus five surface, by reflecting a square with a square shaped scatterer inside it four times. And placing the data and the ancilla qubits alternatively on the vertex of external squares as well as on the vertex of square shaped scatterer. The data qubits are represented asD(in circles) and the ancilla qubits are represented asA(in squares). The bold (dashed) lines are representing the control-X(Z)operations from the ancilla qubits to the data qubits. The set of stabilizers isP={X_1X_2X_3X_6X_7,X_3X_4X_5X_12X_13,X_1X_6X_8,X_2X_7X_9,X_4X_10X_12,X_5X_11X_13,Z_1Z_3Z_4Z_8Z_10,Z_2Z_3Z_5Z_9Z_11,Z_3Z_6Z_8,Z_3,Z_7Z_9,Z_3Z_10Z_12,Z_3Z_11Z_13}. 
The logical state|0⟩_Lis: |0⟩_L= 1/𝒩∏_P_i∈⟨ P⟩(I^⊗ n+P_i)|0^⊗ n⟩ = 1/𝒩(I^⊗ 13+X_1X_2X_3X_6X_7)(I^⊗ 13+X_3X_4X_5X_12X_13)(I^⊗ 13+X_1X_6X_8)(I^⊗ 13+X_2X_7X_9) (I^⊗ 13+X_4X_10X_12)(I^⊗ 13+X_5X_11X_13)(I^⊗ 13+Z_1Z_3Z_4Z_8Z_10)(I^⊗ 13+Z_2Z_3Z_5Z_9Z_11) (I^⊗ 13+Z_3Z_6Z_8)(I^⊗ 13+Z_3,Z_7Z_9)(I^⊗ 13+Z_3Z_10Z_12)(I^⊗ 13+Z_3Z_11Z_13)|0^⊗ 13⟩ . We next look for pairs of logical operators that commute with stabilisers and anti-commute pairwise. For this, we have to specify the boundaries. The filling of plane using the fundamental domain of genus five surface, forms periodically arranged branch cuts (edgesEFandGHin Fig.<ref>), which are considered as the boundaries. Thus we define the path by connecting the data qubit vertex of one square scatterer to the data qubit vertex of the corresponding copy with respect to the Fundamental Domain. The directed paths to for the logicalZoperator are:Z_8Z_6Z_7Z_9,Z_8Z_6Z_2Z_9,Z_8Z_1Z_7Z_9,Z_8Z_1Z_2Z_9,Z_10Z_12Z_13Z_11,Z_10Z_12Z_5Z_11,Z_10Z_4Z_13Z_11,andZ_10Z_4Z_5Z_11, all of these operators commute with all the stabilizers. The directed paths for the logicalXoperator are:X_6X_8X_10X_12,X_6X_3X_12,X_6X_3X_11,X_6X_3X_13,X_6X_3X_7,X_6X_3X_9X_11X_13,X_6X_8X_3X_11X_13,X_6X_8X_3X_9X_7,X_7X_9X_11X_13,X_7X_3X_13,X_7X_3X_11,X_7X_3X_10,X_7X_3X_8X_10X_12, andX_7X_9X_3X_10X_12. In these many operators, only two operatorsX_6X_8X_10X_12andX_7X_9X_11X_13commute with all the stabilizers. Thus we found a pair of logical operators{X=X_6X_8X_10X_12, Z=Z_8Z_1Z_7Z_9}. The minimum weight of the errorE=E_a^†E_b, which violates the Knill-Laflamme conditions, came out to be3. So it is a[[13,1,3]]code. LetSbe the generators of stabilizer group. Then, for ann-qubit code encodingk-logical operators, we can define an(n-k)-bit binary number, or error syndrome function,f_Mfor the code. Letf_M:𝒢→ℤ_2, such that f_M(E)= {[ f_M(E)=0, [M,E]=0; f_M(E)=1, {M,E}=0 ]., wheref_M(E)=f_M_1(E)f_M_2(E)…f_M_n-k(E). If all the values off_Mare different, the code is nondegenerate. For the single unit of the double-toric[[6,2,2]]code, stabilizer generators areM={X_1 X_2 X_3 X_4, X_3 X_4 X_5 X_6, Z_1 Z_3 Z_5, Z_2 Z_4 Z_6}. The functionf_M(E)for the error set,E={X_1,X_2,…, X_6,Z_1,Z_2,…,Z_6}is shown in table <ref>. Here,f_Mis four-bit binary function, which is not different for every error inE, thus making it a degenerate code. By contrast, the[[13,1,3]]surface code is a nondegenerate code, wheref_Mis a twelve-bit binary number, which is different for each error in the error setE.
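The syndrome function f_M discussed above can be tabulated with the same commutation rule for Pauli strings; the following sketch computes the four-bit syndromes of the single-qubit X and Z errors for the [[6,2,2]] generators and confirms that distinct errors can share a syndrome, i.e. that the code is degenerate. It is an illustrative check rather than the table referenced in the text.

```python
def anticommutes(a, b):
    """a, b: dicts qubit -> 'X' or 'Z'; True when the Pauli strings anticommute."""
    return sum(1 for q in a if q in b and a[q] != b[q]) % 2 == 1

def P(kind, qubits):
    return {q: kind for q in qubits}

generators = [P('X', [1, 2, 3, 4]), P('X', [3, 4, 5, 6]),
              P('Z', [1, 3, 5]),    P('Z', [2, 4, 6])]

errors = {f"X{q}": P('X', [q]) for q in range(1, 7)}
errors.update({f"Z{q}": P('Z', [q]) for q in range(1, 7)})

# f_M(E): one bit per generator, 1 when the error anticommutes with it.
syndromes = {name: tuple(int(anticommutes(M, E)) for M in generators)
             for name, E in errors.items()}
for name, s in syndromes.items():
    print(name, s)

# Degenerate: e.g. Z1 and Z2 both anticommute only with X1X2X3X4, so they
# share the syndrome (1, 0, 0, 0).
assert len(set(syndromes.values())) < len(syndromes)
```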
http://arxiv.org/abs/2307.05187v2
20230711114955
Decorrelation using Optimal Transport
[ "Malte Algren", "John Andrew Raine", "Tobias Golling" ]
hep-ph
[ "hep-ph", "cs.LG", "hep-ex" ]
Being able to decorrelate a feature space from protected attributes is an area of active research and study in ethics, fairness, and also natural sciences. We introduce a novel decorrelation method using Convex Neural Optimal Transport Solvers (Cnots) that is able to decorrelate a continuous feature space against protected attributes with optimal transport. We demonstrate how well it performs in the context of jet classification in high energy physics, where classifier scores are desired to be decorrelated from the mass of a jet. The decorrelation achieved in binary classification approaches the levels achieved by the state-of-the-art using conditional normalising flows. When moving to multiclass outputs the optimal transport approach performs significantly better than the state-of-the-art, suggesting substantial gains at decorrelating multidimensional feature spaces. § INTRODUCTION AI-powered decision-making has become a large part of automated systems in banks, advertising, and healthcare, to name a few. This has resulted in increased awareness surrounding the fairness and biases of the decision models. Due to the nature of many datasets, biases towards protected attributes like gender and race in data result in biased models. These biases are not only causes for concern in terms of fairness and ethics, but are also relevant to research in natural sciences, where correlations to protected variables are fundamental in nature, but can lead to undesirable effects in statistical analyses. In High Energy Physics (HEP), classifiers are commonly used to separate different signal processes from background processes, in both event classification as well as object identification. One area which has seen a great deal of development is jet tagging, in particular identifying top quark initiated jets from the dominant QCD background of light quarks and gluons (see Ref. <cit.> for a comprehensive comparison of techniques). Identifying the origin of jets is not restricted to supervised classification, with anomaly detection being another area of active development in the hunt for physics beyond the standard model <cit.>. Here unsupervised or semisupervised classifiers are used to identify jets which may originate from new physics particles of unknown mass. In the case of jet tagging, the desired accuracy of the classifier should be independent of the invariant mass of the jet, and instead exploit the differences in the underlying structure of the jet, known as jet "substructure". However, while background processes often follow an exponentially decaying invariant mass distribution, the invariant mass of the signal processes is localised within some region of the mass spectrum. This overdensity of signal on the mass spectrum and correlations between the substructure of a jet and its invariant mass will lead to the classifier scores being correlated to the invariant mass. Several techniques have been developed which aim to decorrelate the scores of a classifier from the invariant mass of a jet <cit.>.
These include methods that are employed during training as a means of regularisation, as well as post-training corrections. In this work we introduce a new method for decorrelation using Convex Neural Optimal Transport Solvers (Cnots). Inspired by Ref. <cit.>, which uses normalising flows to learn the monotonic transformation T(·|c) in 1D, given some protected attributes c, we propose to use the gradient of a convex neural network <cit.> for T(·|c), that by definition is monotonic in ℝ^N. We follow the case of conditional optimal transport studied in Refs. <cit.>. We use convex neural networks to solve the Kantorovich dual formulation and find the optimal transport (OT) <cit.> between correlated scores and decorrelated ones. Thereby, learning a monotonic transformation T(·|c) between the two spaces, that minimises the Wasserstein distance between them. § METHODS §.§ Dual formulation of optimal transport Let P(y,c) and Q(x,c) be two continuous densities, where x and y are coordinates in an ℝ^N space which follows, different distributions correlated to some latent conditional property c. The optimal transport between these two densities is an optimisation problem over possible transportation maps, T, where T^* = inf_T: T_#(Q)=P1/2𝔼||x-T(x|c)||^2. Here, T_# represents possible transports from Q to P, T^* is the optimal transport between Q to P and 𝔼 is the expectation value over Q. The problem can be formulated with a general cost or distance measure d(x,y), which here is chosen to be the squared Euclidean distance d(x,y) = ||x-T(x)||^2. For this cost function the optimal transport map is unique when Q is continuous <cit.>. It is possible to reformulate the primary problem in Eq. <ref> as a dual formulation following Ref. <cit.> as 𝕎^2_2(P,Q) = sup_f(y,c)+g(x,c)≤1/2 ||x-y||_2^2𝔼(f(y,c)) + 𝔼(g(x,c)), where 𝕎^2_2 is the Wasserstein-2 distance. Here both f and g are functions constrained by f(y,c)+g(x,c)≤1/2 ||x-y||_2^2. By requiring f and g to be convex functions, this can be rewritten as 𝕎^2_2(P,Q) = 𝒞(x,y) +sup_f(y,c)∈cvx(y)inf_g(x,c)∈cvx(x) f(∇g(x,c),c)-⟨x,∇g(x,c) ⟩- f(y,c), where 𝒞(x,y) = 1/2(x^2+y^2) <cit.>. Here both f and g are convex in x and y, respectively, but not in c. Under this formulation, the optimal transport map becomes T_#(x,c)=∇_x g(x,c)=P, the gradient of the convex function g with respect to its inputs Q, which, by definition, is monotonic in x for any given c. The monotonic transformation T_# ensures order preservation in ℝ^N by definition. For most generative models, this is not important, however, in classification tasks the ordering of jets are important and if not preserved, performance might be lost. This restriction assists the convergence towards the optimal transport map, which is also order preserving. One of the benefits of using 𝕎^2_2 in comparison to divergences such as the Kullback-Leibler (KL) divergence is that it only requires samples from the input and base distributions, as opposed to the probability densities. Furthermore, it is well defined for all values whereas the KL divergence is not. §.§.§ Convex neural networks In Ref <cit.>, it is shown that Eq. <ref> can be solved by parameterising the two convex functions with two Input-Convex neural networks (ICNN) <cit.> and thus learning the optimal transport as ∇_x g(Q;θ)=P and ∇_y f(P;θ)=Q, where θ is the trainable parameters of the network. 
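As an illustration of this construction, the sketch below implements a minimal input-convex network in PyTorch: the weights acting on the hidden state z are passed through a softplus so that they are non-negative, the activations are convex and non-decreasing, and the candidate transport map is obtained as the gradient of the scalar output. The layer sizes, the quadratic skip term and all names are illustrative choices and do not reproduce the exact Cnots architecture; in particular, the conditional (partially convex) extension discussed next is not included.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Minimal input-convex network f(x): convex in x because the z->z weights
    are constrained positive and the activations are convex, non-decreasing."""

    def __init__(self, dim, hidden=64, n_layers=3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(dim, hidden) for _ in range(n_layers)])
        self.Wz = nn.ParameterList([nn.Parameter(0.01 * torch.randn(hidden, hidden))
                                    for _ in range(n_layers - 1)])
        self.out = nn.Linear(dim, 1)
        self.out_z = nn.Parameter(0.01 * torch.randn(1, hidden))

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))                       # convex in x
        for Wx, Wz in zip(self.Wx[1:], self.Wz):
            z = F.softplus(Wx(x) + z @ F.softplus(Wz).t())  # positive z->z weights
        # scalar output: affine term + positive combination of z + quadratic skip
        return (self.out(x) + z @ F.softplus(self.out_z).t()).squeeze(-1) \
               + 0.5 * (x ** 2).sum(-1)

def transport(f, x):
    """T(x) = grad_x f(x), a monotone candidate transport map."""
    x = x.clone().requires_grad_(True)
    return torch.autograd.grad(f(x).sum(), x, create_graph=True)[0]
```

In use, two such networks would play the roles of f and g in the dual objective, with the map recovered as the gradient of the convex potential; the quadratic skip term simply keeps the initial map close to the identity, mirroring the motivation given for the output parameterisation below.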
Partially input convex neural networks (PICNN) <cit.> extend ICNNs to take a conditional vector c in addition to the input vector x in order to represent a conditional convex scalar function f(x,c;θ). In this case f(x,c;θ) does not need to be convex with respect to c. The general architecture of the PICNNs can be seen in Fig. <ref>, whereby removing the non-convex part will reduce it to the ICNN. Following the same notation as in Ref <cit.>, the PICNN is defined recursively in z_k and u_k, where k is the number of layers ranging from i=0,1,...,k, and W and b are learnable parameters. The recursive non-convex layers follow a simple feed-forward network u_i+1 = g_i(W_i u_i+b_i), with u_0 = c and g_i is the activation function. These non-convex layers have no restrictions on allowed transformations and operations, as long as they are differentiable. The convex layers are defined as z_i+1 = g_i ( W_i, SP^z( z_i ∘ [W_i^zuu_i+b^z]_SP) + W_i^y( y ∘ (W_i^yu u_i + b_i^y) ) + W_i^u u_i + b_i ), where z_0=0. W_i, SP^z are restricted to be strictly positive by passing the weights through a softplus transformation, and the activation functions g_i are required to be convex and non-decreasing. Lastly, the output is defined as f(x,c;θ) = w_0, SP(u_k)/2x^2 + w_1, SP(u_k) z_k, such that the transport function given by ∇_xf(x,c;θ) = w_0, SP(u_k) · x + w_1, SP(u_k) ∇_x z_k makes perturbations around the identity function easily accessible and the transport at initialisation for random weights θ. §.§ Convex neural networks for decorrelation In order to use convex neural networks for decorrelation purposes, the aim is to learn a monotonic transformation T(·|c) between the input feature space Q(·|c) and a target feature space P(·). To break the dependence on c, we construct P to be a distribution which is identical for all values c. In this framework, the choice of P is completely free and does not need to follow an analytically defined function. This is in contrast to normalising flows, which require the probability distribution function of the base density to be analytically calculable. Alternatively, normalising flows can also use arbitrary PDFs as base distributions <cit.>, though we do not study this here. The simplest way to construct P(·) is to take the distribution Q(·|c) and randomly permute the conditional vector, breaking the correlation. This should also define a target base distribution which is similar to the input distribution, simplifying the transport function. Other choices are to choose a uniform distribution or normal distribution, as is done for cflows in Ref. <cit.>. §.§ Other exiting methods §.§.§ Conditional normalising flows Conditional normalising flows (cflows) <cit.> are networks built to be fully invertible, therefore by definition f and f^-1 exist and are fast to compute. This makes cflows powerful as generative models using the change of variables formula p_x(x|c) = p_y(f(x,c)) | 𝒥(f(x,c)) | , to transform between base density p_y(x) independent in c and some complex density p_x(x|c) conditioned on c. The objective of the cflow will be to maximise the log-likelihood, which requires the knowledge of the PDF of the base density p_y(x). By construction, the cflow can be inverted f^-1(p_x(x|c))→ p_y(x) to be independent of c, producing the desired decorrelated features <cit.>. For p_x(x|c) in 1D, the cflow transformation can be restricted to be monotone ensuring order preservation, which is important for decorrelating discriminate scores. 
Beyond 1D, however, the monotonicity of the cflow transformation is not guaranteed. An invertible PICNN architecture can also be trained like a normalising flow to maximise the log-likelihood <cit.> and find a monotonic transformation in ℝ^N. However, this involves calculating the Hessian in the forward pass, which is an expensive procedure and limits the choice of base distribution to analytically defined functions. Due to the additional complexity, this training scheme is not studied in this work. §.§.§ Decorrelating during training A wide range of established methods for the decorrelation of classifier outputs are applied before or during the training of the discriminator. Planing <cit.> can be applied to the data beforehand, as a form of preprocessing, ensuring that the protected variables c follow the same distribution for both the signal and background jets. An alternative approach is to penalise the classifier for producing outputs which are correlated with c during training by adding an additional loss or regularisation term ℒ_corr. Example methods calculate ℒ_corr with adversarial neural networks <cit.>, or with distance measures calculated using distance correlation <cit.> or the moments of the conditional cumulative distributions <cit.>. The total loss for these methods is given by ℒ = ℒ_class (s(x), y)+αℒ_corr(s(x), c), where c are the protected attributes, y are the labels and s(x) is the classifier output during training. The strength of the decorrelation can then be controlled by the hyperparameter α. Since both the optimal transport and cflow decorrelation approaches are applied only after training, or as a means of preprocessing, we restrict our comparisons to these two methods. Furthermore, as shown in Ref. <cit.> for cflows, the OT approach can also be applied in addition to other decorrelation approaches, such as those used during training. § APPLICATION TO JET TAGGING In HEP, multiple studies have been conducted to decorrelate the predictions s(x|m) of binary classifiers from the invariant mass m of a reconstructed object <cit.>. However, balancing the classification loss and decorrelation loss has proven difficult. The current state-of-the-art approach uses conditional flows and imposes no restrictions on either the dataset or the architecture of the classifier. To form a basis of comparison and evaluate the relative performance of Cnots and cflows, we look at the decorrelation of classifiers trained to identify the origin of boosted objects at particle colliders like the LHC <cit.>. When collisions produce particles with high transverse momentum, their decay products have a smaller opening angle. In the case where these particles decay to partons, the two resulting hadronic showers, known as jets, start to overlap. In this instance, their decay products cannot be resolved individually and are instead reconstructed as a single large jet. Differences in the underlying structure within the jet can be exploited to predict the initial particle produced in the collision. However, the underlying structure of jets remains strongly correlated to the reconstructed invariant mass of the jet, resulting in a biased prediction of the initial particle. In this work we study the performance of decorrelating classifiers trained to identify boosted jets originating from quarks and gluons (QCD), vector bosons (VB) and top quarks (Top), from the invariant mass of the jet.
§.§ Datasets To get pure samples of the top quark initiated jets, samples of tt̅ jets are produced, in which both the top quarks decay hadronically, (t→ W(→ qq̅^')b). For a pure sample of vector boson initiated jets, samples of WZ diboson events are generated in which both the W and Z bosons decay to two quarks. For pure samples of QCD initiated jets, samples of two-to-two processes with a final state of two quarks and or gluons are simulated. All three samples are generated at a centre of mass energy √(s)= 13 TeV using MadGraph_aMC@NLO <cit.> (v3.1.0), with decays of top quarks and W bosons modelled with MadSpin <cit.>. Pythia8 <cit.> (v8.243) is used to model the parton shower and hadronisation with the NNPDF2.3LO PDF set <cit.>. The detector response is simulated using Delphes <cit.> (v3.4.2) with a parametrisation similar to the ATLAS detector <cit.>. Jets are reclustered using the anti-k_t clustering algorithm <cit.> with a radius parameter R=1.0 using the package <cit.>. Jets are required to have p_T>450 GeV and |η|<2.5, with only the jet with the highest p_T selected from each event. The minimum p_T of the leading parton in the hard scatter is optimised for each sample in order to increase the rate of jets passing the selection criteria and in order to produce similar distributions for the jet p_T across all three jet types. The relative four momenta (p_T^frac, Δη, Δϕ, E^frac) of up to the leading 100 constituents, ordered in descending p_T, are stored for each jet, alongside the jet four-momentum vector (p_T, η, ϕ, m). Jets with fewer than 100 constituents are zero-padded. In total, there are 840,000 each of Top, VB, and QCD jets in the training set, and 800,000, 93,000 and 900,000 jets, respectively, for each class in the test set. Only jets with an invariant mass between 20 and 450 GeV are selected for decorrelation. §.§ Classifiers A multiclass classifier (mDNN) is trained to predict the probabilities of a jet originating from each jet type, p_QCD, p_T and p_VB. The mDNN is constructed using the Particle-Transformer architecture from Ref. <cit.>, and is trained using the constituents of the jets. Whilst the multi dimensional scores from the mDNN are correlated to mass, visualising the correlated scores in ℝ^3 is difficult. Thus, for visualisation purposes, we project the 3D scores down to 1D distributions following the Neyman Peason lemma to create three discriminators 𝒟 𝒟_QCD = p_QCD/p_T+p_VB, 𝒟_T = p_T/p_QCD+p_VB, 𝒟_VB = p_VB/p_T+p_QCD. To evaluate the performance of decorrelating the output of a binary classifier, we use the discriminator scores 𝒟_VB normalised with a sigmoid transformation. For the decorrelation of a multiclass output, we decorrelate the joint distribution p_QCD, p_T and p_VB. In the 3D case, the discriminators in Eq. <ref> are used for visualisation. The three discriminators are shown in Fig. <ref>, where we can see that the scores change as a function of mass. The mass sculpting is also very apparent when applying a selection on the discriminate scores, as seen in Fig. <ref>, where, especially for the 𝒟_VB and 𝒟_T, the sculpting surrounding the resonance mass is evident. § RESULTS While the decorrelation methods are trained exclusively on QCD jets to ensure that there is no underlying correlation between classes, evaluation includes all classes. 
To evaluate the decorrelation performance, prior decorrelation studies in HEP have used the inverse Jensen-Shannon divergence (1/JSD) between the initial invariant mass distribution and the distribution after applying a selection on the classifier scores. This is an effective measure of the mass sculpting resulting from the classifier output. As the decorrelation methods are trained on QCD jets, fully decorrelated classifier scores should not sculpt the initial mass distribution after a selection. In order to estimate an upper bound on performance arising from the statistical variation of the data, we calculate the ideal 1/JSD using bootstraps <cit.> on the initial QCD mass distribution without a selection. To measure the discrimination power after decorrelation, we calculate the signal efficiency for both the Top and VB as a function of background rejection, and calculate how the AUC changes as a function of mass. §.§ Binary decorrelation The correlated 𝒟_VB scores are shown in Fig. <ref> as a function of mass, where a dependency on mass is apparent, especially around the W/Z-boson mass. In Fig. <ref>, we trained the decorrelation methods explained in Sec. <ref> and applied the learned transformation to decorrelate the 𝒟_VB scores. For both methods, the mass dependency is removed and the distribution of the scores are the same in the four mass bins. The Cnots method for 1D decorrelation of 𝒟_VB (OT-𝒟_VB) is able to decorrelate to an arbitrary base distribution, whereas the cflow method has to evaluate the likelihood of the PDFs. Therefore, we show two possible base distributions for Cnots, one using a uniform distribution, and another using the source distribution as the base distribution. After decorrelation, the inclusive separation power will degrade, as 𝒟_VB scores are not able to discriminate using the additional separation power arising from the jet mass. However, due to the monotonicity of the transformations, the integrated performance over the protected attributes should remain the same. In Fig. <ref>, we select jets within narrow mass bins to imitate the integration and calculate the AUC. We see here that the AUC as a function of mass remains unchanged after decorrelation. To evaluate the decorrelation performance, we measure the 1/JSD at different selections and simultaneously measure the signal efficiency of VB. This is illustrated in Fig. <ref> for the 𝒟_VB, cf-𝒟_VB and OT-𝒟_VB. While we see no large variation in the signal efficiency between the decorrelation, the cflow method outperforms the OT methods in the background sculpting. At very high background rejection we see comparable performances, as both methods are within the statistical uncertainty of the ideal background distribution. §.§ Multiclass decorrelation To assess the decorrelation performance of the methods, we look at the background sculpting and signal efficiency of the discriminators after applying decorrelation. The two methods are compared to the initial distributions in Fig. <ref>. We will be testing Cnots applied to the classifier outputs (OT-mDNN) with two different base distributions. OT-mDNN Dir(1,1,1) uses 3D logit-Dirichlet as a base distribution with the concentration parameters set to one, the OT-mDNN source uses the original mDNN scores as a base distribution, and the cf-mDNN uses a normal distribution as a base distribution. All methods are trained in logit space and normalised with softmax during evaluation. In Fig. 
<ref> we see both OT models outperform the cf-mDNN for all three discriminants, especially at high background rejection. Here the decorrelated OT-mDNN scores follow the ideal case of no sculpting. We also see that, in addition to the reduced levels of sculpting, the OT-mDNN models consistently have a higher signal efficiency than cf-mDNN for the discriminator optimised for the target jet type. The poor decorrelation performance of the cf-mDNN lies in its unconstrained transport maps, which do not contain the restriction that the map must be monotone and therefore allows order swapping. However, this is not the case for the OT approach, which is order preserving by construction. This is also clear from the differences in AUC as a function of jet mass shown in Fig <ref>, where the integrated performance of cf-mDNN declines significantly compared to the original distribution. We also see that the performance for the two OT-mDNN methods is close to the optimal performance of the original distribution. The small discrepancy may be due to the finite binning size of the integral. §.§.§ Order preservation The measure of order preservation is simple to quantify in 1D, however, it becomes non-trivial in higher dimensions. For an order preserving map, we expect to observe no rapid fluctuation in the transport map and minimum curl in the transport field. We attempt to visualise the level of order preservation by depicting the gradient of the transport maps across the input space. Low and smoothly changing gradients indicate a smooth and order preserving map. However, large gradients or abrupt changes do not necessarily indicate where the order preservation is broken. In order to compare the transport maps between OT-mDNN and cf-mDNN, a shared base distribution is required. We choose a normal distribution as the base distribution for both. In order to reintroduce unitarity per event, the outputs are rescaled using a softmax activation on the three outputs, ensuring two degrees of freedom. We then visualise the transport maps in 2D by using p_VB and p_T. Taking the difference between correlated and decorrelated scores, we can calculate a displacement vector that indicates the direction and amount a given point is transported during decorrelation. As the scores are in Dirichlet space, one of the three dimensions becomes redundant and can be dropped, enabling us to show the transport maps in 2D. The displacement vectors are shown in Fig. <ref> for OT-mDNN and cf-mDNN. We show that in some regions the transport map of the cf-mDNN changes rapidly and overlaps into other regions, whereas the OT-mDNN has smoother transitions. We will attempt to outline these rapid transitions by sampling random positional variations of 2% deviation from the original points in Fig. <ref>. We then measure the distance between displacement vectors of the original point and a small deviation. A large difference indicates a rapid change in the transport map, which can result in non-order preserving transport. In Fig. <ref>, the histogram of magnitudes is shown. Here, 1 % of the cf-mDNN displacement vectors have a magnitude larger than the maximum magnitude of the OT-mDNN. These displacement vectors are indicated in red in Fig. <ref>. These red arrows strongly indicate that we have regions in the cf-mDNN transport map where the monotonicity is broken, which results in the low signal efficiency we saw in Fig. <ref>. 
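The perturbation test described above can be written down compactly. The sketch below assumes a callable transport(x, c) that returns the decorrelated scores for an array of correlated scores x and conditions c (here the jet mass); how the 2% deviation is realised (multiplicative Gaussian noise in this sketch) is our own choice.

```python
import numpy as np

def displacement_roughness(transport, x, c, rel_eps=0.02, seed=0):
    """Perturbation diagnostic for a transport map.

    Each point is shifted by a random ~2% relative deviation and the change in its
    displacement vector (decorrelated minus correlated score) is recorded.  Large
    values flag rapid variation of the map and possible violations of order
    preservation; for a smooth, monotone map the distribution stays narrow."""
    rng = np.random.default_rng(seed)
    disp = transport(x, c) - x                               # original displacement vectors
    x_pert = x * (1.0 + rel_eps * rng.standard_normal(x.shape))
    disp_pert = transport(x_pert, c) - x_pert                # displacements after perturbation
    return np.linalg.norm(disp_pert - disp, axis=1)          # one roughness score per point
```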
As another measure of order preservation, we try to quantify areas of the transport map where order swapping may occur. In the case of a perfectly monotonic transport in two dimensions, the transport field should be convex; we postulate that, for this to hold, no transport paths should cross at an angle greater than π/2. In Fig. <ref>, we show the same transport maps as in Fig. <ref>, but highlight transport vectors that cross any other transport vector at an angle greater than π/2. Here we see that this criterion does not hold for cflows, whereas it does hold for OT-mDNN. These rapid changes in the transport map will break the monotonicity. However, a clear quantification of monotonicity in higher dimensions is not known to us. § CONCLUSION In this work we have introduced a novel method for decorrelating feature spaces correlated to protected attributes by finding the optimal transport map between the correlated space and a decorrelated space. This map is constructed as the gradient of a partially input convex neural network, ensuring it is monotonic by construction. We study the decorrelation performance of our approach in comparison to the state-of-the-art for jet tagging at the LHC. Conditional normalising flows <cit.> have demonstrated success in decorrelating 1D distributions, with our approach reaching similar levels of performance. However, Cnots achieves state-of-the-art performance, outperforming the normalising flows at decorrelating higher dimensional feature spaces. This increase in performance is achieved due to the enforced monotonicity in the architecture, which, although present in 1D for the normalising flows, is not enforced in higher dimensions. Furthermore, Cnots can perform decorrelation with an arbitrary distribution chosen as the target of the transport. The application of decorrelation is not restricted to classifier outputs, and due to the state-of-the-art performance in decorrelating higher dimensional feature spaces, Cnots should result in improved performance for tasks which require decorrelation of input feature spaces. The code is publicly available at <https://github.com/malteal/ot-decorrelation>. § ACKNOWLEDGEMENTS The authors would like to acknowledge funding through the SNSF Sinergia grant called Robust Deep Density Models for High-Energy Particle Physics and Solar Flare Analysis (RODEM) with funding number CRSII5_193716 and the SNSF project grant 200020_212127 called "At the two upgrade frontiers: machine learning and the ITk Pixel detector". We would also like to thank Matthew Leigh for training the classifier used in the analysis and Samuel Klein and Chris Pollard for useful discussions and feedback. § APPENDIX §.§ Multiclass classifier The transformer uses 3 self-attention layers and 2 cross-attention layers, each with an embedding dimension of 128 and 16 attention heads. The 7 node features and 5 edge features are each embedded into this space using a dense network with a single hidden layer of 256 neurons. We also use the same dense network shape for the residual updates inside the transformer, and to extract the final 3-value discriminant from the cross-attention blocks. Conditioning on the 2 high level variables is achieved by concatenating them to the input of all dense networks used in this setup. We use the LeakyReLU activation and a dropout probability of 0.1 in all linear layers.
We trained the transformer for 10 epochs using the AdamW optimiser with a learning rate of 0.001 and a weight decay strength of 0.0001. §.§ Cflow The cflow for binary decorrelation is trained using the Adam optimiser for 300 epochs, with an initial learning rate of 0.0005 annealed to zero using cosine annealing, and a batch size of 512. The cflow is constructed from 5 rational quadratic spline layers with 12 bins in each layer, using a uniform[0,1] or a normal distribution as the base distribution for binary and multiclass decorrelation, respectively. The invariant mass is transformed to logarithmic mass. The multiclass decorrelation cflow is trained with the same training setup as the binary one. However, it is constructed from 6 rational quadratic spline layers with 12 bins bounded within [-3.5, 3.5] in each layer, using a normal distribution as the base distribution. Outside the bins, the map is linear. Everything else followed the default settings from Ref. <cit.>. §.§ PICNN The architecture of the PICNNs used in both binary and multiclass decorrelation can be seen in Tab. <ref>. We always train our transport in logit space with the logarithmic invariant mass as the condition. For both multiclass and binary decorrelation we test the same two base distributions, a uniform[0,1] and the original source distribution. §.§ Binary decorrelation To compare the 1/JSD values for the different methods, the binning and ranges of the distributions have to be the same. We have chosen 50 bins equally spaced between 20 and 450 GeV for all mass sculpting figures. In Fig. <ref>, we see the mass sculpting at different background rejections. These are the distributions used to measure the 1/JSD values. We see that after decorrelating the scores of 𝒟_VB, the sculpting becomes non-existent. §.§ Multiclass decorrelation In Fig. <ref>, we see the original mass sculpting for all three discriminators. In Figs. <ref>, <ref> and <ref>, we see the mass sculpting after applying the decorrelation methods. OT-mDNN with a normal base distribution shows a smooth change in magnitudes, indicated by the smooth color transitions in Fig. <ref>. However, there are artifacts in the magnitude differences of the cf-mDNN, indicating rapid changes in the transport map. These will break the monotonicity, which can also be seen from the overlapping displacement vectors in Fig. <ref>.
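For reference, the 1/JSD comparison described above (50 equal-width bins between 20 and 450 GeV, selections at fixed background rejection, and a bootstrap estimate of the statistical ceiling) can be computed along the following lines. This is a sketch with our own function names and working points, not the exact evaluation code of this work; it uses scipy's jensenshannon, which returns the square root of the divergence.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

BINS = np.linspace(20.0, 450.0, 51)          # 50 equally spaced mass bins, 20-450 GeV

def inv_jsd(mass_all, mass_pass):
    """1/JSD between the inclusive mass spectrum and the spectrum after a selection."""
    p, _ = np.histogram(mass_all,  bins=BINS, density=True)
    q, _ = np.histogram(mass_pass, bins=BINS, density=True)
    return 1.0 / jensenshannon(p, q, base=2) ** 2

def sculpting(mass, score, rejections=(0.5, 0.9, 0.99)):
    """Mass sculpting of the QCD background at several background-rejection points."""
    return {r: inv_jsd(mass, mass[score > np.quantile(score, r)]) for r in rejections}

def ideal_inv_jsd(mass, n_boot=100, frac=0.01, seed=0):
    """Statistical ceiling: 1/JSD of bootstrap subsamples of comparable size to the
    post-selection sample, with no selection applied."""
    rng = np.random.default_rng(seed)
    return np.median([inv_jsd(mass, rng.choice(mass, size=int(frac * len(mass))))
                      for _ in range(n_boot)])
```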
http://arxiv.org/abs/2307.04612v1
20230710145514
Emergence of Cooperation in Two-agent Repeated Games with Reinforcement Learning
[ "Zhen-Wei Ding", "Guo-Zhong Zheng", "Chao-Ran Cai", "Wei-Ran Cai", "Li Chen", "Ji-Qiang Zhang", "Xu-Ming Wang" ]
physics.soc-ph
[ "physics.soc-ph" ]
1]Zhen-Wei Ding 2]Guo-Zhong Zheng 3]Chao-Ran Cai 4]Wei-Ran Cai 2]Li Chen 1]Ji-Qiang Zhangcor1 [email protected] [cor1]Corresponding Author 1]Xu-Ming Wangcor2 [email protected] [cor2]Corresponding Author [1]School of Physics, Ningxia University, Yinchuan 750021, P. R. China [2]School of Physics and Information Technology, Shaanxi Normal University, Xi'an, 710062, P. R. China [3]School of Physics, Northwest University, Xi’an 710127, P. R. China [4]School of Computer Science, Soochow University, Suzhou 215006, P. R. China Cooperation is the foundation of ecosystems and the human society, and the reinforcement learning provides crucial insight into the mechanism for its emergence. However, most previous work has mostly focused on the self-organization at the population level, the fundamental dynamics at the individual level remains unclear. Here, we investigate the evolution of cooperation in a two-agent system, where each agent pursues optimal policies according to the classical Q-learning algorithm in playing the strict prisoner's dilemma. We reveal that a strong memory and long-sighted expectation yield the emergence of Coordinated Optimal Policies (COPs), where both agents act like “Win-Stay, Lose-Shift” (WSLS) to maintain a high level of cooperation. Otherwise, players become tolerant toward their co-player's defection and the cooperation loses stability in the end where the policy “all Defection” (All-D) prevails. This suggests that tolerance could be a good precursor to a crisis in cooperation. Furthermore, our analysis shows that the Coordinated Optimal Modes (COMs) for different COPs gradually lose stability as memory weakens and expectation for the future decreases, where agents fail to predict co-player's action in games and defection dominates. As a result, we give the constraint to expectations of future and memory strength for maintaining cooperation. In contrast to the previous work, the impact of exploration on cooperation is found not be consistent, but depends on composition of COMs. By clarifying these fundamental issues in this two-player system, we hope that our work could be helpful for understanding the emergence and stability of cooperation in more complex scenarios in reality. * Strong memory and long-sighted expectations yield “win-stay, lose-shift” and high cooperation * Tolerance of exploitation could be a good precursor to a crisis in cooperation * The impact of exploration on cooperation nonmonotonically depends on the composition of the coordinated optimal modes Nonlinear dynamicsCooperationRepeated gameReinforcement learning § INTRODUCTION Cooperation is ubiquitous and significant, from ant fortress associations and altruistic behavior of pathogenic bacteria in biological systems <cit.> to community activities and civic participation in human society <cit.>. However, the emergence of cooperation is not straightforward, since although common interest would require the majority to cooperate, exploiting others by defection could maximize individuals' interest. Such dilemma arising from the conflict between collective and individual welfare is captured in a number of classical game models <cit.>. Here, the key question of how do cooperative behaviors evolve still remains an open question. Among the most favorable models for studying cooperation mechanisms, the Prisoner's Dilemma Game (PDG) stands out for its simplicity. It's well known that defection, as the Nash equilibrium, is the optimal choice for individuals in a single round for this game. <cit.>. 
But the repeated PDG potentially provides an escape to cooperation revealed in both theoretic predictions and experiments <cit.>. Previous studies show that the equilibriums of repeated PDGs depend crucially on whether a game is finitely or infinitely repeated <cit.>. There are some exceptions, however, that incomplete information, e.g. uncertain preferences of the players <cit.>, uncertain number of rounds <cit.>, termination rule <cit.> or rewards with noise <cit.>, etc., can lead to the altruistic cooperation even in the finite repeated PDGs. This leads another interesting theme in repeated PDGs about the relevance of strategies to the cooperation level. Researchers have uncovered a number of strategies for actions, in which individuals' future deeds adhere to particular rules based on scant historical data, and they have investigated the dependence of the cooperation level on these rules <cit.> as well as the choices made by the individuals within these strategies for actions <cit.>. Based on the above works and the introduction of framework of evolutionary game theory, considerable progresses have been made later on around the mechanism behind the cooperation emergence among unrelated individuals <cit.>. It is notable that in most of these previous studies, the strategies are not evolved or fixed once adopted, showing weak adaptivity towards the circumstance. Humans and many other creatures, however, have a sophisticated cognitive capacity, such as reinforcement learning <cit.>, behaviour prediction <cit.>, intention recognition <cit.>, and intelligence from social interactions  <cit.>. A new paradigm accounting for the adaptivity is needed to understand the cooperative behaviours in reality. The past decades have witnessed the flourishing of machine learning, which has rooted in human cognition and neuroscience <cit.> and has many successful applications in many fields <cit.>. Reinforcement learning, as one of most influential branches of machine learning, is found particular suitable for understanding the evolution cooperation <cit.>. Reinforcement learning is originally designed to make optimal decision for maximizing the rewards for the given states through exploratory experimentation <cit.>, and this setup exactly matches with the evolution of cooperation. Actually, some studies have already adopted the idea of the reinforcement learning to investigate the repeated PDGs <cit.>. With this new paradigm, new insights are obtained by studying the impact of different factors on cooperation <cit.>, e.g., it's found that cooperation can benefit from improved exploration methods <cit.>, self-adaptive memories <cit.>, evolved payoffs <cit.> and even intrinsic fluctuations <cit.>. Some other works discuss the optimization of algorithms to facilitate the cooperation and increase rewards <cit.>, or find strategies to play against the classical strategies <cit.>. In parallel, some theories have also been developed, such as symmetric equilibrium <cit.>, symmetry breaking <cit.> or fundamental dynamics <cit.>. Building on these studies, researchers have inspected cooperation from self-organization, in populations or multi-agent systems, which aim to continuously adjustable strategies by learning instead of fixed strategies, such as imitation learning in the classical evolutionary games <cit.>. 
In spite of the progresses in the employment of reinforcement learning in explaining how humans deal with various tasks <cit.>, there are still a number of interesting questions about the cooperation mechanism: Can we use reinforcement learning, rooted in psychology and neuroscience, to understand cooperation in the repeated PDGs observed by economists? Can we bridge the strategies (equilibriums) of classical economics and the policies (behaviour modes) of machine learning? Addressing these questions is of paramount significance because it helps us to understand the connections and differences between social and artificial intelligence systems. This paper is organized as follows. In Sec. <ref>, we present a general model that combines reinforcement learning algorithms with repeated games for two agents. The simulation results of strict prisoners' dilemmas game are shown in Sec. <ref>. To investigate the mechanism of the phenomena, we make some analysis in Sec. <ref>, which consists of four parts. Finally, the conclusions and discussion are given in Sec. <ref>. § REINFORCEMENT LEARNING FOR REPEATED GAMES We start by introducing a general Reinforcement Learning Repeated Game (RLRG) for two agents, say “Iris” and “Jerry” (abbreviated as “i” and “j” in notation), specifically they adopt the Q-learning algorithm <cit.>. In this algorithm, Iris/Jerry may take an action against its co-player from an action set 𝒜 = a_1, ⋯, a_n_a when it is in one of n_s states from the state set 𝒮 = s_1,⋯,s_n_s. The goal is to find a policy that maximizes the expected cumulative reward. At τth round, the state of each agent consists of its own and its co-player's actions in the previous round, i.e. s(τ) = a(τ-1)ã(τ-1), where a and ã denote the agent and its co-player's actions, respectively. Therefore, the state set is the Cartesian product of action set 𝒜×𝒜→𝒮. In the Q-learning algorithm <cit.>, Iris/Jerry seeks for optimal policies through the so-called Q-table by learning. Here, the Q-table is a matrix on Cartesian product for states (columns) – actions (rows) 𝒮×𝒜→ℝ: Q(τ) = ( [ Q_s_1,a_1(τ) ⋯ Q_s_1,a_n_a(τ); ⋮ ⋱ ⋮; Q_s_n_s,a_1(τ) ⋯ Q_s_n_s,a_n_a(τ); ]). With a Q-table in hand, Iris/Jerry takes action following its Q-table a(τ) →max_a'{Q_s,a'(τ)}, a'∈𝒜, with probability 1-ϵ, or a random action within 𝒜 otherwise. Here, max_a'{Q_s,a'(τ)} is the action corresponding to the maximum Q-value in the row of state s. The parameter 0 < ϵ≪ 1 is to introduce some random exploration besides the exploitation of the Q-table. When Iris and Jerry make their decisions, they receive their own rewards according to their actions and a payoff matrix Π = ( [ Π_a_1a_1 ⋯ Π_a_1a_n_a; ⋮ ⋱ ⋮; Π_a_n_aa_1 ⋯ Π_a_n_aa_n_a; ]), where Π_aã = Π(τ) denotes the agent's reward if the agent with action a is against action ã of its co-player. At the end of τth round, Iris/Jerry update the element Q_s,a for its Q-table as follows Q_s,a(τ + 1) = g( Q(τ),r(τ)) = (1-α)Q_s,a(τ) +α[γ Q_s',a'^max(τ)+r(τ)], where α∈ (0, 1] is the learning rate reflecting the strength of memory effect, a large value of α means that the agent is forgetful since its previous value of Q_s,a(τ) is quickly modified. r(τ) = Π(τ)=Π_a(τ)ã(τ) is the agent's reward received in the current round τ. γ∈ [0,1) is the discount factor determining the importance of future rewards since Q_s',a'^max is the maximum element in the row of next state s' that could be expected. In such a way, the Q-table is updated, and the new state becomes s(τ+1) = a(τ)ã(τ), and a single round is then completed. 
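Before the formal pseudo code, the update rule above can be illustrated with a short self-contained simulation. The sketch below uses the strict prisoner's dilemma payoffs introduced later in the paper (R, S, T, P = 1, -b, 1+b, 0) and our own variable names; it is meant only to show how the two Q-tables, the ε-greedy choices and the joint-action states fit together in one round, not to reproduce the exact settings of the paper.

```python
import numpy as np

C, D = 0, 1                                   # the two actions

def payoff(b):
    # Strict prisoner's dilemma payoffs Pi[a_self, a_other] with temptation parameter b
    return np.array([[1.0, -b],
                     [1.0 + b, 0.0]])

def choose(Q_row, eps, rng):
    # epsilon-greedy action selection from one row of a Q-table
    return rng.integers(2) if rng.random() < eps else int(np.argmax(Q_row))

def run(alpha=0.1, gamma=0.9, b=0.2, eps=0.01, steps=200_000, seed=1):
    """Two Q-learning agents playing the repeated SPDG.  A state is the joint action of
    the previous round, encoded as s = 2*a_self + a_other, i.e. (CC, CD, DC, DD)."""
    rng = np.random.default_rng(seed)
    Pi = payoff(b)
    Q = [np.zeros((4, 2)), np.zeros((4, 2))]  # one 4x2 Q-table per agent
    s = [rng.integers(4), rng.integers(4)]    # arbitrary initial states
    coop = 0
    for _ in range(steps):
        a = [choose(Q[0][s[0]], eps, rng), choose(Q[1][s[1]], eps, rng)]
        r = [Pi[a[0], a[1]], Pi[a[1], a[0]]]
        for k in (0, 1):
            s_next = 2 * a[k] + a[1 - k]
            # Q <- (1 - alpha) * Q + alpha * (gamma * max_a' Q[s'] + r)
            Q[k][s[k], a[k]] = ((1 - alpha) * Q[k][s[k], a[k]]
                                + alpha * (gamma * Q[k][s_next].max() + r[k]))
            s[k] = s_next
        coop += (a[0] == C) + (a[1] == C)
    return coop / (2 * steps), Q              # average cooperation preference and Q-tables
```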
To summarize, the pseudo code is provided in Algorithm <ref>. § SIMULATION RESULTS FOR PRISONER'S DILEMMA GAME In this work, Iris and Jerry play the Strict Prisoner's Dilemma Game (SPDG) within our RLRGs framework, and we focus on the evolution of the cooperation preference f_c. Specifically, the action and state sets are respectively 𝒜 = a_1,a_2=C, D, 𝒮 = s_1,s_2,s_3,s_4=CC, CD, DC, DD, and correspondingly the time-evolving Q-table Q(τ) = ( [ Q_s_1,a_1(τ) Q_s_1,a_2(τ); Q_s_2,a_1(τ) Q_s_2,a_2(τ); Q_s_3,a_1(τ) Q_s_3,a_2(τ); Q_s_4,a_1(τ) Q_s_4,a_2(τ); ]) = ( [ Q_cc,c(τ) Q_cc,d(τ); Q_cd,c(τ) Q_cd,d(τ); Q_dc,c(τ) Q_dc,d(τ); Q_dd,c(τ) Q_dd,d(τ); ]). The payoff matrix in the Sec. <ref> is rewritten as Π = ( [ Π_cc Π_cd; Π_dc Π_dd; ]) = ( [ R S; T P; ]) = ( [ 1 -b; 1+b 0; ]), where Π is with a tunable game parameter b∈ (0, 1), controlling the strength of the dilemma. A larger value of b means a higher temptation to defect where cooperators are harder to survive. Here, we define the average cooperation preference for Iris and Jerry within tth window in simulation as follows f_c(t):=∑_τ=t-T^t∑_k∈{i,j}δ(a^k(τ) - C)/2T, where δ(⋯) is the Dirac delta function and a^i,j are Iris or Jerry's actions. As can be seen that a sliding window with the length of T is used for averaging. The time series of average preference can help us for better monitoring the evolution trend of different actions. As t and T tend to infinity, f_c(t) is the average cooperation preference over all time and can be denoted as f̅_c. In our practice, we use sufficiently large t and T instead of infinity. Apart from the average cooperation preference, we are also interested in the degree of fairness for agents. For example, in the case of C-D pair, the defector takes advantage over the cooperator, yielding an unfair reward division. To measure the degree of unfairness, we defined it as the average reward difference between Iris and Jerry within consecutive rounds Δ R := ∑_τ=t-T^t|∑_τ^'=τ-1^τΠ^i(τ^') - Π^j(τ^')|/T, in which Π^i,j are the rewards for Iris and Jerry. Obviously, when Δ R→ 0, it means the two agents statistically keep the action symmetry with each other, without apparent exploitation detected. Otherwise, a symmetry-breaking in their action/reward is present. By fixing the dilemma strength b = 0.2, We firstly provide the average cooperation preference ⟨f̅_c⟩ in the parameter domain of (α,γ) in Fig. <ref>(a). The results show the domain can be roughly divided into three regions. In Region 1, where α is low and γ is high, the two agents maintain a high level of cooperation, showing that a strong memory effect and the long-sight facilitate the cooperation to thrive. By contrast, the opposite setup where agents are both forgetful and short-sighted results in a low cooperation preference, full defection is seen in Region 2. Starting from Region 1, ⟨f̅_c⟩ decreases as the agents gradually become short-sighted (i.e. by decreasing γ), but cooperation does not disappear as long as the value of α is low enough, which is Region 3. This means that cooperators still survive as long as the agents are not forgetful. In addition, Fig. <ref>(b) also provides the average reward difference ⟨Δ R⟩ in the same domain as in Fig. <ref>(a). We can see that the reward difference is almost zero within all the three Regions (I, II, and III), except at the boundaries between Regions 1 – 2, and Regions 2 – 3. 
This means high unfairness (corresponding to frequent appearance of C-D or D-C cases) is observable only in the domain close to the boundary between 1 and 2; otherwise fairness is well maintained. Fig. <ref> first shows some typical time series of cooperation preference for different combinations of (α, γ) in Fig. <ref>(a). As can be seen, for a low learning rate α, the cooperation preference is relatively stable and f_c increases with the discount factor γ [Fig. <ref>(a)]. With the increase of α, a significant decrease in f_C is detected, and the preference becomes quite volatile [Fig. <ref>(b)]. As α continues to increase, the cooperation is almost completely unsustainable in all three cases in [Fig. <ref>(c)]. By comparison, a high value of γ is more likely to yield a high level of cooperation for a fixed learning rate. These results and Fig. <ref>(a) indicate that the combination of a relatively low learning rate and high discount factor is the ideal scenario to sustain cooperation. § MECHANISM ANALYSIS §.§ Coordinated optimal policies and modes In the classical Q-learning, agent optimizes the value function of state-action pairs for the optimal policy π^* to maximize the total expected cumulative reward. The value function (i.e. the Q-table), is characterized by a set of Bellman equations as Q^π^*_sa = ∑_s^'∈𝒮p(s'|s,a)[r(s,a,s') +γ∑_a^'∈𝒜π^*(s^',a^')Q^π^*_s^',a^'] Here, p(s'|s, a) is the transition probability for the agent from state s to s' when it takes action a at s, r(s,a,s') is the reward received for the agent. In a static environment, e.g. walking the labyrinth game, the optimal policy π^* found by the agent is fixed because of the fixed environment of labyrinth thus also the time-independence of p(s'|s,a). But, this is not the case in RLRGs. Because the environment is now composed of agents' states, which are time-varying. It's thus obvious that p(s'|s,a) for Iris/Jerry is time-dependent and co-evolves with the Q-tables of both agents together with their policies <cit.>. This means that Iris and Jerry must seek for their optimal policy in a coordinated way with the other. Here we define a set of Coordinated Optimal Policies (COP) denoted as π^i*,π^j*, where π^i* is optimal for Iris only if Jerry's policy is π^j*, and vice versa. The COPs are able to remain unchanged for some time then the system will fall into some corresponding Coordinated Optimal Modes (COMs), which consist of circular state transitions. Here, we check the COMs instead of directly examining the COPs. By careful examination, we find that the system falls into a few modes once π^i,j remain unchanged for some time, which can be classified into 12 circular modes as ϵ→ 0 [see Table <ref> in <ref>]. Fig. <ref> gives seven short ones from m_1 to m_7. Note that, due to the presence of exploration, long modes generally are more unstable compare to the shorter ones [see <ref>]. Here these modes can be classified according to their state symmetry. A mode is called symmetric if the state experience is statistically the same if Iris and Jerry are interchanged, otherwise it is called asymmetric. For example, the mode of CC,CC or DD, DD in Fig. <ref>(a) is obviously symmetric, CD ↔ DC, DC ↔ CD is also in symmetry. But CC ↔ CD, CC ↔ DC is asymmetric, since one agent always acts as a cooperator while the other adopts between cooperation or defection periodically. In an asymmetric mode, there is a reward difference Δ R between Iris and Jerry, and this difference is disappeared in symmetric modes. 
Note that, due to the presence of transition between two states, we need to compute the accumulated rewards for two consecutive rounds as the definition given in Eq. (<ref>). Here, COMs can be analogous to the equilibriums in the finite classical repeated PDGs <cit.>. According to observation, we learn that the number of COMs lies between 1 and 4 for a COP. §.§ Transition of States The above analysis indicates that the probability of state, p(s), and the probabilities of state transitions, p(s^'|s), reflects agents' COPs and corresponding COMs. Since the states and state transitions for both agents can be mapped to each other as updating protocol shown, we only need to examine the cooperation preference of one agent through its p(s) and p(s^'|s). In here, we show the distribution of states in the parameter space of α,γ in Fig. <ref> (a-c) firstly. The state of CC and DD dominate in Region 1 and  2 respectively, while the two coexist in Region 3. However, CD and DC appear only at the boundary between 1 and 2. To characterize the correlation between the consecutive states, we compute the mutual information I(s;s^') defined as I(s;s^'):=∑_s∈𝒮∑_s^'∈𝒮p(s,s^')logp(s,s^')/p(s)p(s^'), which are shown in Fig. <ref> (d). Results show that the mutual information between consecutive states is weak in the Region 1 or 2, but is strong at their boundary. Strong mutual information implies that strong correlation between consecutive states and thus predictability in time. It is well known that the dynamics near the criticality has long-term correlations, and a tiny perturbation is able to trigger a series of large fluctuations. Thus, the observations suggest that there is a bifurcation at the boundary in α, γ, where the COP gradually changes as the learning parameters are varied. Fig. <ref> exhibits p(s) and p(s'|s) of one agent for four typical combinations of (α, γ) after the system becomes statistically stable. Specifically, we choose α, γ from Region 1, the boundary between Regions 1 and 2, Region 2 and Region 3 in Fig. <ref>, respectively. The observations are made respectively as following: (1) In Fig. <ref> (a), where the parameters are located in Region 1, CC is shown to be the primary state and the other states are rarely seen according to the probabilities of p(s). This is because of the dominating transitions CD→ DD→ CC and DC→ DD→ CC are dominant according to p(s^'|s). With these, one learns that π^i* and π^j* in the unique COP are the same for both that are “win-stay, lose-shift” (WSLS). It implies that the exploitation by the agent's defection incurs retaliation from its co-player, and that revenge will ultimately lower its payoff and the agent is forced to cooperate. (2) For the case at the boundary between 1 and 2, Fig. <ref> (b) shows the state is mainly composed of CC and DD mixture, while the fractions of the rest states are negligible. The reason for the absence of CD and DD is due to the high probabilities of CD→ CC and DC→ CC. This means that the cooperator becomes tolerant in the face of co-player's defection, but this tolerance also causes more exploitation to appear, i.e., the possibilities of CC→ CD and CC→ DC also increase compare to Fig. <ref> (a). The observations indicate that, on one hand, agents use co-player's tolerance to get more rewards by defection, but on the other hand, they do not want to break cooperation completely. However, once both defect, they stay in defection with a large probability. Though, still there is some chance to rebuild cooperation by DD→ CC. 
The results suggest that tolerance is a precursor of cooperation becoming fragile. (3) When located in Region 2, Fig. <ref> (c) shows that the state DD dominates. The transitions to state DD are also non-negligible, i.e. CC→ DD, CD→ DD, and DC→ DD. Consequently, agents have almost no chance to rebuild cooperation once they have defected. That is to say, both π^i* and π^j* are “all-defection” (All-D), i.e., the COP is All-D, All-D in Region 2. (4) Fig. <ref> (d) shows the scenario in Region 3, where the state is also mostly composed of a mixture of CC and DD, as in the case of Fig. <ref> (b). However, compared to case (2), the CC state now becomes unstable, although the possibility to rebuild cooperation is non-zero. Overall, in this region, agents' policies preserve most features of All-D, but also retain some of the WSLS property, due to the presence of DD→ CC. Cases (1)-(3) suggest that a strong memory may be a prerequisite for rebuilding cooperation when cooperation is broken. §.§ Temporal Correlation To further capture the correlation in time between consecutive states, we compute the joint probability p(s,s^') for the state transition from s to s^'. As a benchmark <cit.>, we also compute the products of the state probabilities p(s)p(s^'). When p(s,s^')>p(s)p(s^'), there is a positive correlation between the two consecutive states compared to purely random occurrence, and vice versa. Figure <ref> displays p(s,s^') and p(s)p(s^') for the four typical cases with the same parameter combinations as in Fig. <ref>. Figure <ref>(a) and (c) show that p(CC,CC) and p(DD,DD) are the only dominant joint probabilities in Regions 1 and 2, respectively. Since the gaps between p(s,s^') and p(s)p(s^') are almost invisible in these two cases, we compute their Cohen's kappa coefficients, given in Fig. <ref> of <ref>, which verify positive correlations between consecutive CCs (DDs) in Region 1 (2). This observation means that the COMs of CC, CC and DD, DD are quite stable in Regions 1 and 2, respectively. The reason why cooperation is preferred in Region 1 is that cooperation must build on the predictability of the agents' policies towards each other, and this predictability increases with γ but decreases with increasing α [Eqs. (<ref>) and (<ref>)]. Therefore, a high level of cooperation is expected in Region 1, where γ is large and α is small. By contrast, weakened cooperation is observed, due to the inferior predictability of the opponent, as α increases and γ decreases. At the boundary between Regions 1 and 2, Fig. <ref> (b) shows that there are positive correlations between CC and all states except DD, while DD is only positively correlated with itself. This indicates that CC, CC starts to become unstable as the competing mode CC↔ CD, CC↔ DC emerges, and DD, DD starts to stabilize. The results imply that, at the boundary, there is competition between tolerance and revenge for agents in dealing with the exploitation from their opponents. In Region 3, Fig. <ref>(d) shows that p(DD,DD) is much higher than all other joint probabilities, while the two states CC and DD coexist. The correlations in p(DD,DD) and p(CC,CC) are both negative, whereas those in p(CC,DD) and p(DD,CC) are positive (see Fig. <ref> of <ref> for their Cohen's kappa coefficients), which implies an enhanced propensity for the transitions CC↔ DD compared to the benchmark level. §.§ Boundary of High Cooperation Level Our simulation shows that WSLS,WSLS and CC↔ CD, CC↔ DC coexist at the boundary.
According to the clue, we conjecture that there are two competitive balances at the boundary: (1) the selection between WSLS,WSLS and CC↔ CD, CC↔ DC; (2) the switch between CC↔ CD, CC↔ DC and WSLS,WSLS with perturbations due to the exploration. For the first balance, it is the competition between the revenge of WSLS,WSLS and the tolerance of CC↔ CD, CC↔ DC in dealing with exploitation. Thus, as the values for the revenge and the tolerance, Q_cd,d^w and Q_cd,c^t are our pivotal Q-values. The analysis in  <ref> shows that the Q-values on a key path of COM/COP will converge to fixed values [as Eqs. (<ref>) and  (<ref>) shown]. Accordingly, the converged Q-values for WSLS, WSLS {[ Q_cc,c^w* = Q_dd,c^w* = Π_cc/1-γ,; Q_cd,d^w* = Q_dc,d^w* = Π_dd + γΠ_cc/1-γ, ]. and for the mode CC↔ CD, CC↔ DC are {[ Q_cc,c^t* = Π_cd+γΠ_cc/1-γ^2,; Q_cd,c^t* = Π_cc+γΠ_cd/1-γ^2, ]. {[ Q_cc,d^e* = Π_dc+γΠ_cc/1-γ^2,; Q_dc,c^e* = Π_cc+γΠ_dc/1-γ^2. ]. Due to the asymmetry of the COM, we use superscript e and t to distinguish the exploiters and the tolerant agent's Q-values, respectively. At the boundary, the constraint for the tolerant agents is Q_cd,d^w*=Q_cd,c^t*, i.e., γ = -1-b/2+1/2√(5-2b+b^2). For the second balance, after entering CC↔ CD, CC↔ DC, the exploiter confronts staying on the current COM or switching to the competing COM of WSLS,WSLS when faced with the tolerant agent's random exploration. As a COM or COP, CC↔ CD, CC↔ DC and WSLS, WSLS obviously have some stability at the boundary. Thus, for the exploiter, the Q-values on the key paths of CC↔ CD, CC↔ DC and WSLS,WSLS have converged to the fixed values correspondingly before the random exploration, i.e., Q_cc,d→ Q_cc,d^e*, Q_dd,c→ Q_dd,c^w* and Q_cc,c→ Q_cc,c^w*. If the tolerant agent defects at the state CC under exploration, the balance of whether or not to change the path for the exploiter is decided by Q_cc,c and Q_cc,d, i.e., Q_cc,d = (1-α)Q_cc,d^e* + α(γ Q_dd,c^w* + Π_dd) = Q_cc,c^w* as Fig. <ref> shows. So, another constraint for the boundary is Q_cc,d=Q_cc,c^w*, i.e., α=b/1+b-γ^2. After substituting b = 0.2, the temptation b of Fig. <ref>, into Eqs. (<ref>) and  (<ref>), the predicted boundaries well match the results of our simulations as shown Fig. <ref>. Our analysis shows that, with the increase of α and decrease of γ, CC, CC becomes unstable because tolerance replaces immediate revenge of WSLS in the face of exploitation from the opponent, leading to the degradation of cooperation. In addition, the analysis also confirms that a transition occurs at the boundary in Fig. <ref>(d), where WSLS,WSLS loses stability and COP changes gradually with the change of learning parameters. §.§ Impact of Random Exploration We further analyze the impact of the random exploration on cooperation. For comparison, we turn off the exploration (by setting ϵ=0) after the evolution is stabilized to reveal the difference, which is shown as a function of the game parameter b, see Fig. <ref>. When the exploration is ceased, the evolution fails to go through from one mode to another, but falls into a single mode. As can be seen, only some of them e.g. m_1, m_2, m_3 and m_6 are present acting as COMs and the probabilities of the aforementioned short modes meet ∑_μ=1^7p(m_μ) ≈ 1. Fig. <ref>(a-b) and (d) show that p(m_1) is decreasing as b increases, while p(m_2) is doing the opposite. p(m_3) however shows a non-monotonic dependence on b from increasing to decreasing. 
Furthermore, an asymmetric COM, m_6, appears in (a-b) but not in (c-d), and the mode also shows non-monotonic change. According to Fig. <ref> (b), we conjecture that m_6, resulting from tolerance, is a metastable mode between m_1 and m_2. The reason is that (b) shows m_6 seemingly redundant around b = 0.19 as α,γ = 0.5, 0.9, which is at the boundary of 1 (see Fig. <ref> and Eq. (<ref>)). To investigate the impacts of exploration, we compute the difference between the cooperation level with and without exploration δf̅_c := ⟨f̅_c⟩ - ⟨ f_c_0⟩ = ⟨f̅_c⟩-∑_μ=1^7 p(m_μ)f_c(m_μ), where ⟨f̅_c⟩ is the observed cooperation prevalence before the exploration is turned off, f_c(m_μ) is cooperation preference of the corresponding mode m_μ, e.g., the preferences for m_ 3 and m_6 are f_c(m_3) = 0.75 and f_c(m_6) = 0.5. The second term as a benchmark is thus the expected cooperation prevalence in the absence of exploration. Fig. <ref> also plots δf̅_c as a function of temptation b for the same parameter combination. Firstly, there is little impact of exploration in Fig. <ref>(b) and (c) since δf̅_c≈ 0. However, this is not case in Fig. <ref>(a) and (d). In both cases, the presence of exploration improves the cooperation preference at the small range of b. But, as b increases, the exploration suppresses the cooperation preference in Fig. <ref>(a), but shows little impact in Fig. <ref>(d). This is because exploration does not play a significant role in the pure modes m_1 and m_2 as in (b, c). But when multiple states coexist such as in (a-d) for an intermediate b value, the transition among the evolved states yields a non-trivial impact of cooperation prevalence. This is quite different from the previous work <cit.>, where the exploration always facilitates the cooperation under scheme of reinforcement learning. § CONCLUSION AND DISCUSSION In the work, we introduce a general reinforcement learning for repeated dyadic games, where each agent optimizes it’s policies through Q-learning algorithms. Specifically, we focus on the impacts of the learning rate and discount ratio on the evolution of cooperation in the strict prisoner’s dilemma game. We reveal that agents can achieve a high level of cooperation when they have a strong memory and a confident foresight for the future. However, cooperation is completely broken when the agents become forgetful or short-sighted. To proceed, we examine the agents' policies by checking their probabilities of states and states transitions. In the high cooperation region, both Q-tables exhibit WSLS property as their Coordinated Optimal Policy (COP). In contrast, both agents are doomed to defect when their COP is composed of All-D for both in defection region. The most striking case occurs on the boundary of these two regions, where one agent tolerates its opponent’s defection and maintains cooperation, while the other takes the advantage to maximize its own rewards. Such tolerance may be regarded as a precursor to the instability of cooperation. A mixture of both WSLS-like and All-D-like policies finds its niche when agents are endowed with a strong memory but a short sight, which allows a low level of cooperation. Moreover, analogous to the equilibriums of the finite repeated PDGs <cit.>, we find that the agents' behavior can be decomposed into one of several circular Coordinated Optimal Modes (COMs). 
The time correlation between consecutive states are also given, and the pronounced mutual information between consecutive states at the boundary indicates some sort of criticality relating to bifurcation of COPs. Based on evolution of COMs and COPs, our theoretical analysis give the boundary of high cooperation and verifies the indication by showing a decent match with the numerical results. Finally, we also examine the effects of exploration rate on cooperation. In contrast to the previous work <cit.>, its impact depends on the composition of COMs, could be positive, negative, or no influence at all. In brief, by establishing an exploratory framework for the analysis of dynamics of RLRGs, we show some fundamentally interesting results. However, our findings leave many questions unanswered. For example, an interesting perspective is to relate the COP to dynamical attractors, but a proper formulation still needs to be shaped. Addressing this question could help to obtain all COPs in complex scenarios. A further open question of special significance is to identify effective early-warning signals what could this be like? of failure in cooperation, where the theory of criticality may lend a hand to prevent irreversible and disruptive defective behaviours. § ACKNOWLEDGMENTS We are supported by the Natural Science Foundation of China under Grant No. 12165014 and the Key Research and Development Program of Ningxia Province in China under Grant No. 2021BEB04032. CL is supported by the Natural Science Foundation of China under Grant No. 12075144. figuresection § MORE DETAILS ON COMS §.§ Learning Parameters and Convergence As stated in Subsec. <ref>, the state transitions in our RLRG model will fall into one of circular modes as ϵ→ 0 if Iris and Jerry's policies remain unchanged for some time. In Table. <ref>, we list all modes of circular state transitions under SPDG setting, where “cycle-m” means the mode contains m-states, e.g., cycle-1 is a single-state self-loop. Besides, all modes in cycle-3 are asymmetric, while the mode in cycle-4 is symmetric. For cycle-1 and -2 modes, the symmetric and asymmetric modes are separated by semicolons. For a cycle-1 mode, the convergence rate of the Iris and Jerry's Q-tables is 1-α+αγ, which increases with γ but decreases with increasing α. While, for a cycle-m mode with m≥ 2, the dynamics of Q-table for any agent k∈i,j in a cycle is as follows ([ Q^k_s_1,a_1(τ+ m); Q^k_s_2,a_2(τ+ m); Q^k_s_3,a_3(τ+ m); ⋮; Q^k_s_n,a_n(τ+ m) ]) = ([ 1 0 0 ⋯ 0; 0 1 0 ⋯ 0; 0 0 1 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; αγ 0 0 ⋯ 1-α ]) ⋯([ 1 0 0 ⋯ 0; 0 1-α αγ ⋯ 0; 0 0 1 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ 1 ])· ([ 1-α αγ 0 ⋯ 0; 0 1 0 ⋯ 0; 0 0 1 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ 1 ])·([ Q^k_s_1,a_1(τ); Q^k_s_2,a_2(τ); Q^k_s_3,a_3(τ); ⋮; Q^k_s_m,a_m(τ) ]) +([ αΠ_s_2; αΠ_s_3; αΠ_s_4; ⋮; αΠ_s_1+α^2γΠ_s_2 ]) =([ 1-α αγ 0 ⋯ 0; 0 1-α αγ ⋯ 0; 0 0 1-α ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; (1-α)αγ α^2γ^2 0 ⋯ 1-α ])·([ Q^k_s_1,a_1(τ); Q^k_s_2,a_2(τ); Q^k_s_3,a_3(τ); ⋮; Q^k_s_m,a^k_m(τ) ]) + ([ αΠ_s^k_2; αΠ_s^k_3; αΠ_s^k_4; ⋮; αΠ^k_s_1+α^2γΠ_s_2 ]) and the corresponding state transitions are shown in Fig. <ref>(a). Here, 𝒮^k_μ = s_1,⋯, s_m is the set of k’s states in the mode, e.g., k's sets and its co-player are CC, CD and CC, DC in CC↔ CD, CC↔ DC, respectively. In the mode, agents' Q-tables must meet the constraints max_a'{Q^k_s_n,a^'}=Q^k_s_n,a_n, ∀ k∈{i,j}, s_n∈𝒮^k_μ. Here, the constraints for a mode is increased with the length of the mode. So, the long modes are generally more fragile than the short as mentioned in Subsec. <ref>. 
The eigenvalues of the matrix on the right-hand side of Eq. (<ref>) satisfy (λ+α-1)^m = (αγ)^mλ, and are the horizontal coordinates of the intersections of the curves y(λ) = (λ+ α - 1)^m and y(λ) = (αγ)^mλ. It follows that the maximum real eigenvalue λ_max is greater than 1-α. To investigate the effects of the learning parameters on the convergence rate, we differentiate both sides of Eq. (<ref>) with respect to α and γ, and obtain m(λ_max+ α - 1)^m-1+m(λ_max+ α - 1)^m-1∂λ_max/∂α = [m(λ_max+ α - 1)^m/α +(λ_max+ α - 1)^m/λ_max∂λ_max/∂α] and m(λ_max+ α - 1)^m-1∂λ_max/∂γ = [m(λ_max+ α - 1)^m/γ + (λ_max+ α - 1)^m/λ_max∂λ_max/∂γ]. According to the above equations, we get ∂λ_max/∂α = mλ_max(1-λ_max)/α[(α-1)+λ_max(1-m)]<0 and ∂λ_max/∂γ = λ_maxm(λ_max+α-1)/γ[(1-α)+λ_max(m-1)]>0 under m≥ 2 and (1-α)<λ_max<1. So, in any mode, the convergence rates of the Q-tables for both agents increase with α but decrease with γ. §.§ Stability of COMs/COPs Any Q-value, Q^k_s_n,a_n, in the mode of Fig. <ref> will converge to the fixed value Q^k*_s_n,a_n = Π_s_n+1+γΠ_s_n+2+⋯γ^m-lΠ_s_m + γ^m-l+1Π_s_1 + ⋯γ^m-1Π_s_n-1/1-γ^m if the mode is a stable COM. It is obvious that the next state is determined by the current states if both agents' policies remain unchanged and ϵ→ 0. In this case, the agents enter the COM through determined state-transition paths, called key paths. The Q-values of the corresponding policies of both agents then also converge along these paths. Without loss of generality, we only focus on the Q-values in one of the key paths, 𝒫^k_ν= s_1̅, s_2̅,⋯, s_l̅, as shown in Fig. <ref>(a). The Q-values in 𝒫^k_ν will also converge to fixed values if they always satisfy the following constraints max_a'{Q^k_s_n,a^'}=Q^k_s_n,a_n, ∀ k∈{i,j}, s_n∈𝒫^k_ν, i.e., if both agents' policies on the path are unchanged. According to Fig. <ref>(a), we get the stable Q-values on the path as Q^k*_s_i,a_i = Π_s_i+1 + γΠ_s_i+2+⋯ + γ^l-iΠ_s_l+γ^l-i+1Q^k*_s_n,a_n. Here, s_n is the first state of the COM that is reached along the path. It is obvious that, for a given exploration rate ϵ, the convergence rate also increases with α but decreases with γ. Note that the Q-values in the modes evolve much faster than those on the key paths to the mode, while the Q-values on the key paths evolve much faster than those on the other paths. Each state on a key path or in a COM has only one state as its next transition. This means that the correlation between consecutive states is positive if they are also consecutive states in a COM or a key path. But p(s) for s in 𝒮^k_μ is much greater than for s in 𝒫^k_ν because the only way to leave a stable COM is to explore (see Fig. <ref>(a) and (c)). In Fig. <ref> (a), the COM will be broken as soon as any constraint in Eq. (<ref>) is unsatisfied. Moreover, the state transitions cannot return to the mode through 𝒫_μ^k after leaving the mode by exploration, as long as the constraints in Eq. (<ref>) cannot be satisfied. In both cases, at least one agent has changed its policy at some state and the COP is not unique. Thereafter, the COP may switch by exploration. This means that a COM might become unstable before the Q-tables of the COM converge to their fixed values. Our analysis shows that a high α and a low γ can reduce the robustness of both agents' policies in competing COPs and thus shorten the characteristic time of the corresponding COMs. This is the reason why f_c is more volatile for high α and low γ, as Fig. <ref> shows. Furthermore, cooperation also becomes fragile when the co-player's policy becomes unpredictable, because all-defection is then the best policy for either agent.
It is important to note that when the exploration rate is low, it is the fluctuation of exploration, not the exploration itself, that weakens the robustness of the competing COPs. In contrast, exploration may enhance the robustness of the COPs, because exploration helps maintain the constraints in Eqs. (<ref>) and (<ref>). §.§ Supplementary Simulations for Tipping Points According to Fig. <ref> in Sec. <ref>, we conjecture that the tipping point for whether exploration can promote cooperation always corresponds to p(m_2) = 0.5. To verify this, we give further results on p(m_μ) and δf̅_c in Fig. <ref>. The results, especially (a), support our conjecture. § COHEN’S KAPPA COEFFICIENTS FOR TEMPORAL CORRELATIONS In the analysis of COMs in Sec. <ref>, we use the gap between p(s,s^') and p(s)p(s^') to show the correlation between two consecutive states. However, the gap becomes invisible when p(s,s^') is close to 0 or 1. We therefore employ Cohen's kappa coefficient for better visualisation. The Cohen's kappa coefficient of two consecutive rounds is defined as κ:=(p(s,s^')-p(s)p(s^'))/(1-p(s)p(s^')). Fig. <ref> shows the results corresponding to Fig. <ref>. As Fig. <ref>(a) and Fig. <ref>(a) show, the correlation between consecutive CCs is positive when CC is the dominant state, i.e., for low α and large γ. Similarly, consecutive DDs also have a positive correlation when DD is the dominant state under high α. For low α and γ, even though DD is still the dominant state, the correlation between DDs is negative, as Fig. <ref>(d) and Fig. <ref>(d) show.
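For reference, a minimal sketch of how the kappa coefficient defined above can be estimated from a recorded state sequence; the sequence shown is a made-up example, not simulation output.

```python
def kappa_consecutive(states, s, s_next):
    """Cohen's kappa for observing state s followed by s_next in consecutive rounds."""
    pairs = list(zip(states[:-1], states[1:]))
    p_s = sum(x == s for x, _ in pairs) / len(pairs)                   # p(s)
    p_sp = sum(y == s_next for _, y in pairs) / len(pairs)             # p(s')
    p_joint = sum(x == s and y == s_next for x, y in pairs) / len(pairs)  # p(s, s')
    return (p_joint - p_s * p_sp) / (1.0 - p_s * p_sp)

example = ["CC", "CC", "CD", "CC", "CC", "CC", "DD", "CC"]   # made-up sequence
print(kappa_consecutive(example, "CC", "CC"))
```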
http://arxiv.org/abs/2307.04803v2
20230710180019
The irreversible relaxation of inflation
[ "Robert Alicki", "Gabriela Barenboim", "Alejandro Jenkins" ]
gr-qc
[ "gr-qc", "astro-ph.CO", "hep-ph", "hep-th" ]
http://arxiv.org/abs/2307.04563v1
20230710135209
Automatically detecting activities of daily living from in-home sensors as indicators of routine behaviour in an older population
[ "Claire M. Timon", "Pamela Hussey", "Hyowon Lee", "Catriona Murphy", "Harsh Vardan Rai", "and Alan F. Smeaton" ]
cs.HC
[ "cs.HC", "cs.LG" ]
Timon et al. Digital Health 1Centre for eIntegrated Care (CeIC), School of Nursing, Psychotherapy and Community Health, Dublin City University, Dublin 9, Ireland 2Insight Centre for Data Analytics, Dublin City University, Dublin 9, Ireland Alan F. Smeaton, Insight Centre for Data Analytics, Dublin City University, Glasnevin, Dublin 9, Ireland. [email protected] Objective The NEX project has developed an integrated Internet of Things (IoT) system coupled with data analytics to offer unobtrusive health and wellness monitoring supporting older adults living independently at home. Monitoring currently involves visualising a set of automatically detected activities of daily living (ADLs) for each participant. The detection of ADLs is achieved to allow the incorporation of additional participants whose ADLs are detected without re-training the system. Methods Following an extensive User Needs and Requirements study involving 426 participants, a pilot trial and a friendly trial of the deployment, an Action Research Cycle (ARC) trial was completed. This involved 23 participants over a 10-week period each with c.20 IoT sensors in their homes. During the ARC trial, participants each took part in two data-informed briefings which presented visualisations of their own in-home activities. The briefings also gathered training data on the accuracy of detected activities. Association rule mining was then used on the combination of data from sensors and participant feedback to improve the automatic detection of ADLs. Results Association rule mining was used to detect a range of ADLs for each participant independently of others and was then used to detect ADLs across participants using a single set of rules for each ADL. This allows additional participants to be added without the necessity of them providing training data. Conclusions Additional participants can be added to the NEX system without the necessity to re-train the system for automatic detection of the set of their activities of daily living. Automatically detecting activities of daily living from in-home sensors as indicators of routine behaviour in an older population Claire M. Timon1, Pamela Hussey1, Hyowon Lee2, Catriona Murphy1, Harsh Vardan Rai2 and Alan F. Smeaton2 August 12, 2023 ================================================================================================================================= § INTRODUCTION Using IoT technologies, the use of ambient sensors to detect activities in the homes of older or more vulnerable people has grown in recent years <cit.>. In its basic form, the use case for this has been to record and visualise the raw data from actual sensor triggers and activations and to present aggregated views of this data spanning days, weeks or even months. This allows a clinician, a caregiver or a family member to observe whether certain sensors have been triggered or not. In turn it also allows an observer to use their observation of sensor activations to deduce whether or not higher level activities to do with eating, cleaning or social interaction with others, have taken place. For example if IoT sensors on the kettle and on the doors to the cupboards where cups, tea and sugar are stored are all activated within a short time frame during the morning, then the observer could infer that a mid-morning tea or coffee was made. 
Visualising raw data from sensors can allow patterns of in-home behaviour to be observed, but this is far more challenging because typically there are a large number of sensor activations that are not connected with the higher level activities which we may wish to observe, as well as the general visual “noise” from visualising so much data. For example, just because a sensor on the entrance door to a home has been activated does not mean the occupant has left or arrived; the activation could have been caused by a caller to the home, or by a delivery. It is only by looking at combinations of sensor activations during occurrences of ADL activities that the actual behaviour can be accurately determined. So if presence sensors in more than one part of the home are simultaneously activated after the entrance door sensor has been activated, that implies there is a caller to the home. While the approaches to gathering and visualising raw sensor activations are useful, their limitation is that they place the burden on the clinician or observer to interpret raw sensor activations into higher level activities, which correspond to the things that people do in their everyday life, by grouping combinations of sensor activations. This can be seen in Figure <ref> from our deployed system showing one week of raw sensor data from a participant's home. A total of 16 sensors are deployed including motion sensors, 6-in-1 environmental sensors, smartplugs and contact sensors on doors and presses. While scanning this visualisation can reveal daily daytime and evening patterns of activities, particularly in the kitchen and other rooms, it is difficult to get an overall view and especially to extend an overall view of activities into multiple weeks. Activities of Daily Living (ADLs) are a set of known, pre-defined and agreed daily physical or movement activities which most people will carry out and which correspond to the skills required to manage our basic physical needs <cit.>. Proposals for what makes up a definitive set of ADLs have been around for many years <cit.> and some have been revised since those first proposals and specialised for areas including activities for people with dementia and activities for stroke patients <cit.>. Even with such subject specialisms, the set of ADLs commonly used today is fairly stable <cit.>. ADLs are typically used to provide a summative assessment of whether a person is able to reach a certain level of movement and to competently complete basic tasks to self-manage their lives, and typically this would be used in assessments of older citizens <cit.>. ADLs are essential and routine tasks that most healthy individuals can perform without assistance <cit.>. The inability to accomplish essential activities of daily living may lead to unsafe conditions and poor quality of life, and may be indicative of a physical or cognitive disability in older adults. Eligibility for home care is frequently associated with deficits in ADL ability <cit.>. Assessment of ADLs through self-reported, observed or objective data provides evidence to individuals and caregivers on existing baselines and potential deficits in self-care ability, and supports potential interventions which may be required for continued independence. The state of the art in the field of recognition of activities of daily living is already well developed, as shown by systematic reviews published within the last decade including <cit.>.
These works describe a field which has received much attention because it is an important topic and it has a very practical and useful nature. In this paper we present a technique to automatically detect a subset of common ADLs from raw sensor data in the homes of older citizens living alone and to “tag” their routine behaviour. The sensors used in our study of ADL generation are not wearable sensors but are in-situ sensors in the home though participants did use a smartwatch which was not used in this study. The set of ADLs are chosen as indicators of routine everyday behaviour. The ability to infer and visualise higher level activities as well as viewing the raw sensor data means that caregivers and family, as well as participants themselves, can assess behaviour and behaviour changes over time in a more natural and intuitive way. The technique we use for inferring ADLs uses association rule mining and relies on an initial set of manual annotations from participants but once this is in place we can incorporate additional participants without the necessity for further manual annotation. While our approach to ADL detection is data-driven, other approaches to ADL detection have been taken including a knowledge-driven approach in <cit.> which uses domain knowledge, structured ontologies and semantic reasoning to disambiguate potential conflicts. The focus of the work in <cit.> is on real-time detection of ADLs as they happen, in an incremental way hence the use of semantic reasoning and ontologies to disambiguate. In the work in this paper the detection of ADLs happens retrospectively, at the end of each day because our use case does not require real-time detection. The work which is possibly closest to what we report here is a series of works by researchers from INRIA in France <cit.>. Their work involved several older healthy participants, living normally in their homes and targeting a range of daily activities to detect while using sensor data to assist in the detection. That work culminated in a method to detect 6 generic activity types including meal preparation, leaving the home, and dressing/waking up which overlap with the ADLs we use in this paper, and was tested on 5 adults over a short period of 5 days <cit.>. In that work the target was activity verification where the participants' declarations of their own daily activities were refined with sensor logs and visualised for them for confirmation. The work we report in this paper targets detection of similar activities of daily living but we take a more data-driven approach, are less reliant on participants' self-verification of their activities and our experiments are larger with more participants and over a longer period of data logging. § METHODS The overall aim of the Action Research Cycle Trial (ARC) trial was to investigate the technical performance and participant evaluation of a refined version of the NEX system. Ethical approval to conduct the ARC trial was obtained from the Dublin City University Research Ethics Committee (DCUREC202221) on 25/1/2022. The NEX ARC Trial was advertised through various networks including the Age Friendly University, Dublin City University and NEX study social media platforms. 
Eligibility criteria to participate in the trial included: demonstrated capacity to provide written consent as determined by a cognitive assessment <cit.>, willingness to provide written informed consent to participate, aged 60 years or over, with or without one or more stable chronic condition/s, fully vaccinated against COVID-19 and had an active Wi-Fi connection at home. Older adult participants were enrolled to the study for a 10-week period if they met the eligibility criteria. Between January 2022 and July 2022, twenty-six healthy older adults (aged 60 years and over) who were living independently at home in the community participated in the trial. The gender profile was predominantly female (81% n=21) with a total population mean age of 73.2 years. All participants resided in Dublin, Ireland (100% n=26) and the majority lived in urban locations (96% n=25). This was a well-educated sample as 65% (n=17) received third level education. The majority of participants within this sample present as independent and high functioning as only 8% (n=2) reported difficulties in completing activities of daily living (ADLS) such as dressing etc. and only 4% (n=1) reported difficulties in completing more complex tasks defined as instrumental activities of daily living (IADLS) such as shopping for groceries etc. Three participants dropped out and one participant was no longer able to stay involved with the trial as her Wi-Fi connection was deemed too weak to support the NEX system on inspection by the technical engineer during a site visit, resulting in a final sample of n=22. The research team devised a study design which greatly minimised face-to-face contact with participants in an effort to minimise the risk of COVID-19 spread. This meant that the majority of study visits were completed over Zoom. After enrolment to the trial, participants met with a researcher on Zoom to complete a demographics questionnaire, a questionnaire about technology use, and a compilation of health and well-being assessments. Additionally during these research calls, the researcher completed a home configuration assessment in collaboration with participants. The purpose of this home configuration was to inform the research team about the participant’s home layout and their routine so that decisions about the appropriate placement of IoT sensors and smart plugs could be made. The assessment consisted of a number of questions e.g. the type of home where the participant lived, number of rooms, number of external doors, doors used most often, the layout of the participants' kitchen, which cabinets were used to store food, what appliances were used most frequently, etc. During a second visit, a researcher and technical engineer visited the participant in their home environment to facilitate the installation of the NEX system technology. The researcher, technician and participant complied with a very strict COVID-19 study protocol which was developed by the research team and consisted of antigen testing prior to, and mask wearing during, home visits. The researcher and technician used home configuration assessment with the participant in Visit 1 to determine the most appropriate placement of preconfigured technology. 
The NEX system design consisted of a range of IoT technologies, including a smartwatch (for measurement of sleep and step count), voice activated assistant (entertainment and reminder functionality), contact sensors (detecting activity around the home and opening and closing of doors and cupboards), smart plugs (measuring energy use of appliances), motion sensors (detecting movement, temperature, humidity, and light in the home), hub (a central connection point for sensor devices), tablet (display NEX system data to participants), and a cloud hosted secure device management platform. The technologies were deployed in combination to facilitate the detection of some of the key ADLs from participants’ in-home sensor and smart plug use data over the trial period. Face-to-face training on the technology was provided to participants at the time of installation, and a training manual and a series of training videos were also provided. Throughout the remainder of the ARC trial the researchers met with 19 of the 23 participants individually on two subsequent occasions over Zoom and met with the other 4 once, to present them with a snapshot of raw data that was collected via the NEX technologies in the previous 24 hours. In preparation for these, sensor data for each participant was pre-processed to generate candidate occurrences of ADLs. These were presented to participants for validation and the briefings also included gathering recollections of in-home activities in the day or days immediately preceding the briefings e.g. confirming What time did they have breakfast at? etc. These provided training data for subsequent ADL detection. At the end of the ARC trial the technical engineer visited the participant in their home and removed all of the technology. During the final research visit, the researcher interviewed participants about their experience of the trial and the NEX technology and completed an assessment of the system acceptability and usability (adapted version of the Technology Acceptance Model <cit.> and System Usability Scale <cit.>. The researcher also repeated the health and wellbeing measures administered at the start of the trial to investigate whether having NEX installed in participants’ homes for the duration of the trial affected their wellbeing and other aspects of life. While there are many individual ADLs we could focus on, we balanced the value of different ADLs given the characteristics and demographics of the ARC trial participants against the feasibility of detecting ADLs given the sensors which were deployed in their homes. After much consideration and taking the requirements of the clinical partners into consideration we focused on 4 ADLs and grouped each with a set of in-home sensors which could be used to detect them automatically. These ADLs are presented in Table <ref>. Increasing the number of ADLs would not affect the validity of our approach since each additional ADL would be grouped with a set of sensors needed to detect it and each additional ADL would have its own set of rules for detection. Table <ref> shows that the sets of in-home sensors used for each ADL in this work do not overlap but even if they did, that would not affect the performance of ADL detection. To turn the training data for ADL occurrence into automatic detection of ADLs we examined different machine learning techniques that could be used to build classifiers to recognise ADLs. 
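To make the grouping of ADLs with sensor sets concrete, a minimal sketch is shown below; the four ADL names follow the text, but the individual sensor identifiers are illustrative assumptions rather than the actual contents of Table <ref>.

```python
# Illustrative only: the four target ADLs, each grouped with a set of in-home
# sensors that could plausibly be used to detect it. Sensor names are assumed
# for this sketch and are not the actual deployment's identifiers.
ADL_SENSORS = {
    "eating_drinking": {"kettle_smartplug", "fridge_contact", "food_press_contact", "kitchen_motion"},
    "dressing":        {"wardrobe_contact", "drawer_contact", "bedroom_motion"},
    "bathing":         {"bathroom_motion", "bathroom_humidity"},
    "leaving_house":   {"front_door_contact", "hall_motion"},
}

def relevant_events(events, adl):
    """Keep only the (timestamp, sensor) events that belong to the given ADL's sensor group."""
    group = ADL_SENSORS[adl]
    return [(t, s) for t, s in events if s in group]
```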
Within the field of machine learning, deep learning approaches are regarded as the best in terms of accuracy, but their downside is that they need large amounts of training data in order to be reliable <cit.>. In addition, once the models have been created they cannot offer any explanation for the recommendations or outputs that they generate <cit.>. Our application has limited amounts of training data because there are only so many times we can ask participants to indicate when they had eating, sleeping, bathing or other ADL activities before user fatigue sets in and the quality of the annotations deteriorates. Our participants and our clinical partners are also wary of black box machine learning precisely because it has no explanation capabilities. Association rule mining (ARM) is a machine learning technique which automatically develops conditional rules based on input data such as sensor data readings and annotated training data <cit.>. It is a technique which has been around for many years and has been used successfully in a wide range of applications <cit.>. As the name implies, association rules are a series of if/then statements that aid the discovery of relationships between seemingly unrelated data collections. ARM seeks to identify recurring patterns, correlations, or relationships in datasets. A rule generated by the ARM process has two parts, an antecedent and a consequent. An item found in a data collection is called an antecedent, and an item found in combination with an antecedent is called a consequent. For instance, consider the following: “A participant is 90% more likely to watch television when he/she is having breakfast." In this case, breakfast is the antecedent and watching TV is the consequent in the association rule above. The process of developing sets of association rules involves carefully reviewing data and searching for recurring if/then patterns. The most significant associations are then determined according to the following two parameters: * Support, which describes how frequently the data collection contains instances of the if/then relationship; * Confidence, which is the proportion of times these associations have been verified to be accurate in the data collection. When processing large datasets using association rule mining, which must consider every conceivable combination of data items, the Apriori algorithm <cit.> is attractive to use as it scans the data collection only once as it derives a set of association rules. In an earlier phase of our work we validated that the Apriori algorithm can be used successfully to detect ADLs, using the data from 7 participants in a friendly trial where we detected kitchen events only <cit.>. The results from the earlier trial indicated that for a given participant we could mine rules for the occurrence of kitchen-based activities if we have training data for occurrences of those activities from data-informed briefings. In practice, the requirement to have training data for ADL occurrences is not scalable to larger sets of participants, so our aim in generating ADLs in this work is to use the annotations from briefings with ARC participants and apply them unseen to new participants. This consideration also influenced our choice of association rule mining for ADL detection in the ARC trial. Processing with the Apriori algorithm for association rule mining required setting minimum values for the support and confidence variables.
These thresholds indicate that we are only interested in discovering rules for items that co-occur with other items sufficiently often (support) and in associations that hold sufficiently reliably (confidence). In this work these values have been set as min_support=0.15 and min_confidence=0.5. Detecting relatively short-duration activities of daily living, which require a small number of activations of a dependent set of sensors but not in any particular sequence, presented challenges in the temporal domain. For example a participant may take a longer or shorter time to complete any of the ADLs and may activate sensors in a different order each time, for example putting the kettle on first and then preparing the crockery in the morning, and then doing these in the reverse order in the afternoon. To address this we used sliding windows to aggregate sensor activations over a set time period of various durations, effectively grouping the sensor data into an order-independent set and thus smoothing out variations in the ordering. It was crucial to choose a window size that was both small enough to detect individual activities and large enough to reduce the noise associated with smaller window sizes. Analysis of the data-informed briefings with participants provided insights to establish a baseline for the size of the sliding windows for the various ADLs and, combined with the experiments reported later, the window sizes chosen were as shown below: * For ‘Dressing’ and ‘Leaving House’ the window sizes were 30 minutes; * For ‘Eating/Drinking’ and ‘Bathing’, the window sizes were 60 minutes. The shift or stride for ADL detection was set to 5 minutes. That means that the association rules test for the presence of an ADL in a given time window (30 or 60 minutes) and, if it is not present, the window shifts forward by 5 minutes and the test is repeated. Our choice of 30 or 60 minutes for window sizes is in line with the work in <cit.> where those authors use window sizes varying from 30 to 120 minutes for the same ADLs as we detect here, though their windows begin and end at fixed times and do not slide and overlap as ours do. Our method accommodates ADLs taking place close to each other in time because each ADL detection runs independently of the others. Thus our approach will detect a participant dressing directly after taking a bath, for example. This can be seen later in Figure <ref> where ADL co-occurrences are shown to overlap for a participant. With the window sizes for ADLs determined and using the training data from participant briefings, association rules for ADL detection from sensor data were generated, initially for each ADL for each participant. To illustrate, the conditions for some ADL detection rules for participant 11 are shown in Figure <ref>. These show that for an Eating/Drinking event the use of any of the kitchen appliances or the opening of the doors to the food staples, combined with presence detection, is the trigger. For the Bathing ADL, detecting presence and an increase in humidity within the time window is the trigger, while for the Dressing ADL, opening the wardrobe and the underwear drawer is the trigger, for this participant. If these sensor activation conditions are satisfied within a 60-minute or a 30-minute time window, depending on the ADL, the activity will be labelled as that ADL.
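One possible implementation of the sliding-window transaction building and Apriori rule mining described above is sketched below, using the open-source mlxtend implementation of Apriori (the paper does not name a specific library). The window, stride, support and confidence values follow the text; the event data layout, sensor names and labelling format are assumptions made for the sketch.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Assumed inputs: events is a DataFrame with columns ["timestamp", "sensor"];
# labels is a list of (occurrence_timestamp, adl_label) pairs from the briefings.
def window_transactions(events, window="60min", stride="5min", adl_item=None, labels=None):
    """Aggregate sensor activations into overlapping, order-independent itemsets."""
    start, end = events["timestamp"].min(), events["timestamp"].max()
    transactions = []
    for t0 in pd.date_range(start, end, freq=stride):
        t1 = t0 + pd.Timedelta(window)
        items = set(events.loc[events["timestamp"].between(t0, t1), "sensor"])
        if not items:
            continue
        if labels is not None and any(t0 <= ts < t1 for ts, lab in labels if lab == adl_item):
            items.add(adl_item)   # an annotated ADL occurrence falls inside this window
        transactions.append(sorted(items))
    return transactions

def mine_adl_rules(transactions, adl_item, min_support=0.15, min_confidence=0.5):
    te = TransactionEncoder()
    onehot = pd.DataFrame(te.fit_transform(transactions), columns=te.columns_)
    frequent = apriori(onehot, min_support=min_support, use_colnames=True)
    rules = association_rules(frequent, metric="confidence", min_threshold=min_confidence)
    # Keep only rules whose consequent is the ADL itself: "sensors => ADL".
    return rules[rules["consequents"] == frozenset({adl_item})]
```

Keeping only the rules whose consequent is the ADL label mirrors the "sensor conditions trigger an ADL" reading of the rule conditions shown in Figure <ref>.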
For creating groundtruth training data, the clinical partners met with each of the participants on at least one occasion after the sensors had been installed in their homes, for a briefing or a data-informed recall on how their deployment was going. During these meetings, held over Zoom because of the pandemic, the clinicians gathered data on occurrences of the target ADLs that had happened in the previous days, noting the ADL and the timestamp; this recall was prompted by the clinician sharing a visualisation of the raw sensor data with the participant on the SafeTRX platform. So seeing sensors for, say, the kitchen being activated in mid-afternoon would prompt the participant to remember that s/he had made tea and had a biscuit during that afternoon, which would be recorded as an eating or drinking ADL. The timings of the clinical partner’s data-informed briefings, and their place in the overall data logging for the ARC participants, are shown in Figure <ref>, which covers sensor data logging over 159 days from 23 participants. Here we see that all participants except 5, 8, 13, and 23 had two briefings, that the briefings were rarely on consecutive days, and that most were at least one, and closer to two, weeks apart. § RESULTS We developed a number of versions of the association rule mining used to build sets of rules to detect ADLs. This was so we could (1) incrementally determine the best time window sizes to use for different ADLs and (2) include more of the participants' validations of candidate ADLs and their suggestions of additional ones from their data-informed briefings. Different versions of the rule mining generated different sets of ADLs for the same participants. It was necessary for the rule generation and the subsequent ADL detection to take place immediately prior to participants having one of their briefings so that some of the candidate ADLs could be presented to them during their interviews. We started our use of association rule mining with participants' feedback from their first briefing, with no candidate ADLs offered to them as we had no training data, and treating each participant independently of the others. For their second briefing we offered candidate ADLs generated using training data from their first briefing, and these were validated and further training data was gathered during the second briefing. As mentioned earlier, we experimented with varying the sizes of time windows for different ADLs, choosing 30 and 60 minutes depending on the ADL, and generating ADLs for each participant based on their own set of rules, independent of others. Finally we used association rule mining to generate a single set of rules for ADL detection which we applied across all participants. Note that not all participants were used for ADL detection at all stages of the investigation, depending on the timing of their briefings and the availability of their own sensor data, as shown in Figure <ref>. As mentioned above, different deployments of association rule mining generated different ADLs, raising the question of whether a new set of activities is better than a previous one. Evaluating the effectiveness of a set of rules can only be done by validating the ADLs it generates against manually annotated training data, to which we have no further access, and we cannot go back to participants to get this.
This is a consequence of our decision to collect only a small amount of annotation data from participants during their data-informed briefings, which we could use as ground truth for training ADL detection and/or for evaluating different ADL detection rule sets. Thus our evaluation is done in terms of how the distribution of ADLs generated by a version appears overall. In an early version of our rule mining, training data had been generated from 2 data-informed briefings and each participant's ADL generation was completed independently of the others, but with no adjustment of window sizes to resolve clashing ADLs for the same participant. Figure <ref> shows the raw number of ADLs of each type, where eating ADLs dominate; ADLs had not been generated for all participants at that point because not all ARC installations had been completed. The different numbers of (absolute) ADLs for different participants reflect the fact that participant data logging had been running for different durations for different participants. Figure <ref> shows the proportion of ADL types per participant for this ARM version, which is a more useful indicator. From this we can see that the leaving house and dressing ADLs were not detected for some participants and the bathing ADL was not detected for any, because the humidity sensor in the 6-in-1 bathroom sensor was sampling only once every 10 minutes, which was insufficient but was subsequently corrected. After several iterations of association rule mining development, our final implementation generates ADL rules from the training data from all participants, uses the optimal window settings for the different ADLs and resolves clashes and overlaps between ADLs. Figures <ref> and <ref> show ADLs generated for all ARC participants. In Figure <ref> the numbers of ADLs per participant are normalised by the total number of logging days of ARC01 (171 days), taken as the longest duration of all participants for the data capture in this study. The normalised view helps us draw comparisons across participants of their relative amounts of ADLs, as if their numbers of ADLs were for the same logging period. In Figure <ref> we show the relative proportions of ADLs per participant. The figures show some outliers and errors, such as no “eating or drinking" ADL and a disproportionately high number of “leaving house" ADLs for ARC18, and some participants with no dressing ADL. When results from all participants were complete we analysed each participant's ADLs individually with the clinician who carried out their data-informed briefing. For each of the outliers and errors in ADL detection we were able to determine an explanation, such as having no training data for a given ADL from the online participant briefings (for example, a participant who had not left their home recently) or having no appropriate sensors deployed for an ADL in a given home. The numbers of detected ADLs across participants in Figure <ref> do show a lot of variety. ARC24 shows the largest number because of its large number of eating events, similar to ARC26, and this is explained as follows. Figure <ref> shows the ADLs generated for participant ARC24 over the same time period as the raw sensor data shown earlier in Figure <ref>. This shows a regular bathing and dressing activity and a leaving of the house on 5 of the 7 days.
March 20 shows the participant not leaving the house though the front door was opened and March 15, 16, 17 and 18 show a lot of front door activity not identified as leaving the house so the participant must have had callers or deliveries. The eating activity is well represented throughout each day because as shown in Figure <ref> this participant does seem to spend large parts of the day in and out of the kitchen, opening and closing the fridge, food presses and drawers. Some of these recognised as the eating/drinking ADL may actually be food preparation or returning from grocery shopping rather than food or drink consumption. Other observations from Figures <ref> and <ref> show a high number of leaving the house ADLs for some participants, especially ARC18. This can be traced back to the fact that ARC18 had more callers to the front door than others. As part of the analysis of each participant's ADLs with the clinician who carried out their data-informed briefing as mentioned above, we analysed which of the in-home sensors appeared most often in the rules and which were used most in the triggering of those rules. From this analysis we identified 11 core in-home sensors which should be included for any new participants for whom automatic detection of these ADLs is desired. This set of 11 is driven by their common use across all our ARC participants, and their use in the rule mining for ADL recognition and assumes that the same ADLs are the target for detection. The 11 core sensors are listed in Table <ref>. § CONCLUSIONS This paper describes the data processing carried out on in-home sensor data gathered from 23 participants over periods varying from 6 weeks to 6 months. The sensor data was processed into a set of activities of daily living (ADLs) which were chosen as typical indicators of regular, routine behaviour by the participants. A characteristic of our use case of turning sensor data into ADLs is that there is a limited amount of training data available. Our training data was gathered directly from participants during two online data-informed briefings and corresponds to participants indicating, or validating, an instance of an ADL occurrence as being true and valid. We then used this as input to association rule mining to determine a set of rules for ADL detection. Our initial sets of ADLs were based on a different set of association rules for each participant and then we fused the training data to generate a set of rules for detecting ADLs across all participants. This means that we can now add additional participants without requiring additional training data by re-using the training data from the pool of 23 ARC trial participants. In this way our ADL detection is scalable and can be made available to others. One of the unresolved questions about the work in this paper is the end-goal and what to do with detected ADLs. In a clinical setting even the visualisation of ADLs over time has limited capacity to support observations of subtle behaviour changes and degradations. In our future work we will apply the automatic detection of periodicity intensity, namely how strongly or weakly the activities of a participant fits into the regular 24-hour circadian rhythm or the weekly cycle of behaviours, to detected ADLs. It is known that strong rhythmicity in our lives is an indicator of wellness and that degradations in our regular behaviour can be detected automatically as weakening of the strengths of our circadian and other regular rhythms. 
We have already done this work using raw sensor data as input to periodicity detection <cit.> but believe that using higher level ADLs will give even better detection of behaviour changes. There are also limitations including the limited number of ADLs (n=4) especially since there is work elsewhere reporting detection and use of larger numbers of ADLs. However our aim was to demonstrate that our technique for ADL detection with limited training data and a limited number of sensors per participant works and can be applied to new participants without the need for additional training data and with acceptable accuracy. This has been demonstrated for 4 ADLs whose detections work independently and in future work we will examine the accuracy of ADL detection when using only the 11 core sensors we identified for future deployments. We also acknowledge that the approach could be improved with further inputs from the caregivers or directly from the participants in their homes as a form of human-in-the-loop active (machine) learning <cit.> where the rules would evolve and improve as more annotations were provided. In summary, the work reported here has been successful in applying analytics techniques to raw sensor data from participant homes to inform clinical partners about the long-term behaviour and behaviour changes in the routine daily in-home lifestyle and activities of participants. Insights gained from visualising activities at an ADL level rather than at the level of raw sensor data, is more insightful and ultimately beneficial for the participant and the clinician. The Authors declare that there is no conflict of interest. This work was supported by the Disruptive Technologies Innovation Fund administered by Enterprise Ireland, project grant number DT-2018-0258 and by Science Foundation Ireland under grant number SFI/12/RC/2289_P2, co-funded by the European Regional Development Fund. Ethical approval: The research ethics committee of Dublin City University approved this study (REC number: DCUREC202221) on 25/1/2022. Guarantor: AFS Author Contributions: AFS and CMT researched the literature and AFS, CMT, PH, HL and CM conceived the study. CMT, PH and CM obtained ethical approval and performed subject recruitment and HL and HRV managed the data capture and analysis including visualisations. HRV implemented the ARM coding. AFS and CMT wrote the first draft of the manuscript. All authors reviewed and edited the manuscript and approved the final version of the manuscript. Acknowledgements Not applicable. The raw sensor data from 23 homes used in this work is publicly available on the Figshare repository at <cit.>. 10 urlstyle 10.1007/s10916-019-1365-7 Baig MM, Afifi S, Gholamhosseini H et al. A systematic review of wearable sensors and IoT-based monitoring applications for older adults — a focus on ageing population and independent living. J Med Syst 2019; 43(8): 1–11. 10.1007/s10916-019-1365-7. ADLs Edemekong P, Bomgaars D, Sukumaran S et al. Activities of daily living, 2022. [Updated 2022 Jul 3]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing. lawton1969assessment Lawton MP and Brody EM. Assessment of older people: self-maintaining and instrumental activities of daily living. The Gerontologist 1969; 9(3_Part_1): 179–186. nouri1987extended Nouri F and Lincoln N. An extended activities of daily living scale for stroke patients. Clinical Rehabilitation 1987; 1(4): 301–305. galasko1997inventory Galasko D, Bennett D, Sano M et al. 
An inventory to assess activities of daily living for clinical trials in Alzheimer's disease. Alzheimer Disease and Associated Disorders 1997; 11, Suppl 2: S33–9. hindmarch1998bayer Hindmarch I, Lehfeld H, de Jongh P et al. The Bayer activities of daily living scale (B-ADL). Dementia and Geriatric Cognitive Disorders 1998; 9 Suppl 2: 20–26. doi:10.1177/20552076221084468 Bosco A, McGarrigle L, Skelton DA et al. Make Movement Your Mission: Evaluation of an online digital health initiative to increase physical activity in older people during the COVID-19 pandemic. Digital Health 2022; 8. 10.1177/20552076221084468. PMID: 35295764. kemper2008meeting Kemper P, Weaver F, Short PF et al. Meeting the need for personal care among the elderly: does medicaid home care spending matter? Health Services Research 2008; 43(1p2): 344–362. rothgang2003dependency Rothgang H and Comas-Herrera A. Dependency rates and health expectancy. In European study of long-term care expenditure, Report to the European Commission, Employment and Social Affairs DG, chapter 14. London: London School of Economics, 2003. rashidi2012survey Rashidi P and Mihailidis A. A survey on ambient-assisted living tools for older adults. IEEE journal of biomedical and health informatics 2012; 17(3): 579–590. reeder2013framing Reeder B, Meyer E, Lazar A et al. Framing the evidence for health smart homes and home-based consumer health technologies as a public health intervention for independent aging: A systematic review. International journal of medical informatics 2013; 82(7): 565–579. queiros2015usability Queirós A, Silva A, Alvarelhão J et al. Usability, accessibility and ambient-assisted living: a systematic literature review. Universal Access in the Information Society 2015; 14: 57–66. blackman2016ambient Blackman S, Matlo C, Bobrovitskiy C et al. Ambient assisted living technologies for aging well: a scoping review. Journal of Intelligent Systems 2016; 25(1): 55–69. cicirelli2021ambient Cicirelli G, Marani R, Petitti A et al. Ambient assisted living: a review of technologies, methodologies and future perspectives for healthy aging of population. Sensors 2021; 21(10): 3549. chen2011knowledge Chen L, Nugent CD and Wang H. A knowledge-driven approach to activity recognition in smart homes. IEEE Transactions on Knowledge and Data Engineering 2011; 24(6): 961–974. caroux2014verification Caroux L, Consel C, Dupuy L et al. Verification of daily activities of older adults: a simple, non-intrusive, low-cost approach. In Proceedings of the 16th international ACM SIGACCESS conference on Computers & accessibility. pp. 43–50. caroux2018towards Caroux L, Consel C, Dupuy L et al. Towards context-aware assistive applications for aging in place via real-life-proof activity detection. Journal of ambient intelligence and smart environments 2018; 10(6): 445–459. belloum2021tooled Belloum R, Riche A, Volanschi N et al. A tooled method for developing knowledge-based activity recognizers. In 2021 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/IOP/SCI). IEEE, pp. 17–24. borson2000mini Borson S, Scanlan J, Brush M et al. The mini-cog: a cognitive ‘vital signs’ measure for dementia screening in multi-lingual elderly. International Journal of Geriatric Psychiatry 2000; 15(11): 1021–1027. davis1989user Davis FD, Bagozzi RP and Warshaw PR. User acceptance of computer technology: A comparison of two theoretical models. 
Management Science 1989; 35(8): 982–1003. brooke1996sus Brooke J et al. SUS-a quick and dirty usability scale. Usability evaluation in industry 1996; 189(194): 4–7. zhang2021understanding Zhang C, Bengio S, Hardt M et al. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM 2021; 64(3): 107–115. gilpin2018explaining Gilpin LH, Bau D, Yuan BZ et al. Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA). IEEE, pp. 80–89. solanki2015survey Solanki SK and Patel JT. A survey on association rule mining. In 2015 Fifth International Conference on Advanced Computing & Communication Technologies. IEEE, pp. 212–216. 10.5555/3000292.3000305 Liu B, Hsu W and Ma Y. Integrating classification and association rule mining. In Proceedings of the Fourth International Conference on Knowledge Discovery and Data Mining. KDD'98, AAAI Press, p. 80–86. al2014improved Al-Maolegi M and Arkok B. An improved apriori algorithm for association rules. arXiv preprint arXiv:14033948 2014; . orla Keogh O, Lee H, Timon CM et al. Detecting activities of daily living using ambient sensors and association rule mining. PLos Digital Health 2023; Under Review. smeaton2023 Smeaton AF and Hu F. Periodicity Intensity Reveals Insights into Time Series Data: Three Use Cases. Algorithms 2023; 16(2). 10.3390/a16020119. <https://www.mdpi.com/1999-4893/16/2/119>. monarch2021human Monarch RM. Human-in-the-Loop Machine Learning: Active learning and annotation for human-centered AI. "New York": Manning Publications, 2021. Timon2022 Timon CM, Hussey P, Lee H et al. Raw sensor data from in-home sensors in 23 homes of older citizens, 2022. 10.6084/m9.figshare.21415836.v1.
http://arxiv.org/abs/2307.06060v2
20230712102228
Interpreting deep embeddings for disease progression clustering
[ "Anna Munoz-Farre", "Antonios Poulakakis-Daktylidis", "Dilini Mahesha Kothalawala", "Andrea Rodriguez-Martinez" ]
stat.ML
[ "stat.ML", "cs.CL", "cs.LG", "q-bio.QM" ]
Interpreting deep embeddings for disease progression clustering. Anna Munoz-Farre*, Antonios Poulakakis-Daktylidis*, Dilini Mahesha Kothalawala, Andrea Rodriguez-Martinez (BenevolentAI, London, UK; *equal contribution). Correspondence: Anna Munoz-Farre ([email protected]), Antonios Poulakakis-Daktylidis ([email protected]). Keywords: machine learning, ICML, patient clustering, healthcare, interpreting, embeddings, disease progression. We propose a novel approach for interpreting deep embeddings in the context of patient clustering. We evaluate our approach on a dataset of participants with type 2 diabetes from the UK Biobank, and demonstrate clinically meaningful insights into disease progression patterns. § INTRODUCTION The advent of transformer-based models has revolutionised the field of natural language processing <cit.>. These models have shown significant potential in healthcare applications, where large volumes of structured data (disease diagnoses, medication prescriptions, surgical procedures, laboratory results, etc.) are collected and stored in the form of electronic health records (EHR) <cit.>. This has enabled researchers to extract insights into the underlying mechanisms that drive disease progression, as well as to cluster patients based on their particular disease profile and comorbidities <cit.>. In recent years, there has been increasing emphasis on establishing better benchmarks and developing more reliable and trustworthy models <cit.>. The interpretability of such models is crucial to identify potential biases and ensure fairness when applying them in the healthcare context, especially as they will also have to go through regulatory approval before being used on real patients <cit.>. In this paper, we propose a new method for disease progression clustering, using transformer-based embeddings derived from large-scale structured EHR data. We define a framework to clinically interpret the learnt embeddings, which enables us to identify disease progression stages. Finally, we apply time-series clustering to stratify patients into clinically-relevant subgroups with different aetiological and prognostic profiles. We validate our approach by showing that the embedding space is associated with disease-specific clinical themes, with patients progressing across them. The contributions of our paper are: * (i) a method for interpreting the embedding space in the clinical setting (Section <ref>) * (ii) the presentation of a patient clustering method based on disease trajectories learned from embeddings (Section <ref>) * (iii) an in-depth clinical evaluation for each disease stage and cluster (Sections <ref> and <ref>). § RELATED WORK §.§ Interpreting deep embeddings Interpreting deep embeddings in language models has been a subject of extensive research. Visualization techniques, such as t-SNE <cit.> or UMAP <cit.>, have been used to reveal semantic relationships and analogies between words <cit.>. Most popular methods focus on learning about feature importance and feature interaction for each prediction <cit.>. In the clinical setting, <cit.> have proposed an explanation space constructed from feature contributions for inferring disease subtypes. <cit.> were among the first to account for the different temporal progression of medical conditions and to add an interpretability aspect on top of RNNs. Med-BERT touched on the interpretability aspect as well by visualising the attention patterns of the model <cit.>.
<cit.> used a transformer-based architecture coupled with perturbation techniques to identify clinically explainable risk factors for heart failure. §.§ Disease progression clustering <cit.> applied dynamic time warping directly on time sequences of ordered disease codes. <cit.> used the hidden state of LSTM layers as time-series for subtype identification, and dynamic time warping for computing similarities, focusing on Parkinson's disease. More recently, <cit.> applied temporal clustering based on future outcomes using an actor-critic architecture with an RNN as an encoder, and multiple loss functions to induce embedding separation and cluster purity. § METHODS §.§ Defining clinical histories through EHR Medical ontologies are the basic building block of how structured EHR data are recorded. They are hierarchical data structures which contain healthcare concepts that enable healthcare professionals to consistently record information. Ontology concepts are composed of a unique identifier and a corresponding human-friendly description (for example, J45-Asthma is a code-description pair in the ICD10 ontology used in hospitalization EHR). However, each healthcare setting (e.g. primary care, secondary care) uses a different ontology <cit.>, which means a single patient might have their records in multiple ontologies. For each patient, we defined their entire clinical history as the concatenation of sequences of ontology text descriptions (ξ_θ_1, … ,ξ_θ_t), ξ_θ_i∈Ξ_Θ, i=1, …, t, ordered over time <cit.> across all EHR sources, with Ξ_Θ being the set of descriptions for each ontology θ. To capture temporal patterns and changes in disease progression, we sliced each patient's history into "snapshots" around the date of diagnosis (Figure <ref>). Snapshot length was chosen based on the available dataset and the disease use-case. For each snapshot, we processed the raw sequence of textual descriptions into tokens (word and sub-word pieces), using a tokenizer W as X=W(ξ_θ_1,…,ξ_θ_t) = (x_1,…,x_n), with n as the tokenized sequence length. §.§ Model design We trained a model that classifies disease status based on EHR sequences. Let X^(p,s) = (x^(p,s)_1, … ,x^(p,s)_n) denote the tokenized input sequence of an individual p and a snapshot s. It forms the input to an encoding function 𝐞^(p,s)_1, …, 𝐞^(p,s)_n = Encoder(X^(p,s)), where each 𝐞_i is a fixed-length vector representation of each input token x_i. Let 𝐲^(p)∈{0,1} be the disease label. To calculate disease probability 𝐏(𝐲^(p,s)|X^(p,s)), the embeddings of the CLS token are fed into a decoder z^(p,s)_1, …, z^(p,s)_D = Decoder(𝐞^(p,s)_1,…, 𝐞^(p,s)_n), and the resulting logits are fed into a softmax function σ 𝐏(y^(p,s)|𝐞^(p,s)_1,…, 𝐞^(p,s)_n) = σ(z^(p,s)) (Figure <ref>). §.§ Embedding space interpretation framework Patient snapshots fed into the model represent different disease stages, so we expected the resulting embedding space to reflect them. To demonstrate this, we reduced the normalized embeddings generated by the transformer-based encoder for each sequence to two-dimensional vectors U^(p,s) = (u^(p,s)_1,u^(p,s)_2), using the Uniform Manifold Approximation and Projection (UMAP) algorithm <cit.> (Figure <ref>). To evaluate separation of disease stages in the embedding space, we examined the correlation between the reduced embeddings U and other available clinical markers F =(f_1, … ,f_k). 
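A minimal sketch of the model design described above (our own reconstruction, not the authors' code): a transformer encoder over tokenized snapshot sequences whose CLS-position embedding is passed through a linear decoder and a softmax over disease status. The vocabulary size follows the number reported in the experiments; the depth and head count are assumptions.

```python
import torch
import torch.nn as nn

class SnapshotClassifier(nn.Module):
    """Encoder + CLS embedding + linear decoder + softmax, as a toy sketch."""
    def __init__(self, vocab_size=2025, hidden=200, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.decoder = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):                   # token_ids: (batch, seq_len)
        e = self.encoder(self.embed(token_ids))     # (batch, seq_len, hidden)
        cls_embedding = e[:, 0, :]                  # position 0 assumed to be [CLS]
        logits = self.decoder(cls_embedding)
        return torch.softmax(logits, dim=-1), cls_embedding

model = SnapshotClassifier()
probs, cls_emb = model(torch.randint(0, 2025, (8, 64)))   # 8 snapshots of length 64
# The collected CLS embeddings can then be reduced to 2-D, e.g. with
# umap.UMAP(n_components=2).fit_transform(normalised_embeddings).
```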
We included clinically-relevant markers extracted from snapshots of EHR data, such as laboratory tests, medication prescription, other co-occurring conditions (comorbidities), etc. Specifically, we computed the point-biserial correlation coefficient <cit.> between each patient's reduced embeddings U^(p,s) and their comorbidities, and medication prescription. We calculated the L2 norm (Euclidean distance to the origin) for each clinical marker f_k as d_f_k=√(r_f_k,u_2^2+r_f_k,u_1^2), 0 being no correlation between f_k and (u_1, u_2). We then evaluated whether the most highly-correlated conditions and medications were specific to the disease in question, and whether we could identify different clinical themes (Figure <ref>). §.§ Patient clustering For each patient, we have multiple snapshots in the form of reduced two-dimensional embeddings (u_1, u_2), which can be used as time series data to study patient trajectories in the embedding space. We aligned patients using linear interpolation, excluding those with less than three snapshots. We performed temporal clustering of patients using the k-means algorithm with multivariate dynamic time warping (DTW) <cit.> (Figure <ref>). Finally we used the embedding interpretation framework proposed in the previous section to clinically characterize each patient cluster. § EXPERIMENTS AND RESULTS §.§ Defining study population: Type 2 Diabetes cohort This research has been conducted using the UK Biobank (UKBB) Resource under Application Number 43138, a large-scale research study of around 500k individuals <cit.>. It includes rich genotyping and phenotyping data, both taken at recruitment and during primary care (general practice, GP) and secondary care (admitted hospitalizations) visits. To avoid bias or stratification based on data source, we restricted the dataset to individuals that have both primary and secondary care data linked, which are coded using the read and ICD ontologies, respectively <cit.>. The final cohort includes 154,668 individuals. Type 2 diabetes mellitus (T2D) is one of the most prevalent chronic diseases worldwide, and patients are primarily diagnosed and managed in primary care. It presented an excellent use-case for our framework, because we have orthogonal data available to evaluate the embedding space (such as medication prescription and other co-occurring conditions). Our attempt was to identify known T2D epidemiological associations and clinical markers in the patients with T2D <cit.>. We selected a cohort of 20.5k patients with T2D (cases) and a corresponding cohort of 20.5k control patients (matched on biological sex and age). We cleaned inconsistent diabetes mellitus codes from cases, and removed type 1 diabetes patients from controls <ref>. Both ICD and Read ontologies are structured in a hierarchy, so we took the parent T2D code-descriptions for hospital and GP, and all of their children. We removed them from all input sequences, to force the model to learn disease relevant history representations without seeing the actual diagnosis. We spliced each patient's history into three time snapshots of 10 years around diagnosis: [-10,0,10,20], where 0 is date of diagnosis (more details in Appendix <ref>, and Figure <ref>). §.§ Model training Using the full UKBB dataset, we first trained a BertWordPieceTokenizer, resulting in a vocabulary size of 2025 tokens. We then trained a transformer-based encoder with a hidden dimension of 200 on the Masked Language Modeling (MLM) task <cit.>, to learn the semantics of diagnoses. 
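A sketch of how the tokenizer and MLM pre-training setup just described could be reproduced with the Hugging Face tokenizers and transformers libraries; the paper specifies WordPiece tokenization and a hidden dimension of 200, while the file path, vocabulary target, layer count, head count and other hyperparameters below are assumptions.

```python
from tokenizers import BertWordPieceTokenizer
from transformers import BertConfig, BertForMaskedLM

# Train a WordPiece tokenizer on the concatenated ontology descriptions.
tokenizer = BertWordPieceTokenizer(lowercase=True)
tokenizer.train(files=["ehr_snapshot_sequences.txt"],   # assumed file of one sequence per line
                vocab_size=3000, min_frequency=2)

config = BertConfig(
    vocab_size=tokenizer.get_vocab_size(),   # ~2025 tokens reported in the paper
    hidden_size=200,                         # as specified in the text
    num_hidden_layers=4,                     # assumption
    num_attention_heads=4,                   # assumption; must divide hidden_size
    intermediate_size=800,                   # assumption
    max_position_embeddings=128,             # comfortably above the snapshot length of 64
)
mlm_model = BertForMaskedLM(config)
# Pre-training then minimises the usual masked-token cross-entropy, e.g. with
# transformers' DataCollatorForLanguageModeling and Trainer.
```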
The proposed classifier uses the trained encoder and a fully connected linear layer as the decoder. To be able to use the embeddings of all T2D patients, we trained a total of five models in a cross-validation fashion (more details in Appendix <ref>). All results presented are predictions and embeddings of each model on its respective independent test set. We evaluated model performance on the test set of each fold using standard metrics for binary classification, with an average recall of 0.92 and precision of 0.82 across sequences. §.§ Embedding space interpretation We used the default UMAP hyperparameters to reduce the embeddings to two-dimensional vectors, after experimenting with different combinations (Appendix <ref>). We then examined the most strongly-correlated clinical markers by extracting the highest-ranked comorbid-diseases (Table <ref>, Figure <ref>) and medications (Table <ref>, Figure <ref>). To show unique clinical markers, we mapped all conditions (from GP or hospital) to the ICD10 ontology and medications to the Anatomical Therapeutic Chemical (ATC) Classification System <cit.>. We evaluated clinical themes, in terms of T2D management, comorbidities and complications: * T2D management: Metformin is associated with u_1, which is the preferred initial glucose-lowering medication for most people with T2D. We also find gliclazide, which can be used instead of or in combination with metformin, and diabetes lancets and glucose testing strips, which are used to test blood glucose levels. Interestingly, insulin is strongly associated with u_2, which is given to severe T2D patients <cit.>. * Comorbidities. We find two main clinical themes. Cardiovascular disease (CVD) is associated with u_2. T2D patients have a considerably higher risk of cardiovascular morbidity and mortality, due to high blood sugar levels causing blood vessel damage and increasing the risk of atherosclerosis <cit.>. Moreover, hypercholesterolemia and high LDL cholesterol, which are strongly associated with T2D, are risk factors for CVD. When looking at medication, we find furosemide and bisoprolol, which are used to manage heart failure (HF) <cit.>, and antiplatelet agents, such as clopidogrel or aspirin, given to patients with coronary heart disease (CHD) <cit.>. Erectile dysfunction (ED) is a prevalent comorbidity in male T2D patients <cit.>, and is managed with drugs such as tadalafil (Cialis) and sildenafil (Viagra) <cit.>, which are all associated with u_1. * T2D complications: Even though all T2D related ontology terms were excluded from the input data, the model learned to separate T2D patients without complications to those with complications, which are associated with both u_1 and u_2, such as diabetic retinopathy, nephropathy, or polyneuropathy <cit.>. Moreover, T2D is a leading risk factor for chronic kidney disease and renal failure, which is found in the same area <cit.>. §.§ Patient clusters evaluation We used linear interpolation with a five year step to align patients' snapshots, resulting in the following time points relative to the date of diagnosis: [-5,0,5,10,15]. We found four patient clusters (experiments in Appendix <ref>, with demographics and age of diagnosis in Table <ref>). When examining patient progression across the embedding space (Figure <ref>), we observed that patients start in the same space (with no diagnosis of T2D), and move towards clinical themes, corresponding to what we saw in Figure <ref>. 
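The marker-ranking step of the interpretation framework (point-biserial correlation of each binary clinical marker with the two reduced embedding dimensions, followed by ranking on the L2 norm) can be sketched as follows; the input layout and column names are assumptions.

```python
import numpy as np
import pandas as pd
from scipy.stats import pointbiserialr

def rank_markers(u, markers):
    """u: array (n_snapshots, 2) of reduced embeddings; markers: DataFrame of 0/1 flags
    (one column per comorbidity or prescription). Returns markers ranked by L2 norm."""
    rows = []
    for name in markers.columns:
        r_u1, _ = pointbiserialr(markers[name], u[:, 0])
        r_u2, _ = pointbiserialr(markers[name], u[:, 1])
        rows.append((name, r_u1, r_u2, np.sqrt(r_u1 ** 2 + r_u2 ** 2)))
    out = pd.DataFrame(rows, columns=["marker", "r_u1", "r_u2", "l2_norm"])
    return out.sort_values("l2_norm", ascending=False)
```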
To look at comorbidity progression, we calculated prevalence of the most strongly correlated themes, looking at how many patients had at least one diagnosis of the theme for each group and time point (Figure <ref>). Starting from the lowest u1,u2, we see that patients in cluster 3 remain in the initial area, indicating they might be in a controlled disease state. Cluster 2 is a slightly older population, that moves towards the cardiovascular and T2D without complications area. Following closely, cluster 0 represents a more severe group, with a combination of high prevalence of cardiovascular disease, renal failure and T2D complications. Finally, cluster 1 represents mostly male patients with T2D complications and erectile dysfunction. § CONCLUSIONS Here, we proposed a framework for interpreting the embedding space of transformer-based models in a clinically-meaningful way. We showed that the model learnt to distinguish disease-specific clinical themes and we validated that by replicating associations with known T2D comorbidities, complications, and medications. We performed temporal clustering of patients and identified distinct and clinically interpretable disease progression patterns. Our framework can be adapted to any disease use-case, and any available clinical dataset. It can be used to identify disease-specific, clinically and biologically relevant groups to personalize treatment and interventions for patients. § ACKNOWLEDGEMENTS This research has been conducted using the UK Biobank Resource under Application Number 43138. This work used data provided by patients and collected by the NHS as part of their care and support (Copyright © 2023, NHS England. Re-used with the permission of the NHS England and UK Biobank. All rights reserved). This research used data assets made available by National Safe Haven as part of the Data and Connectivity National Core Study, led by Health Data Research UK in partnership with the Office for National Statistics and funded by UK Research and Innovation. Using real patient data is crucial for clinical research and to find the right treatment for the right patient. We would like to thank all participants who are part of the UK Biobank, who volunteered to give their primary and secondary care and genotyping data for the purpose of research. UK Biobank is generously supported by its founding funders the Wellcome Trust and UK Medical Research Council, as well as the British Heart Foundation, Cancer Research UK, Department of Health, Northwest Regional Development Agency and Scottish Government. We are particularly grateful to Aylin Cakiroglu, Prof. Spiros Denaxas, Rogier Hintzen, Nicola Richmond, Ana Solaguren Beascoa-Negre, Benjamin Tenmann, Andre Vauvelle, and Millie Zhao for their feedback, insightful comments and the many inspiring conversations. icml2023 § APPENDIX. §.§ Data processing §.§.§ UKBB The UK Biobank (UKBB) <cit.> is a large-scale research study of around 500k individuals between the ages of 40 and 54 at the time of recruitment. It includes rich genotyping and phenotyping data, both taken at recruitment and during primary and secondary care visits (GP and hospital). We used patient records visits in the form of code ontologies Read version2/ Clinical Terms Version 3 (GP), and ICD-9/10 (hospital) together with their textual descriptions. We restricted the data set to individuals that had both hospital and GP records, reducing the cohort to 154,668 individuals. 
Requiring individuals to have entries in their GP records reduces bias towards acute events that usually present in hospitals, but we note that removing individuals without any hospital records may still bias the data towards more severe cases. A patient can be admitted to the hospital for multiple days. We treated an entire hospital admission as one point in time using the admission date, and only kept unique ICD-10/ICD-9 codes for each visit. We aggregated visits that were less than a week apart into one visit, keeping only unique codes. This approach removed repeated codes, thus avoiding redundancy and reducing sequence length. §.§.§ Type 2 diabetes cohort extraction When extracting the T2D cohort, we noticed that some patients had diagnoses for both type 1 and type 2 diabetes, or had an undefined diabetes mellitus diagnosis. Such inconsistencies can arise, for example, when patients are admitted to hospital without their entire clinical history being reviewed. To properly label those patients, we looked at the medications they were taking, identifying the ones that are given to type 2 diabetes patients <cit.>. In the cases where this was still unclear, we dropped the patients from the cohort. Finally, we did not include type 1 diabetes patients in the control cohort. Our final cohort included 20.5k patients with T2D and 20.5k control patients (matched on biological sex and age). T2D is a chronic progressive condition, so taking 10 years for each snapshot was enough to represent disease stage without losing important information. Some patients had snapshots longer than the maximum sequence length (64 tokens) after tokenization, so we split those sequences. As a result, a patient could have multiple snapshots within a given 10-year window (Figure <ref>). For each input sequence, we calculated a mean time to T2D diagnosis by examining the first and last date present in that sequence. This way, each snapshot was associated with a given time point (relative to the date of diagnosis), which could then be used for interpolation and patient clustering, as explained in Section <ref>. §.§ Model training To train on the classification task, we split our data set into five equally sampled folds f_0, …, f_4, containing unique patients. To be able to use the embeddings of all T2D patients, we trained a total of five classification models, each on three folds, holding back fold f_i for validation and fold f_{(i+1) mod 5} for testing for model i, i=0,…,4. This maintained a 60/20/20 training, validation and testing split. Each model was trained for 30 epochs with a batch size of 64, a learning rate of 10^-5, and a warm-up proportion of 0.25, using the AdamW optimizer with a weight decay of 0.01 and early stopping. Performance was monitored every 0.25 epochs on the validation fold for both recall and precision. §.§ Dimensionality reduction and patient clustering §.§.§ UMAP hyperparameters We used UMAP to reduce our embeddings to two-dimensional vectors. Several hyperparameters can be tuned that might affect the results and the final patient clusters. We experimented with the number of nearest neighbours (n_neighbors =[15, 30, 50, 100]) and the minimum distance (min_dist=[0.01, 0.1, 0.5, 1]) <cit.>. For the same number of clusters k, we wanted to verify that the resulting patient clusters were maintained (robust to different hyperparameters). As a metric, we used the Euclidean distance on the normalized embeddings.
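A schematic version of this robustness check is sketched below, assuming an array embeddings of shape (N, 200) holding the sequence embeddings; the min-max normalization and the k-means settings are illustrative choices, since the text does not specify them. The cluster memberships stored for each combination can then be compared across settings.

```python
# Sketch of the hyperparameter robustness check: reduce the embeddings with every
# (n_neighbors, min_dist) combination, normalize, and cluster with k-means under the
# Euclidean metric, keeping the cluster memberships for later comparison.
# `embeddings` is assumed to be an (N, 200) array of sequence embeddings; min-max
# normalization is an illustrative choice.
import itertools
import numpy as np
import umap
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler

n_neighbors_grid = [15, 30, 50, 100]
min_dist_grid = [0.01, 0.1, 0.5, 1.0]

memberships = {}
for n_nb, m_d in itertools.product(n_neighbors_grid, min_dist_grid):
    reduced = umap.UMAP(n_neighbors=n_nb, min_dist=m_d, n_components=2,
                        random_state=0).fit_transform(embeddings)
    reduced = MinMaxScaler().fit_transform(reduced)      # normalized (u1, u2)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(reduced)
    memberships[(n_nb, m_d)] = [set(np.where(labels == c)[0]) for c in range(4)]

def jaccard(group_a, group_b):
    """Jaccard similarity of two patient groups given as sets of patient indices."""
    return len(group_a & group_b) / len(group_a | group_b)
```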
Our hypothesis was that the clusters are maintained, so we looked at the overlap of patient groups across combinations and calculated the Jaccard similarity score. We saw that three very clean clusters were always found, along with two others that overlapped more with each other but were clearly separated from the first three (Figures <ref> and <ref>). Having confirmed that the clusters were robust across combinations, we decided to use the UMAP hyperparameters recommended in the original implementation (n_neighbors = 15 and min_dist = 0.1). §.§.§ Embedding space interpretation For each patient snapshot, we looked at all diagnoses in primary and secondary care, and at the medication prescriptions given in primary care, in their clinical history. Note that type 2 diabetes-associated diagnoses (including complications) were excluded from the input data, but we included them in these analyses to see whether the model had learnt these characteristics from the rest of the sequence. We identified the most strongly correlated clinical markers by calculating the L2 norm (Euclidean distance to the origin) and took the top 15. We dropped duplicated diagnoses that were essentially the same condition, keeping the most correlated one (for example, Atrial fibrillation and flutter was dropped and Atrial fibrillation was kept) (Table <ref>). We mapped diseases to clinical themes, finding that the most strongly associated ones were either T2D complications or known comorbidities <cit.>. We also examined the most strongly correlated medications and found that they have indications for severe T2D patients <cit.>, heart failure <cit.>, erectile dysfunction <cit.> or coronary heart disease <cit.>, which are part of the clinical themes (Table <ref>). The clinical interpretation of these findings can be found in Section <ref>. §.§.§ Number of patient clusters We iterated across different numbers of clusters k using (u_1, u_2) and 20 different random seeds, and calculated the within-cluster sum of squares (WCSS), which measures the total variation within each cluster. Both k=3 and k=4 were the most stable and robust across seeds, so we used the elbow method <cit.> to choose k=4 (Figure <ref>). In Table <ref> we present the demographics for the final patient groups.
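A minimal sketch of this selection procedure is given below, assuming an array reduced of shape (N, 2) with the (u_1, u_2) coordinates and using scikit-learn's KMeans, whose inertia_ attribute is the WCSS; whether the WCSS was computed on these static coordinates or on the aligned trajectories is not fully specified in the text, so this is an illustrative reading.

```python
# Sketch of the cluster-number selection: for each candidate k, run k-means over 20
# random seeds on the reduced coordinates and record the within-cluster sum of
# squares (WCSS, the `inertia_` attribute in scikit-learn); the elbow of the WCSS
# curve is then used to choose k. `reduced` is an (N, 2) array of (u1, u2) values.
import numpy as np
from sklearn.cluster import KMeans

wcss = {}
for k in range(2, 9):
    inertias = [KMeans(n_clusters=k, n_init=10, random_state=seed).fit(reduced).inertia_
                for seed in range(20)]
    wcss[k] = (np.mean(inertias), np.std(inertias))

for k, (mean_w, std_w) in sorted(wcss.items()):
    print(f"k={k}: WCSS = {mean_w:.2f} +/- {std_w:.2f}")
```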
http://arxiv.org/abs/2307.04419v1
20230710085104
Constraints on primordial curvature power spectrum with pulsar timing arrays
[ "Zhi-Qiang You", "Zhu Yi", "You Wu" ]
gr-qc
[ "gr-qc", "astro-ph.CO" ]
[ * Received / Accepted ======================== § INTRODUCTION Recently, four pulsar timing array (PTA) collaborations, namely NANOGrav <cit.>, PPTA <cit.>, EPTA <cit.>, and CPTA <cit.>, all announced the strong evidence of a stochastic signal consistent the Hellings-Downs angular correlations, pointing to the gravitational-waves (GW) origin of this signal. Assuming the signal originates from an ensemble of binary supermassive black hole inspirals and a fiducial f^-2/3 characteristic-strain spectrum, the strain amplitude is estimated to be at the order of ∼ 10^-15 at a reference frequency of 1  yr^-1 <cit.>. However, the origin of this signal, whether from supermassive black hole binaries or other cosmological sources, is still under investigation <cit.>. A promising candidate to explain the signal is the scalar-induced gravitational waves (SIGWs) accompanying the formation of primordial black holes <cit.>. Other physical phenomena (see e.g. <cit.>) can also be the sources in the PTA band. The SIGW is sourced from scalar perturbations generated during the inflationary epoch <cit.>. They offer valuable insights into the physics of the early Universe and can be detected not only by PTAs but also by space-based GW detectors such as LISA <cit.>, Taiji <cit.>, TianQin <cit.>, and DECIGO <cit.>. Significant SIGWs require the amplitude of the power spectrum of the primordial curvature perturbations to be around 𝒜_ζ∼𝒪(0.01) which is approximately seven orders of magnitude larger than the constraints from large-scale measurements of cosmic microwave background (CMB) anisotropy observation, 𝒜_ζ= 2.1× 10^-9 <cit.>. Therefore, to account for the observed gravitational wave signal detected by PTAs, the curvature power spectrum must possess at least one high peak. This can be achieved through inflation models with a transition ultra-slow-roll phase <cit.>. To characterize a single-peak primordial curvature power spectrum, various parameterizations such as the δ-function form, box form, lognormal form, or broken power law form are employed. Among them, the δ-function, box and lognormal parameterizations are investigated in Ref. <cit.>, where the constraints from the PTAs data on the parameters of these models are also given. The constraints on the broken power law form are provided in Ref. <cit.>, where the role of non-Gaussianity is also considered. However, the analysis does not determine which model among these is the most compatible with the PTAs signal. For the multi-peak primordial curvature power spectrum model <cit.>, we parameterize the primordial curvature power spectrum with the double lognormal form. In this study, we aim to determine whether the PTAs signal favors a single-peak or multi-peak primordial curvature power spectrum and identify the most compatible model with the PTAs signal. The organization of this paper is as follows: Section II provides a brief review of the scalar-induced gravitational waves. Section III presents the constraints on the power spectrum for different forms and identifies the best-fitted model based on the PTAs signal. Finally, Section IV summarizes our findings and provides concluding remarks. § SCALAR-INDUCED GRAVITATIONAL WAVES The large scalar perturbations seeded from the primordial curvature perturbation generated during inflation can act as the source to induce GWs at the radiation domination epoch. In this section, we give a brief review of the SIGW. 
In the cosmological background, the metric with perturbation in Newtonian gauge is d s^2= -a^2(η)(1+2Φ)dη^2 +a^2(η)[(1-2Φ)δ_ij+1/2h_ij]d x^i d x^j, where a is the scale factor of the Universe, η is the conformal time, dη =dt/a(t), Φ is the Bardeen potential, and h_ij are the tensor perturbations. The tensor perturbations in the Fourier space can be obtained by the transform h_ij(x,η)=∫ d^3k e^ik·x/(2π)^3/2 [h_k(η)e_ij(k)+h̃_k(η)ẽ_ij(k)], where the plus and cross polarization tensors e_ij(k) and ẽ_ij(k) are e_ij(k)=1/√(2)[e_i(k)e_j(k)-ẽ_i(k)ẽ_j(k)], ẽ_ij(k)=1/√(2)[e_i(k)ẽ_j(k)+ẽ_i(k)e_j(k)], and the basis vectors satisfying e·ẽ= e ·k= ẽ·k. For the source from the second order of linear scalar perturbations, the tensor perturbations with either polarization in the Fourier space satisfy <cit.> h”_k+2ℋh'_k+k^2h_k=4S_k, where ℋ=a'/a is the conformal Hubble parameter and a prime denotes the derivative with respect to the conformal time η. The second order source S_k is S_k= ∫d^3k̃/(2π)^3/2e_ij(k)k̃^ik̃^j [2Φ_k̃Φ_k-k̃1/2+ 1/ℋ^2(Φ'_k̃+ℋΦ_k̃) (Φ'_k-k̃+ℋΦ_k-k̃)]. The Bardeen potential in the Fourier space, Φ_k, can be connected to the primordial curvature perturbations ζ_k produced during inflation epoch through the transfer function, Φ_k=3+3w/5+3wT(k,η) ζ_k, where w is the equation of state parameter and the transfer function T(k,η) satisfy T(k,η)=3[sin(k η/√(3))-(kη/√(3)) cos(kη/√(3))/(kη/√(3))^3]. The equation of the tensor perturbations (<ref>) can be solved by the Green function method and the solution is h_k(η)=4/a(η)∫_η_k^ηd η̃g_k(η,η̃)a(η̃)S_k(η̃), where g_k is the corresponding Green function with the form g_k(η,η')=sin[k(η-η')]/k. The definition of the power spectrum of tensor perturbations h_k is ⟨ h_k(η)h_k̃(η)⟩ =2π^2/k^3δ^(3)(k+k̃)𝒫_h(k,η). Combining it with the solution of h_k (<ref>), we have <cit.> 𝒫_h(k,η)= 4∫_0^∞dv∫_|1-v|^1+vdu [4v^2-(1-u^2+v^2)^2/4uv]^2 × I_RD^2(u,v,x)𝒫_ζ(k v)𝒫_ζ(ku), where u=|k-k̃|/k, v=k̃/k, x=kη, and 𝒫_ζ is the power spectrum of the curvature perturbation which is parameterized in the following section. The integral kernel I_RD is I_RD(u, v, x)= ∫_1^x dy y sin(x-y){3T(uy)T(vy) +y[T(vy)u T'(uy)+v T'(vy) T(uy)] +y^2 u v T'(uy) T'(vy)}. The definition of the energy density of gravitational waves is Ω_GW(k,η)=1/24(k/aH)^2𝒫_h(k,η). By combining the equation (<ref>) and the definition (<ref>), we obtain <cit.> Ω_GW(k,η)= 1/6(k/aH)^2∫_0^∞dv∫_|1-v|^1+vdu×[4v^2-(1-u^2+v^2)^2/4uv]^2 ×I_RD^2(u, v, x)𝒫_ζ(kv)𝒫_ζ(ku), where I_RD^2 represents the oscillation time average of the integral kernel. The energy density of gravitational waves undergoes the same evolution as radiation. Exploiting this property, it becomes straightforward to determine the energy density of gravitational waves at present, which is Ω_GW(k,η_0)=c_gΩ_r,0Ω_GW(k,η)/Ω_r(η), where Ω_r(η)=1 is the energy density of the radiation at the generation of SIGWs during radiation domination, Ω_r,0 is that at present, and <cit.> c_g=0.387(g_*,s^4g_*^-3/106.75)^-1/3. § MODELS AND RESULTS At large scales, the observational data from the CMB impose a constraint on the amplitude of the primordial curvature power spectrum, which is limited to 𝒜_ζ = 2.1 × 10^-9 <cit.>. However, there are minimal constraints on the primordial curvature power spectrum at small scales. Consequently, in order to generate significant SIGWs, it is necessary to enhance the primordial curvature power spectrum to approximately 𝒜_ζ∼𝒪(0.01) at small scales. 
Thus, the profile of the primordial curvature spectrum exhibits at least one pronounced peak at intermediate scales, while displaying lower amplitudes at both large and very small scales. In this section, we consider the primordial curvature spectrum with single-peak and double-peak, respectively. For the single peak, the commonly employed parameterizations of the primordial curvature spectrum are the simple δ function form 𝒫_ζ = Aδ(ln k -ln k_p), the box form 𝒫_ζ = A Θ(k - k_min) Θ(k_max - k), the lognormal form 𝒫_ζ = A/√(2π)Δexp[-1/2(ln k -ln k_p/Δ)^2], and the broken power law form 𝒫_ζ =A(α+β)/β(k/k_p)^-α+α(k/k_p)^β+A_*(k/k_*)^n_s_*-1. For the double peak model, we parameterize the primordial curvature spectrum with a double lognormal form 𝒫_ζ= A_1/√(2π)Δ_1exp[-1/2(ln k -ln k_p_1/Δ_1)^2]+ A_2/√(2π)Δ_2exp[-1/2(ln k -ln k_p_2/Δ_2)^2]. We conducted a Bayesian analysis of the NANOGrav 15 yrs data to investigate the parameterization of the power spectrum of the primordial curvature perturbation, as described by Eq.(<ref>), Eq. (<ref>), Eq. (<ref>), Eq. (<ref>), and Eq. (<ref>). In our analysis, we utilized the 14 frequency bins reported in <cit.> to fit the posterior distributions of the model parameters. The Bilby code <cit.> was employed for the analysis, utilizing the dynesty algorithm for nested sampling <cit.> . The log-likelihood function was constructed by evaluating the energy density of SIGWs at the 14 specific frequency bins. Subsequently, we computed the sum of the logarithm of the probability density functions obtained from 14 independent kernel density estimates corresponding to these frequency values <cit.>. The equation for the likelihood function is presented as ℒ(Θ)=∏_i=1^14ℒ_i(Ω_GW(f_i, Θ)), where Θ is the collection of parameters for δ-function, box, lognormal, broken power law, and double lognormal models. These parameters and their priors are shown in Table <ref>. We divide these models into two categories. The first one is single-peak power spectrum models, including δ-function (<ref>), box (<ref>), lognormal (<ref>) and broken power law model (<ref>), while the second one is double-peak model, including double lognormal model (<ref>). The posterior distributions for the parameters in Eq. (<ref>), Eq. (<ref>), Eq. (<ref>), Eq. (<ref>), and Eq. (<ref>) are depicted in Figure <ref>, Figrue <ref>, Figure <ref>, Figure <ref>, and Figure <ref>, respectively. We summarize the mean values and 1-σ confidence intervals for parameters of these models in Table <ref>. When comparing the results of the double-peak lognormal primordial curvature power spectrum with the single-peak models using δ, box, lognormal, and broken power law forms, the Bayesian analysis yields no support in favor of the single-peak models with respective Bayes factors of lnℬ= 0.42, lnℬ=0.26, lnℬ =0.46, and lnℬ =0.45. Thus, the PTAs data show no significant evidence for or against the single-peak primordial curvature power spectrum over the double-peak primordial curvature power spectrum. Due to the very close values of logarithmic evidence, it is also difficult to favor which single-peak model provides a better fit. After obtaining the best-fit values from posteriors, we present the power spectrum of the primordial curvature perturbations in Figure <ref> and the corresponding SIGWs in Figure <ref>. 
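For concreteness, the sampling setup described above can be sketched as follows, assuming a helper function omega_gw(f, A, k_p, Δ) that evaluates Eq. (<ref>) for the lognormal spectrum at the 14 bin frequencies, an array bin_freqs of those frequencies, and a list kdes of the corresponding kernel density estimates (e.g. scipy.stats.gaussian_kde objects built from the published bin posteriors); the parameter names, prior ranges, and sampler settings are illustrative rather than those of the actual analysis.

```python
# Schematic fit of the lognormal spectrum to the 14 NANOGrav frequency bins.
# `omega_gw(f, A, k_p, delta)`, `bin_freqs`, and `kdes` are assumed to be provided:
# the first evaluates the present-day SIGW energy density, the second holds the 14
# bin frequencies in Hz, and the third the corresponding kernel density estimates.
import numpy as np
import bilby

class SIGWLikelihood(bilby.Likelihood):
    def __init__(self, freqs, kdes):
        super().__init__(parameters={"log10_A": None, "log10_kp": None, "delta": None})
        self.freqs, self.kdes = freqs, kdes

    def log_likelihood(self):
        A = 10.0 ** self.parameters["log10_A"]
        kp = 10.0 ** self.parameters["log10_kp"]         # peak scale in Mpc^-1
        delta = self.parameters["delta"]
        omega = omega_gw(self.freqs, A, kp, delta)       # Omega_GW at the 14 bins
        # Sum of log-PDFs of the independent per-bin kernel density estimates.
        return sum(kde.logpdf(np.log10(om))[0] for kde, om in zip(self.kdes, omega))

priors = {
    "log10_A": bilby.core.prior.Uniform(-3, 1, name="log10_A"),
    "log10_kp": bilby.core.prior.Uniform(5, 12, name="log10_kp"),
    "delta": bilby.core.prior.Uniform(0.1, 3, name="delta"),
}

result = bilby.run_sampler(likelihood=SIGWLikelihood(bin_freqs, kdes), priors=priors,
                           sampler="dynesty", nlive=500, outdir="outdir",
                           label="lognormal")
result.plot_corner()
```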
In Figure <ref>, the orange thin solid line, blue thick solid line, red dashed line, black dotted line, and green dash-dotted line denote the primordial curvature power spectrum with the δ-function, box, lognormal, broken power law, and double-lognormal parameterizations, respectively. The peak scale of these parameterizations is around k_p∼ 10^8  Mpc^-1, and the amplitude of the primordial curvature power spectrum of these parameterizations at the peak is around A∼ 0.1. In Figure <ref>, the orange thin solid line, blue thick solid line, red dashed line, black dotted line, and green dash-dotted line represent the energy density of the SIGW from the primordial curvature power spectrum with the δ-function, box, lognormal, broken power law, and double-lognormal parameterizations, respectively. If the PTAs data indeed arises from the SIGWs, this PTAs signal can also be detected by space-based detectors in the future. And the parameterizations of the primordial curvature power spectrum can also be distinguished by the space-based detectors. § CONCLUSION The stochastic signal detected by the NANOGrav, PPTA, EPTA, and CPTA collaborations points to the GW origin and can be explained by the SIGWs, where the scalar perturbations are seeded from the primordial curvature perturbations. To determine the SIGWs model that best fits the observed stochastic signal, we explore both single-peak and double-peak parameterizations for the power spectrum of the primordial curvature perturbations. For the single-peak scenarios, we consider parameterizations using the δ-function form, box form, lognormal form, and broken power law form. Additionally, in the double-peak scenario, we employ the double lognormal form. The best-fit values for the scale and amplitude of the primordial curvature perturbations at the peak, obtained from these five parameterizations, are approximately k_p ∼ 10^8  Mpc^-1 and A∼ 0.1. Comparing the results with the double-peak scenarios, the Bayesian analysis provides no support in favor of the single-peak models, with respective Bayes factors of lnℬ= 0.42, lnℬ=0.26, lnℬ =0.46, and lnℬ =0.45 for the δ-function, box, lognormal, and broken power law forms, respectively. If the stochastic signal observed by the PTAs indeed originates from SIGWs, it may also be detectable by space-based gravitational wave detectors in the future, potentially allowing for the distinction between different types of SIGWs. Although our analysis in this paper focuses on the double-peak model, our conclusion can be extended to multi-peak models. In conclusion, the recent gravitational wave background signal can be explained by SIGWs, without preference for a single peak in the primordial curvature power spectrum over a multi-peak configuration. We thank Xiao-Jing Liu for useful discussions. ZQY is supported by the China Postdoctoral Science Foundation Fellowship No. 2022M720482. ZY is supported by the National Natural Science Foundation of China under Grant No. 12205015 and the supporting fund for young researcher of Beijing Normal University under Grant No. 28719/310432102. 100 NANOGrav:2023hde NANOGrav collaboration, The NANOGrav 15 yr Data Set: Observations and Timing of 68 Millisecond Pulsars, https://doi.org/10.3847/2041-8213/acda9aAstrophys. J. Lett. 951 (2023) L9 [https://arxiv.org/abs/2306.162172306.16217]. NANOGrav:2023gor NANOGrav collaboration, The NANOGrav 15 yr Data Set: Evidence for a Gravitational-wave Background, https://doi.org/10.3847/2041-8213/acdac6Astrophys. J. Lett. 
951 (2023) L8 [https://arxiv.org/abs/2306.162132306.16213]. Zic:2023gta A. Zic et al., The Parkes Pulsar Timing Array Third Data Release, https://arxiv.org/abs/2306.162302306.16230. Reardon:2023gzh D.J. Reardon et al., Search for an Isotropic Gravitational-wave Background with the Parkes Pulsar Timing Array, https://doi.org/10.3847/2041-8213/acdd02Astrophys. J. Lett. 951 (2023) L6 [https://arxiv.org/abs/2306.162152306.16215]. Antoniadis:2023lym J. Antoniadis et al., The second data release from the European Pulsar Timing Array I. The dataset and timing analysis, https://arxiv.org/abs/2306.162242306.16224. Antoniadis:2023ott J. Antoniadis et al., The second data release from the European Pulsar Timing Array III. Search for gravitational wave signals, https://arxiv.org/abs/2306.162142306.16214. Xu:2023wog H. Xu et al., Searching for the Nano-Hertz Stochastic Gravitational Wave Background with the Chinese Pulsar Timing Array Data Release I, https://doi.org/10.1088/1674-4527/acdfa5Res. Astron. Astrophys. 23 (2023) 075024 [https://arxiv.org/abs/2306.162162306.16216]. NANOGrav:2023hvm NANOGrav collaboration, The NANOGrav 15 yr Data Set: Search for Signals from New Physics, https://doi.org/10.3847/2041-8213/acdc91Astrophys. J. Lett. 951 (2023) L11 [https://arxiv.org/abs/2306.162192306.16219]. Antoniadis:2023xlr J. Antoniadis et al., The second data release from the European Pulsar Timing Array: V. Implications for massive black holes, dark matter and the early Universe, https://arxiv.org/abs/2306.162272306.16227. Franciolini:2023pbf G. Franciolini, A. Iovino, Junior., V. Vaskonen and H. Veermae, The recent gravitational wave observation by pulsar timing arrays and primordial black holes: the importance of non-gaussianities, https://arxiv.org/abs/2306.171492306.17149. Liu:2023ymk L. Liu, Z.-C. Chen and Q.-G. Huang, Implications for the non-Gaussianity of curvature perturbation from pulsar timing arrays, https://arxiv.org/abs/2307.011022307.01102. Vagnozzi:2023lwo S. Vagnozzi, Inflationary interpretation of the stochastic gravitational wave background signal detected by pulsar timing array experiments, https://arxiv.org/abs/2306.169122306.16912. Cai:2023dls Y.-F. Cai, X.-C. He, X. Ma, S.-F. Yan and G.-W. Yuan, Limits on scalar-induced gravitational waves from the stochastic background by pulsar timing array observations, https://arxiv.org/abs/2306.178222306.17822. Wang:2023ost S. Wang, Z.-C. Zhao, J.-P. Li and Q.-H. Zhu, Exploring the Implications of 2023 Pulsar Timing Array Datasets for Scalar-Induced Gravitational Waves and Primordial Black Holes, https://arxiv.org/abs/2307.005722307.00572. Yi:2023mbm Z. Yi, Q. Gao, Y. Gong, Y. Wang and F. Zhang, The waveform of the scalar induced gravitational waves in light of Pulsar Timing Array data, https://arxiv.org/abs/2307.024672307.02467. Bi:2023tib Y.-C. Bi, Y.-M. Wu, Z.-C. Chen and Q.-G. Huang, Implications for the Supermassive Black Hole Binaries from the NANOGrav 15-year Data Set, https://arxiv.org/abs/2307.007222307.00722. Wu:2023hsa Y.-M. Wu, Z.-C. Chen and Q.-G. Huang, Cosmological Interpretation for the Stochastic Signal in Pulsar Timing Arrays, https://arxiv.org/abs/2307.031412307.03141. Zhu:2023faa Q.-H. Zhu, Z.-C. Zhao and S. Wang, Joint implications of BBN, CMB, and PTA Datasets for Scalar-Induced Gravitational Waves of Second and Third orders, https://arxiv.org/abs/2307.030952307.03095. Franciolini:2023wjm G. Franciolini, D. Racco and F. 
Rompineve, Footprints of the QCD Crossover on Cosmological Gravitational Waves at Pulsar Timing Arrays, https://arxiv.org/abs/2306.171362306.17136. Zeldovich:1967lct Y.B. Zel'dovich and I.D. Novikov, The Hypothesis of Cores Retarded during Expansion and the Hot Cosmological Model, Soviet Astron. AJ (Engl. Transl. ), 10 (1967) 602. Hawking:1971ei S. Hawking, Gravitationally collapsed objects of very low mass, Mon. Not. Roy. Astron. Soc. 152 (1971) 75. Carr:1974nx B.J. Carr and S.W. Hawking, Black holes in the early Universe, Mon. Not. Roy. Astron. Soc. 168 (1974) 399. Chen:2018czv Z.-C. Chen and Q.-G. Huang, Merger Rate Distribution of Primordial-Black-Hole Binaries, https://doi.org/10.3847/1538-4357/aad6e2Astrophys. J. 864 (2018) 61 [https://arxiv.org/abs/1801.103271801.10327]. Chen:2018rzo Z.-C. Chen, F. Huang and Q.-G. Huang, Stochastic Gravitational-wave Background from Binary Black Holes and Binary Neutron Stars and Implications for LISA, https://doi.org/10.3847/1538-4357/aaf581Astrophys. J. 871 (2019) 97 [https://arxiv.org/abs/1809.103601809.10360]. Liu:2018ess L. Liu, Z.-K. Guo and R.-G. Cai, Effects of the surrounding primordial black holes on the merger rate of primordial black hole binaries, https://doi.org/10.1103/PhysRevD.99.063523Phys. Rev. D 99 (2019) 063523 [https://arxiv.org/abs/1812.053761812.05376]. Liu:2019rnx L. Liu, Z.-K. Guo and R.-G. Cai, Effects of the merger history on the merger rate density of primordial black hole binaries, https://doi.org/10.1140/epjc/s10052-019-7227-0Eur. Phys. J. C 79 (2019) 717 [https://arxiv.org/abs/1901.076721901.07672]. Chen:2019irf Z.-C. Chen and Q.-G. Huang, Distinguishing Primordial Black Holes from Astrophysical Black Holes by Einstein Telescope and Cosmic Explorer, https://doi.org/10.1088/1475-7516/2020/08/039JCAP 08 (2020) 039 [https://arxiv.org/abs/1904.023961904.02396]. Liu:2020cds L. Liu, Z.-K. Guo, R.-G. Cai and S.P. Kim, Merger rate distribution of primordial black hole binaries with electric charges, https://doi.org/10.1103/PhysRevD.102.043508Phys. Rev. D 102 (2020) 043508 [https://arxiv.org/abs/2001.029842001.02984]. Liu:2020vsy L. Liu, O. Christiansen, Z.-K. Guo, R.-G. Cai and S.P. Kim, Gravitational and electromagnetic radiation from binary black holes with electric and magnetic charges: Circular orbits on a cone, https://doi.org/10.1103/PhysRevD.102.103520Phys. Rev. D 102 (2020) 103520 [https://arxiv.org/abs/2008.023262008.02326]. Liu:2020bag L. Liu, O. Christiansen, W.-H. Ruan, Z.-K. Guo, R.-G. Cai and S.P. Kim, Gravitational and electromagnetic radiation from binary black holes with electric and magnetic charges: elliptical orbits on a cone, https://doi.org/10.1140/epjc/s10052-021-09849-4Eur. Phys. J. C 81 (2021) 1048 [https://arxiv.org/abs/2011.135862011.13586]. Wu:2020drm Y. Wu, Merger history of primordial black-hole binaries, https://doi.org/10.1103/PhysRevD.101.083008Phys. Rev. D 101 (2020) 083008 [https://arxiv.org/abs/2001.038332001.03833]. Chen:2021nxo Z.-C. Chen, C. Yuan and Q.-G. Huang, Confronting the primordial black hole scenario with the gravitational-wave events detected by LIGO-Virgo, https://doi.org/10.1016/j.physletb.2022.137040Phys. Lett. B 829 (2022) 137040 [https://arxiv.org/abs/2108.117402108.11740]. Liu:2022wtq L. Liu and S.P. Kim, Merger rate of charged black holes from the two-body dynamical capture, https://doi.org/10.1088/1475-7516/2022/03/059JCAP 03 (2022) 059 [https://arxiv.org/abs/2201.025812201.02581]. Chen:2022fda Z.-C. Chen, S.-S. Du, Q.-G. Huang and Z.-Q. 
You, Constraints on primordial-black-hole population and cosmic expansion history from GWTC-3, https://doi.org/10.1088/1475-7516/2023/03/024JCAP 03 (2023) 024 [https://arxiv.org/abs/2205.112782205.11278]. Chen:2022qvg Z.-C. Chen, S.P. Kim and L. Liu, Gravitational and electromagnetic radiation from binary black holes with electric and magnetic charges: hyperbolic orbits on a cone, https://doi.org/10.1088/1572-9494/acce98Commun. Theor. Phys. 75 (2023) 065401 [https://arxiv.org/abs/2210.155642210.15564]. Liu:2022iuf L. Liu, Z.-Q. You, Y. Wu and Z.-C. Chen, Constraining the merger history of primordial-black-hole binaries from GWTC-3, https://doi.org/10.1103/PhysRevD.107.063035Phys. Rev. D 107 (2023) 063035 [https://arxiv.org/abs/2210.160942210.16094]. Zheng:2022wqo L.-M. Zheng, Z. Li, Z.-C. Chen, H. Zhou and Z.-H. Zhu, Towards a reliable reconstruction of the power spectrum of primordial curvature perturbation on small scales from GWTC-3, https://doi.org/10.1016/j.physletb.2023.137720Phys. Lett. B 838 (2023) 137720 [https://arxiv.org/abs/2212.055162212.05516]. Zhu:2018lif X.-J. Zhu, W. Cui and E. Thrane, The minimum and maximum gravitational-wave background from supermassive binary black holes, https://doi.org/10.1093/mnras/sty2849Mon. Not. Roy. Astron. Soc. 482 (2019) 2588 [https://arxiv.org/abs/1806.023461806.02346]. Chen:2021wdo Z.-C. Chen, C. Yuan and Q.-G. Huang, Non-tensorial gravitational wave background in NANOGrav 12.5-year data set, https://doi.org/10.1007/s11433-021-1797-ySci. China Phys. Mech. Astron. 64 (2021) 120412 [https://arxiv.org/abs/2101.068692101.06869]. Wu:2021kmd Y.-M. Wu, Z.-C. Chen and Q.-G. Huang, Constraining the Polarization of Gravitational Waves with the Parkes Pulsar Timing Array Second Data Release, https://doi.org/10.3847/1538-4357/ac35ccAstrophys. J. 925 (2022) 37 [https://arxiv.org/abs/2108.105182108.10518]. Chen:2021ncc Z.-C. Chen, Y.-M. Wu and Q.-G. Huang, Searching for isotropic stochastic gravitational-wave background in the international pulsar timing array second data release, https://doi.org/10.1088/1572-9494/ac7cdfCommun. Theor. Phys. 74 (2022) 105402 [https://arxiv.org/abs/2109.002962109.00296]. Chen:2022azo Z.-C. Chen, Y.-M. Wu and Q.-G. Huang, Search for the Gravitational-wave Background from Cosmic Strings with the Parkes Pulsar Timing Array Second Data Release, https://doi.org/10.3847/1538-4357/ac86cbAstrophys. J. 936 (2022) 20 [https://arxiv.org/abs/2205.071942205.07194]. PPTA:2022eul PPTA collaboration, Constraining ultralight vector dark matter with the Parkes Pulsar Timing Array second data release, https://doi.org/10.1103/PhysRevD.106.L081101Phys. Rev. D 106 (2022) L081101 [https://arxiv.org/abs/2210.038802210.03880]. IPTA:2023ero IPTA collaboration, Searching for continuous Gravitational Waves in the second data release of the International Pulsar Timing Array, https://doi.org/10.1093/mnras/stad812Mon. Not. Roy. Astron. Soc. 521 (2023) 5077 [https://arxiv.org/abs/2303.107672303.10767]. Wu:2023pbt Y.-M. Wu, Z.-C. Chen and Q.-G. Huang, Search for stochastic gravitational-wave background from massive gravity in the NANOGrav 12.5-year dataset, https://doi.org/10.1103/PhysRevD.107.042003Phys. Rev. D 107 (2023) 042003 [https://arxiv.org/abs/2302.002292302.00229]. Wu:2023dnp Y.-M. Wu, Z.-C. Chen and Q.-G. Huang, Pulsar timing residual induced by ultralight tensor dark matter, https://arxiv.org/abs/2305.080912305.08091. tomita1967non K. 
Tomita, Non-linear theory of gravitational instability in the expanding universe, Progress of Theoretical Physics 37 (1967) 831. Saito:2008jc R. Saito and J. Yokoyama, Gravitational wave background as a probe of the primordial black hole abundance, https://doi.org/10.1103/PhysRevLett.102.161101Phys. Rev. Lett. 102 (2009) 161101 [https://arxiv.org/abs/0812.43390812.4339]. Young:2014ana S. Young, C.T. Byrnes and M. Sasaki, Calculating the mass fraction of primordial black holes, https://doi.org/10.1088/1475-7516/2014/07/045JCAP 1407 (2014) 045 [https://arxiv.org/abs/1405.70231405.7023]. Yuan:2019udt C. Yuan, Z.-C. Chen and Q.-G. Huang, Probing primordial–black-hole dark matter with scalar induced gravitational waves, https://doi.org/10.1103/PhysRevD.100.081301Phys. Rev. D 100 (2019) 081301 [https://arxiv.org/abs/1906.115491906.11549]. Yuan:2019wwo C. Yuan, Z.-C. Chen and Q.-G. Huang, Log-dependent slope of scalar induced gravitational waves in the infrared regions, https://doi.org/10.1103/PhysRevD.101.043019Phys. Rev. D 101 (2020) 043019 [https://arxiv.org/abs/1910.090991910.09099]. Chen:2019xse Z.-C. Chen, C. Yuan and Q.-G. Huang, Pulsar Timing Array Constraints on Primordial Black Holes with NANOGrav 11-Year Dataset, https://doi.org/10.1103/PhysRevLett.124.251101Phys. Rev. Lett. 124 (2020) 251101 [https://arxiv.org/abs/1910.122391910.12239]. Yuan:2019fwv C. Yuan, Z.-C. Chen and Q.-G. Huang, Scalar induced gravitational waves in different gauges, https://doi.org/10.1103/PhysRevD.101.063018Phys. Rev. D 101 (2020) 063018 [https://arxiv.org/abs/1912.008851912.00885]. Ananda:2006af K.N. Ananda, C. Clarkson and D. Wands, The Cosmological gravitational wave background from primordial density perturbations, https://doi.org/10.1103/PhysRevD.75.123518Phys. Rev. D 75 (2007) 123518 [https://arxiv.org/abs/gr-qc/0612013gr-qc/0612013]. Baumann:2007zm D. Baumann, P.J. Steinhardt, K. Takahashi and K. Ichiki, Gravitational Wave Spectrum Induced by Primordial Scalar Perturbations, https://doi.org/10.1103/PhysRevD.76.084019Phys. Rev. D 76 (2007) 084019 [https://arxiv.org/abs/hep-th/0703290hep-th/0703290]. Alabidi:2012ex L. Alabidi, K. Kohri, M. Sasaki and Y. Sendouda, Observable Spectra of Induced Gravitational Waves from Inflation, https://doi.org/10.1088/1475-7516/2012/09/017JCAP 09 (2012) 017 [https://arxiv.org/abs/1203.46631203.4663]. Nakama:2016gzw T. Nakama, J. Silk and M. Kamionkowski, Stochastic gravitational waves associated with the formation of primordial black holes, https://doi.org/10.1103/PhysRevD.95.043511Phys. Rev. D 95 (2017) 043511 [https://arxiv.org/abs/1612.062641612.06264]. Kohri:2018awv K. Kohri and T. Terada, Semianalytic calculation of gravitational wave spectrum nonlinearly induced from primordial curvature perturbations, https://doi.org/10.1103/PhysRevD.97.123532Phys. Rev. D 97 (2018) 123532 [https://arxiv.org/abs/1804.085771804.08577]. Cheng:2018yyr S.-L. Cheng, W. Lee and K.-W. Ng, Primordial black holes and associated gravitational waves in axion monodromy inflation, https://doi.org/10.1088/1475-7516/2018/07/001JCAP 07 (2018) 001 [https://arxiv.org/abs/1801.090501801.09050]. Cai:2019amo R.-G. Cai, S. Pi, S.-J. Wang and X.-Y. Yang, Resonant multiple peaks in the induced gravitational waves, https://doi.org/10.1088/1475-7516/2019/05/013JCAP 05 (2019) 013 [https://arxiv.org/abs/1901.101521901.10152]. Cai:2018dig R.-g. Cai, S. Pi and M. Sasaki, Gravitational Waves Induced by non-Gaussian Scalar Perturbations, https://doi.org/10.1103/PhysRevLett.122.201101Phys. Rev. Lett. 
122 (2019) 201101 [https://arxiv.org/abs/1810.110001810.11000]. Cai:2019elf R.-G. Cai, S. Pi, S.-J. Wang and X.-Y. Yang, Pulsar Timing Array Constraints on the Induced Gravitational Waves, https://doi.org/10.1088/1475-7516/2019/10/059JCAP 10 (2019) 059 [https://arxiv.org/abs/1907.063721907.06372]. Cai:2019bmk R.-G. Cai, Z.-K. Guo, J. Liu, L. Liu and X.-Y. Yang, Primordial black holes and gravitational waves from parametric amplification of curvature perturbations, https://doi.org/10.1088/1475-7516/2020/06/013JCAP 06 (2020) 013 [https://arxiv.org/abs/1912.104371912.10437]. Cai:2020fnq R.-G. Cai, Y.-C. Ding, X.-Y. Yang and Y.-F. Zhou, Constraints on a mixed model of dark matter particles and primordial black holes from the galactic 511 keV line, https://doi.org/10.1088/1475-7516/2021/03/057JCAP 03 (2021) 057 [https://arxiv.org/abs/2007.118042007.11804]. Pi:2020otn S. Pi and M. Sasaki, Gravitational Waves Induced by Scalar Perturbations with a Lognormal Peak, https://doi.org/10.1088/1475-7516/2020/09/037JCAP 09 (2020) 037 [https://arxiv.org/abs/2005.123062005.12306]. Domenech:2020kqm G. Domènech, S. Pi and M. Sasaki, Induced gravitational waves as a probe of thermal history of the universe, https://doi.org/10.1088/1475-7516/2020/08/017JCAP 08 (2020) 017 [https://arxiv.org/abs/2005.123142005.12314]. Liu:2021jnw L. Liu, X.-Y. Yang, Z.-K. Guo and R.-G. Cai, Testing primordial black hole and measuring the Hubble constant with multiband gravitational-wave observations, https://doi.org/10.1088/1475-7516/2023/01/006JCAP 01 (2023) 006 [https://arxiv.org/abs/2112.054732112.05473]. Papanikolaou:2021uhe T. Papanikolaou, C. Tzerefos, S. Basilakos and E.N. Saridakis, Scalar induced gravitational waves from primordial black hole Poisson fluctuations in f(R) gravity, https://doi.org/10.1088/1475-7516/2022/10/013JCAP 10 (2022) 013 [https://arxiv.org/abs/2112.150592112.15059]. Papanikolaou:2022hkg T. Papanikolaou, C. Tzerefos, S. Basilakos and E.N. Saridakis, No constraints for f(T) gravity from gravitational waves induced from primordial black hole fluctuations, https://doi.org/10.1140/epjc/s10052-022-11157-4Eur. Phys. J. C 83 (2023) 31 [https://arxiv.org/abs/2205.060942205.06094]. Danzmann:1997hm K. Danzmann, LISA: An ESA cornerstone mission for a gravitational wave observatory, https://doi.org/10.1088/0264-9381/14/6/002Class. Quant. Grav. 14 (1997) 1399. Audley:2017drz LISA collaboration, Laser Interferometer Space Antenna, https://arxiv.org/abs/1702.007861702.00786. Hu:2017mde W.-R. Hu and Y.-L. Wu, The Taiji Program in Space for gravitational wave physics and the nature of gravity, https://doi.org/10.1093/nsr/nwx116Natl. Sci. Rev. 4 (2017) 685. Luo:2015ght TianQin collaboration, TianQin: a space-borne gravitational wave detector, https://doi.org/10.1088/0264-9381/33/3/035010Class. Quant. Grav. 33 (2016) 035010 [https://arxiv.org/abs/1512.020761512.02076]. Gong:2021gvw Y. Gong, J. Luo and B. Wang, Concepts and status of Chinese space gravitational wave detection projects, https://doi.org/10.1038/s41550-021-01480-3Nature Astron. 5 (2021) 881 [https://arxiv.org/abs/2109.074422109.07442]. Kawamura:2011zz S. Kawamura et al., The Japanese space gravitational wave antenna: DECIGO, https://doi.org/10.1088/0264-9381/28/9/094011Class. Quant. Grav. 28 (2011) 094011. Akrami:2018odb Planck collaboration, Planck 2018 results. X. Constraints on inflation, https://doi.org/10.1051/0004-6361/201833887Astron. Astrophys. 641 (2020) A10 [https://arxiv.org/abs/1807.062111807.06211]. Martin:2012pe J. Martin, H. 
Motohashi and T. Suyama, Ultra Slow-Roll Inflation and the non-Gaussianity Consistency Relation, https://doi.org/10.1103/PhysRevD.87.023514Phys. Rev. D 87 (2013) 023514 [https://arxiv.org/abs/1211.00831211.0083]. Motohashi:2014ppa H. Motohashi, A.A. Starobinsky and J. Yokoyama, Inflation with a constant rate of roll, https://doi.org/10.1088/1475-7516/2015/09/018JCAP 09 (2015) 018 [https://arxiv.org/abs/1411.50211411.5021]. Yi:2017mxs Z. Yi and Y. Gong, On the constant-roll inflation, https://doi.org/10.1088/1475-7516/2018/03/052JCAP 03 (2018) 052 [https://arxiv.org/abs/1712.074781712.07478]. Garcia-Bellido:2017mdw J. Garcia-Bellido and E. Ruiz Morales, Primordial black holes from single field models of inflation, https://doi.org/10.1016/j.dark.2017.09.007Phys. Dark Univ. 18 (2017) 47 [https://arxiv.org/abs/1702.039011702.03901]. Germani:2017bcs C. Germani and T. Prokopec, On primordial black holes from an inflection point, https://doi.org/10.1016/j.dark.2017.09.001Phys. Dark Univ. 18 (2017) 6 [https://arxiv.org/abs/1706.042261706.04226]. Motohashi:2017kbs H. Motohashi and W. Hu, Primordial Black Holes and Slow-Roll Violation, https://doi.org/10.1103/PhysRevD.96.063503Phys. Rev. D 96 (2017) 063503 [https://arxiv.org/abs/1706.067841706.06784]. Ezquiaga:2017fvi J.M. Ezquiaga, J. Garcia-Bellido and E. Ruiz Morales, Primordial Black Hole production in Critical Higgs Inflation, https://doi.org/10.1016/j.physletb.2017.11.039Phys. Lett. B 776 (2018) 345 [https://arxiv.org/abs/1705.048611705.04861]. Gong:2017qlj H. Di and Y. Gong, Primordial black holes and second order gravitational waves from ultra-slow-roll inflation, https://doi.org/10.1088/1475-7516/2018/07/007JCAP 07 (2018) 007 [https://arxiv.org/abs/1707.095781707.09578]. Ballesteros:2018wlw G. Ballesteros, J. Beltran Jimenez and M. Pieroni, Black hole formation from a general quadratic action for inflationary primordial fluctuations, https://doi.org/10.1088/1475-7516/2019/06/016JCAP 06 (2019) 016 [https://arxiv.org/abs/1811.030651811.03065]. Dalianis:2018frf I. Dalianis, A. Kehagias and G. Tringas, Primordial black holes from -attractors, https://doi.org/10.1088/1475-7516/2019/01/037JCAP 01 (2019) 037 [https://arxiv.org/abs/1805.094831805.09483]. Bezrukov:2017dyv F. Bezrukov, M. Pauly and J. Rubio, On the robustness of the primordial power spectrum in renormalized Higgs inflation, https://doi.org/10.1088/1475-7516/2018/02/040JCAP 02 (2018) 040 [https://arxiv.org/abs/1706.050071706.05007]. Kannike:2017bxn K. Kannike, L. Marzola, M. Raidal and H. Veermäe, Single Field Double Inflation and Primordial Black Holes, https://doi.org/10.1088/1475-7516/2017/09/020JCAP 09 (2017) 020 [https://arxiv.org/abs/1705.062251705.06225]. Lin:2020goi J. Lin, Q. Gao, Y. Gong, Y. Lu, C. Zhang and F. Zhang, Primordial black holes and secondary gravitational waves from k and G inflation, https://doi.org/10.1103/PhysRevD.101.103515Phys. Rev. D 101 (2020) 103515 [https://arxiv.org/abs/2001.059092001.05909]. Lin:2021vwc J. Lin, S. Gao, Y. Gong, Y. Lu, Z. Wang and F. Zhang, Primordial black holes and scalar induced gravitational waves from Higgs inflation with noncanonical kinetic term, https://doi.org/10.1103/PhysRevD.107.043517Phys. Rev. D 107 (2023) 043517 [https://arxiv.org/abs/2111.013622111.01362]. Gao:2020tsa Q. Gao, Y. Gong and Z. Yi, Primordial black holes and secondary gravitational waves from natural inflation, https://doi.org/10.1016/j.nuclphysb.2021.115480Nucl. Phys. B 969 (2021) 115480 [https://arxiv.org/abs/2012.038562012.03856]. Gao:2021vxb Q. 
Gao, Primordial black holes and secondary gravitational waves from chaotic inflation, https://doi.org/10.1007/s11433-021-1708-9Sci. China Phys. Mech. Astron. 64 (2021) 280411 [https://arxiv.org/abs/2102.073692102.07369]. Yi:2020kmq Z. Yi, Y. Gong, B. Wang and Z.-h. Zhu, Primordial black holes and secondary gravitational waves from the Higgs field, https://doi.org/10.1103/PhysRevD.103.063535Phys. Rev. D 103 (2021) 063535 [https://arxiv.org/abs/2007.099572007.09957]. Yi:2020cut Z. Yi, Q. Gao, Y. Gong and Z.-h. Zhu, Primordial black holes and scalar-induced secondary gravitational waves from inflationary models with a noncanonical kinetic term, https://doi.org/10.1103/PhysRevD.103.063534Phys. Rev. D 103 (2021) 063534 [https://arxiv.org/abs/2011.106062011.10606]. Yi:2021lxc Z. Yi and Z.-H. Zhu, NANOGrav signal and LIGO-Virgo primordial black holes from the Higgs field, https://doi.org/10.1088/1475-7516/2022/05/046JCAP 05 (2022) 046 [https://arxiv.org/abs/2105.019432105.01943]. Yi:2022anu Z. Yi, Primordial black holes and scalar-induced gravitational waves from the generalized Brans-Dicke theory, https://doi.org/10.1088/1475-7516/2023/03/048JCAP 03 (2023) 048 [https://arxiv.org/abs/2206.010392206.01039]. Zhang:2020uek F. Zhang, Y. Gong, J. Lin, Y. Lu and Z. Yi, Primordial non-Gaussianity from G-inflation, https://doi.org/10.1088/1475-7516/2021/04/045JCAP 04 (2021) 045 [https://arxiv.org/abs/2012.069602012.06960]. Pi:2017gih S. Pi, Y.-l. Zhang, Q.-G. Huang and M. Sasaki, Scalaron from R^2-gravity as a heavy field, https://doi.org/10.1088/1475-7516/2018/05/042JCAP 05 (2018) 042 [https://arxiv.org/abs/1712.098961712.09896]. Kamenshchik:2018sig A.Y. Kamenshchik, A. Tronconi, T. Vardanyan and G. Venturi, Non-Canonical Inflation and Primordial Black Holes Production, https://doi.org/10.1016/j.physletb.2019.02.036Phys. Lett. B 791 (2019) 201 [https://arxiv.org/abs/1812.025471812.02547]. Fu:2019ttf C. Fu, P. Wu and H. Yu, Primordial Black Holes from Inflation with Nonminimal Derivative Coupling, https://doi.org/10.1103/PhysRevD.100.063532Phys. Rev. D 100 (2019) 063532 [https://arxiv.org/abs/1907.050421907.05042]. Fu:2019vqc C. Fu, P. Wu and H. Yu, Scalar induced gravitational waves in inflation with gravitationally enhanced friction, https://doi.org/10.1103/PhysRevD.101.023529Phys. Rev. D 101 (2020) 023529 [https://arxiv.org/abs/1912.059271912.05927]. Dalianis:2019vit I. Dalianis, S. Karydas and E. Papantonopoulos, Generalized Non-Minimal Derivative Coupling: Application to Inflation and Primordial Black Hole Production, https://doi.org/10.1088/1475-7516/2020/06/040JCAP 06 (2020) 040 [https://arxiv.org/abs/1910.006221910.00622]. Gundhi:2020zvb A. Gundhi and C.F. Steinwachs, Scalaron–Higgs inflation reloaded: Higgs-dependent scalaron mass and primordial black hole dark matter, https://doi.org/10.1140/epjc/s10052-021-09225-2Eur. Phys. J. C 81 (2021) 460 [https://arxiv.org/abs/2011.094852011.09485]. Cheong:2019vzl D.Y. Cheong, S.M. Lee and S.C. Park, Primordial black holes in Higgs-R^2 inflation as the whole of dark matter, https://doi.org/10.1088/1475-7516/2021/01/032JCAP 01 (2021) 032 [https://arxiv.org/abs/1912.120321912.12032]. Zhang:2021rqs F. Zhang, Primordial black holes and scalar induced gravitational waves from the E model with a Gauss-Bonnet term, https://doi.org/10.1103/PhysRevD.105.063539Phys. Rev. D 105 (2022) 063539 [https://arxiv.org/abs/2112.105162112.10516]. Zhang:2021vak F. Zhang, J. Lin and Y. 
Lu, Double-peaked inflation model: Scalar induced gravitational waves and primordial-black-hole suppression from primordial non-Gaussianity, https://doi.org/10.1103/PhysRevD.104.063515Phys. Rev. D 104 (2021) 063515 [https://arxiv.org/abs/2106.107922106.10792]. Kawai:2021edk S. Kawai and J. Kim, Primordial black holes from Gauss-Bonnet-corrected single field inflation, https://doi.org/10.1103/PhysRevD.104.083545Phys. Rev. D 104 (2021) 083545 [https://arxiv.org/abs/2108.013402108.01340]. Cai:2021wzd R.-G. Cai, C. Chen and C. Fu, Primordial black holes and stochastic gravitational wave background from inflation with a noncanonical spectator field, https://doi.org/10.1103/PhysRevD.104.083537Phys. Rev. D 104 (2021) 083537 [https://arxiv.org/abs/2108.034222108.03422]. Chen:2021nio P. Chen, S. Koh and G. Tumurtushaa, Primordial black holes and induced gravitational waves from inflation in the Horndeski theory of gravity, https://arxiv.org/abs/2107.086382107.08638. Zheng:2021vda R. Zheng, J. Shi and T. Qiu, On primordial black holes and secondary gravitational waves generated from inflation with solo/multi-bumpy potential *, https://doi.org/10.1088/1674-1137/ac42bdChin. Phys. C 46 (2022) 045103 [https://arxiv.org/abs/2106.043032106.04303]. Karam:2022nym A. Karam, N. Koivunen, E. Tomberg, V. Vaskonen and H. Veermäe, Anatomy of single-field inflationary models for primordial black holes, https://doi.org/10.1088/1475-7516/2023/03/013JCAP 03 (2023) 013 [https://arxiv.org/abs/2205.135402205.13540]. Ashoorioon:2019xqc A. Ashoorioon, A. Rostami and J.T. Firouzjaee, EFT compatible PBHs: effective spawning of the seeds for primordial black holes during inflation, https://doi.org/10.1007/JHEP07(2021)087JHEP 07 (2021) 087 [https://arxiv.org/abs/1912.133261912.13326]. Espinosa:2018eve J.R. Espinosa, D. Racco and A. Riotto, A Cosmological Signature of the SM Higgs Instability: Gravitational Waves, https://doi.org/10.1088/1475-7516/2018/09/012JCAP 09 (2018) 012 [https://arxiv.org/abs/1804.077321804.07732]. Lu:2019sti Y. Lu, Y. Gong, Z. Yi and F. Zhang, Constraints on primordial curvature perturbations from primordial black hole dark matter and secondary gravitational waves, https://doi.org/10.1088/1475-7516/2019/12/031JCAP 12 (2019) 031 [https://arxiv.org/abs/1907.118961907.11896]. Vaskonen:2020lbd V. Vaskonen and H. Veermäe, Did NANOGrav see a signal from primordial black hole formation?, https://doi.org/10.1103/PhysRevLett.126.051303Phys. Rev. Lett. 126 (2021) 051303 [https://arxiv.org/abs/2009.078322009.07832]. DeLuca:2020agl V. De Luca, G. Franciolini and A. Riotto, NANOGrav Data Hints at Primordial Black Holes as Dark Matter, https://doi.org/10.1103/PhysRevLett.126.041303Phys. Rev. Lett. 126 (2021) 041303 [https://arxiv.org/abs/2009.082682009.08268]. Ashton:2018jfp G. Ashton et al., BILBY: A user-friendly Bayesian inference library for gravitational-wave astronomy, https://doi.org/10.3847/1538-4365/ab06fcAstrophys. J. Suppl. 241 (2019) 27 [https://arxiv.org/abs/1811.020421811.02042]. NestedSampling J. Skilling, Nested Sampling, https://doi.org/10.1063/1.1835238AIP Conf. Proc. 735 (2004) 395. Moore:2021ibq C.J. Moore and A. Vecchio, Ultra-low-frequency gravitational waves from cosmological and astrophysical processes, https://doi.org/10.1038/s41550-021-01489-8Nature Astron. 5 (2021) 1268 [https://arxiv.org/abs/2104.151302104.15130].
http://arxiv.org/abs/2307.04403v1
20230710081123
$φ(2170)$ decaying to $φππ$ and $φK\bar{K}$
[ "Yun-Hua Chen" ]
hep-ph
[ "hep-ph", "hep-ex", "nucl-th" ]
http://arxiv.org/abs/2307.04324v1
20230710033812
Study of the $B^-\to K^-ηη_c$ decay due to the $D\bar{D}$ bound state
[ "Xin-Qiang Li", "Li-Juan Liu", "En Wang", "Le-Le Wei" ]
hep-ph
[ "hep-ph" ]
[email protected] Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOE), Central China Normal University, Wuhan, Hubei 430079, China Center for High Energy Physics, Peking University, Beijing 100871, China [email protected] School of Physics and Microelectronics, Zhengzhou University, Zhengzhou, Henan 450001, China [email protected] School of Physics and Microelectronics, Zhengzhou University, Zhengzhou, Henan 450001, China [email protected] Institute of Particle Physics and Key Laboratory of Quark and Lepton Physics (MOE), Central China Normal University, Wuhan, Hubei 430079, China We study the B^- → K^- ηη_c decay by taking into account the S-wave contributions from the pseudoscalar meson–pseudoscalar meson interactions within the unitary coupled-channel approach, where the DD̅ bound state is dynamically generated. In addition, the contribution from the intermediate resonance K_0^*(1430), with K_0^*(1430)→ K^-η, is also considered. Our results show that there is a clear peak around 3730 MeV in the ηη_c invariant mass distribution, which could be associated with the D D̅ bound state. The future precise measurements of the B^- → K^- ηη_c process at the Belle II and LHCb experiments could be, therefore, used to check the existence of the D D̅ bound state, and to deepen our understanding of the hadron-hadron interactions. Study of the B^- → K^- ηη_c decay due to the DD̅ bound state Le-Le Wei ============================================================ § INTRODUCTION Since the discovery of X(3872) by the Belle Collaboration in 2003 <cit.>, many exotic states, which do not fit into the expectations of the conventional quark models, have been observed experimentally during the past two decades <cit.>. Many of these exotic states, especially the ones observed in the charmonium sector, are observed around the threshold of a pair of heavy hadrons; some of them, such as X(3872) <cit.>, Z_c(3900) <cit.> and X(4160) <cit.>, can be explained as the hadronic molecules. However, the hadronic molecular states with mass near the D D̅ threshold have not yet been observed experimentally, and further detailed studies are therefore required both theoretically and experimentally <cit.>. In Ref. <cit.>, by taking into account the ππ, K K̅, D D̅, D_s D̅_s, ηη, and ηη_c coupled channels, the authors predicted a narrow hidden charm resonance with quantum numbers I(J^PC)=0(0^++) and mass around 3700 MeV [denoted as X(3700) throughout this paper] within the unitary coupled-channel approach. Furthermore, by considering the η_c as a pure c c̅ state and the η–η^' mixing, together with the same parameters as used in Ref. <cit.>, the pole of the new X(3700) state was predicted to be √(s)=(3722-i18) MeV within the unitary coupled-channel approach <cit.>. The mass of the D D̅ bound state predicted by other different models is also basically around the threshold of D D̅ <cit.>, and the theoretical studies of the experimental measurements of the processes e^+ e^- → J/ψ D D̅ <cit.>, B^+ → D^0 D̅^0 K^+ <cit.> and γγ→ D D̅ <cit.> all support the existence of such a D D̅ bound state. Meanwhile, some processes have also been suggested to search for the D D̅ bound state, such as ψ(3770) →γ X(3700) →γηη^', ψ(4040) →γ X(3700) →γηη^', e^+ e^- → J/ψ X(3700) → J/ψηη^' <cit.>, ψ(3770) →γ D D̅ <cit.>, and Λ_b →Λ D D̅ <cit.>. 
It is worth mentioning that the BESIII Collaboration has recently searched for the X(3700) in the ψ(3770) →γηη^' decay for the first time, observing however no significant signals due to the low detection efficiencies of the photons <cit.>. Although the DD̅ bound state X(3700) couples mainly to the D D̅ and D_s D̅_s channels, it is not easy to search for any signals of the state in these systems. This is due to the fact that, since its mass is a little bit lower than the D D̅ threshold, the X(3700) state would manifest itself as a near-threshold enhancement in the D D̅ invariant mass distributions, which may be difficult to identify due to the low detection efficiencies near the threshold. On the other hand, the X(3700) state has also a sizeable coupling to the ηη_c channel, as observed in Refs. <cit.>. Since the ηη_c threshold is about 200 MeV lower than the predicted mass of X(3700), one expects that, if the D D̅ bound state exists, a clear peak near the D D̅ threshold would appear in the ηη_c invariant mass distributions of some processes with large phase space. As is well known, the three-body weak decays of the B mesons involve more complicated dynamics than the two-body decays and can, therefore, provide a wealth of information about the meson-meson interactions and hadron resonances <cit.> (see e.g. Ref. <cit.> for a recent review). For instance, the B → K + X/Y/Z decay is an ideal process to produce the charmoniumlike hadronic molecular states <cit.>, and many exotic states have been observed experimentally through the B-meson weak decays during the past few years, such as Z_cs(4000) and Z_cs(4220)  <cit.>, X(4140) <cit.> in B^+ → J/ψϕ K^+, as well as X_0(2900) and X_1(2900) in B^+ → D^+ D^- K^+ decay <cit.>. In this paper, we propose to search for the D D̅ bound state X(3700) in the B^- → K^- ηη_c decay. It is worth mentioning that the Belle Collaboration has already searched for the process in 2015 based on 772×10^6 BB̅ pairs collected at the Υ(4S) resonance <cit.>, and no significant signal of the D D̅ bound state was observed due to insufficient statistics. However, the Belle II Collaboration will accumulate about 50 times the Belle data set <cit.>, and is expected to make the further precise measurements of the B^- → K^- ηη_c decay, which will shed more light on the existence of the D D̅ bound state in this process. In addition, the authors of Ref. <cit.> have suggested to search for the D D̅ bound state in the ηη_c mass distribution of the B^+ → K^+ ηη_c decay, and predicted a branching ratio of ℬ(B^+ → ( X_q q̅→η_c η ) K^+ )= ( 0.9 ∼ 6.7) × 10^-4. In this paper, motivated by the observations made above, we study the B^- → K^- ηη_c decay by taking into account the pseudoscalar meson–pseudoscalar interactions within the chiral unitary approach, where the DD̅ bound state is generated dynamically. On the other hand, the B^- → K^- ηη_c decay can also proceed through the subsequent decay of the intermediate resonance K^*_0(1430), i.e. K^*_0(1430) → K η, whose contribution will be considered in this paper too. We will demonstrate that, besides a peak of K_0^*(1430) in the K^-η invariant mass distribution, there is a clear peak around 3730 MeV in the ηη_c invariant mass distribution, which could be associated with the D D̅ bound state. Therefore, future precise measurements of the B^- → K^- ηη_c decay at the Belle II and LHCb experiments could be used to check the existence of the D D̅ bound state, and to deepen our understanding of the hadron-hadron interactions. 
This paper is organized as follows. In Sec. <ref>, we will firstly introduce our formalism for the B^- → K^- ηη_c decay. Our numerical results and discussions are then presented in Sec. <ref>. In Sec. <ref>, we give our final conclusion. § FORMALISM In analogy to the discussions made in Refs. <cit.>, the B^- → K^- ηη_c decay proceeds via the following three steps: the weak decay, the hadronization and the final state interactions. Explicitly, the b quark of the B^- meson firstly decays into a c quark and a W^- boson, and then the W^- boson turns into a c̅ s pair. In order to give rise to the K^- ηη_c final state, the u̅ antiquark of the initial B^- meson and the c̅ s pair from the W^- subsequent decay have to hadronize together with the q̅ q (≡u̅ u + d̅ d + s̅ s) created from the vacuum with the quantum numbers J^PC=0^++. The relevant quark level diagrams can be classified as the internal W^- emission mechanisms and external W^- emission mechanisms, as depicted in Figs. <ref>(a)–(b) and <ref>(c)–(d), respectively. Here we have neglected all the CKM suppressed diagrams that are proportional to the CKM element V_ub. The meson-meson systems formed by the hadronization of q_i, q̅_j and q̅_k q_k are given by ∑^3_k=1q_i(q̅_k q_k)q̅_j=∑^3_k=1M_ikM_kj=(M^2)_ij, with the SU(4) q q̅ matrix defined as M=( [ uu̅ ud̅ us̅ uc̅; du̅ dd̅ ds̅ dc̅; su̅ sd̅ ss̅ sc̅; cu̅ cd̅ cs̅ cc̅ ]), which could be expressed in terms of the physical pseudoscalar mesons as <cit.>, M = ( [ π^0/√(2)+ η/√(3)+η^'/√(6) π^+ K^+ D̅^0; π^- -π^0/√(2)+η/√(3)+η^'/√(6) K^0 D^-; K^- K̅^0 -η/√(3) +√(2/3)η^' D_s^-; D^0 D^+ D_s^+ η_c ]). Thus, by isolating the meson K^-, one could easily obtain the components of the meson systems for Figs. <ref>(a) and  <ref>(b) as follows: | H ⟩^a = V_p V_cb V_cs^∗ c(u̅ u + d̅ d + s̅ s) c̅su̅ = V_p V_cb V_cs^∗(M^2)_44 K^- = V_p V_cb V_cs^∗( D^0 D̅^0 + D^+ D^- + D_s^+ D_s^- ) K^-, | H ⟩^b = V_p V_cb V_cs cc̅s(u̅ u + d̅ d + s̅ s) u̅ = V_p V_cb V_cs^∗(M^2)_31η_c = V_p V_cb V_cs^∗( 1/√(2)K^- π^0 + 3/√(6)K^- η^') η_c, where V_cb=0.04182 and V_cs=0.97349 are the elements of the CKM matrix, and V_p encodes all the remaining factors arising from the production vertex. Then, the final state interactions of DD̅, D_sD̅_s, and η'η_c will dynamically generate the DD̅ bound state, which could decay into ηη_c system. Here we do not consider the component K^-π^0η_c, since the isospin of the π^0η_c system is I=1. Similarly, we can write the hadron components for Figs. <ref>(c) and  <ref>(d) that could couple to the K^-ηη_c system as follows: | H ⟩^c = V_p V_cb V_cs^∗× C ×( K^- D_s^+ ) D_s^-, | H ⟩^d = V_p V_cb V_cs^∗× C ×( K^- D̅^0 ) D^0, where we have introduced the color factor C to account for the relative weight of the external W^- emission mechanisms with respect to the internal W^- emission mechanism, and will take C=3 in the case of color number N_C=3, as done in Refs. <cit.>. According to the above discussions, the K^- ηη_c final state could not be produced directly through the tree-level diagrams of the B^- decay, but can via the final state interactions of the coupled channels D^0 D̅^0, D^+ D^-, D_s^+ D_s^-, and η'η_c, which could then generate the DD̅ bound state, as shown in Fig. <ref>. The total amplitude of Fig. <ref> can be expressed as 𝒯_X = V_p V_cb V_cs^∗[ G_D^+ D^- t_D^+ D^- →ηη_c. . + (1+C) × G_D^0 D̅^0 t_D^0 D̅^0 →ηη_c. . + (1+C) × G_D_s^+ D_s^- t_D_s^+ D_s^- →ηη_c. . 
+ 3/√(6)× G_η'η_c t_η'η_c →ηη_c], where G_l is the loop function for the two-meson propagator in the l-th channel, and its explicit expression is given by <cit.> G_l = i ∫d^4 q/(2π)^41/q^2 - m_1^2 + iϵ1/(P-q)^2 - m_2^2 + iϵ = 1/16π^2[α_l + lnm_1^2/μ^2 + m_2^2 - m_1^2 + s/2slnm_2^2/m_1^2. + p/√(s)×(lns - m_2^2 + m_1^2 + 2p√(s)/-s + m_2^2 - m_1^2 + 2p √(s). . . + lns + m_2^2 - m_1^2 + 2p√(s)/-s - m_2^2 + m_1^2 + 2p √(s)) ], with the subtraction constant α_l= -1.3 for the coupled channels D^+ D^-, D^0 D̅^0, D_s^+ D_s^-, and η^'η_c, and μ= 1500 MeV, being the same as used in Ref. <cit.>. √(s)=M_ηη_c is the invariant mass of the two mesons in the l-th channel, and m_1 and m_2 are the mass of these two mesons. P is the total four-momentum of the two mesons in the l-th channel, and p is the magnitude of the three-momentum of each meson in the meson-meson center of mass frame, with p = λ^1/2( s, m_1^2, m_2^2 )/2 √(s), where λ(x,y,z) = x^2 + y^2 + z^2 - 2xy - 2yz -2zx is the Källen function. The transition amplitudes in Eq. (<ref>) can be generically written as t_j → k = g_j × g_k/M_ηη_c^2 - M_X(3700)^2 + i M_X(3700)Γ_X(3700), where the mass M_X(3700) = 3722 MeV, the width Γ_X(3700) = 36 MeV, and the coupling constants g_j are taken from Ref. <cit.>. For convenience, we also show in Table <ref> the values of these couplings. On the other hand, the B^- → K^- ηη_c decay could also proceed via the intermediate excited kaon mesons. According to the Dalitz plot shown in Fig. <ref>, one can see that only the well-established resonance K^*_0(1430) could contribute to this process, since the K^*_0(1430) couples to the channel K^-η in an S-wave way with a branching fraction ℬ(K^*_0(1430)→ Kη)=(8.6^+2.7_-3.4)% <cit.>. Therefore, in this paper, we will neglect all the other excited kaon mesons, and only take into account the contribution from the intermediate K^*_0(1430) as shown by Fig. <ref>, whose amplitude can be expressed as 𝒯_K^*_0 = V_p×β× M_K^*_0(1430)^2/M_K^- η^2 - M_K^*_0(1430)^2 + i M_K^*_0(1430)Γ_K^*_0(1430), where the parameter β stands for the relative weight of the K^*_0(1430) contribution with respect to that of the DD̅ bound state X(3700), and M_K^- η is the invariant mass of the K^- η system. We will take as input M_K^*_0(1430) = 1425 MeV and Γ_K^*_0(1430) = 270 MeV <cit.>. With the amplitudes of Eqs. (<ref>) and (<ref>) at hand, the doubly differential decay width of the B^- → K^- ηη_c process can be written as d^2 Γ/dM_ηη_cdM_K^- η = 1/(2 π)^3M_ηη_c M_K^- η/8 M_B^-^3|𝒯_X + 𝒯_K^*_0|^2. The differential decay width dΓ/dM_ηη_c can then be obtained by integrating Eq. (<ref>) over the K^- η invariant mass M_K^- η, whose integration range is given by ( M^2_K^- η)_min = ( E_K^-^* + E_η^* )^2 - ( √(E_η^*2 - m_η^2) + √(E_K^-^*2 - m_K^-^2))^2, ( M^2_K^- η)_max = ( E_K^-^* + E_η^* )^2 - ( √(E_η^*2 - m_η^2) - √(E_K^-^*2 - m_K^-^2))^2, where E_K^-^* and E_η^* are the energies of K^- and η in the ηη_c rest frame, respectively. Explicitly, we have E_K^-^* = M^2_B^- - M^2_ηη_c - M^2_K^-/2 M_ηη_c, E_η^* = M^2_ηη_c - M^2_η_c + M^2_η/2 M_ηη_c. Similarly, we can also obtain the differential decay width dΓ/dM_K^- η by integrating Eq. (<ref>) over the ηη_c invariant mass M_ηη_c, and the range of integration can be obtained by exchanging K^- and η_c in Eqs. (<ref>)–(<ref>). Finally, by integrating the differential width dΓ/dM_ηη_c (dΓ/dM_K^- η) over M_ηη_c (M_K^- η), we can obtain the partial decay width of the B^- → K^- ηη_c process, Γ = ∫dM_ηη_c∫dM_K^- η1/(2 π)^3M_ηη_c M_K^- η/8 M_B^-^3|𝒯_X + 𝒯_K^*_0|^2. 
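Since every ingredient entering the expressions above is explicit, the invariant mass distributions can be obtained by a direct numerical integration. The following minimal sketch (ours, not the authors' code) evaluates dΓ/dM_ηη_c for a user-supplied squared amplitude |𝒯_X + 𝒯_K^*_0|^2; the masses are rounded PDG values in MeV and the function names are illustrative.

import numpy as np

MB, MK, META, METAC = 5279.3, 493.7, 547.9, 2983.9  # approximate PDG masses (MeV)

def mKeta_limits(m23):
    """Kinematic limits of M(K- eta) at fixed M(eta eta_c) = m23,
    from the energies of K- and eta in the eta-eta_c rest frame."""
    EK   = (MB**2 - m23**2 - MK**2) / (2.0 * m23)
    Eeta = (m23**2 - METAC**2 + META**2) / (2.0 * m23)
    pK   = np.sqrt(max(EK**2 - MK**2, 0.0))
    peta = np.sqrt(max(Eeta**2 - META**2, 0.0))
    m2min = (EK + Eeta)**2 - (peta + pK)**2
    m2max = (EK + Eeta)**2 - (peta - pK)**2
    return np.sqrt(m2min), np.sqrt(m2max)

def dGamma_dM23(m23, amp2, npts=400):
    """dGamma/dM(eta eta_c): integrate the double-differential width over
    M(K- eta). amp2(m23, m12) must return |T_X + T_K0*|^2 on NumPy arrays."""
    lo, hi = mKeta_limits(m23)
    m12 = np.linspace(lo, hi, npts)
    f = m23 * m12 / (8.0 * MB**3) / (2.0 * np.pi)**3 * amp2(m23, m12)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(m12)))  # trapezoid rule

A second trapezoid integration over M_ηη_c between m_η + m_η_c and M_B^- - m_K^- then yields the partial width Γ of the last equation.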
Here all the meson masses involved are taken from the Particle Data Group <cit.>. § RESULTS AND DISCUSSION In our model, we have two free parameters, V_p and β. The parameter V_p is a global factor and its value does not affect the shapes of the ηη_c and K^- η invariant mass distributions, and thus we take V_p=1 for simplicity. The parameter β represents the relative weight of the contribution from K^*_0(1430) with respect to that from X(3700), and we take the default value β=0.004 in order to make the contributions from X(3700) and K^*_0(1430) within the same order of magnitude. Firstly, we show in Fig. <ref> the normalized ηη_c and K^- η invariant mass distributions with β=0.004. One can see a clear peak around 3730 MeV in the ηη_c invariant mass distribution, which should be associated with the D D̅ bound state X(3700). In addition, a K^*_0(1430) signal appears in the K^- η invariant mass distribution, but gives rise to a smooth shape in the ηη_c invariant mass distribution and does not affect the peak structure of the X(3700) significantly. It should be stressed that the line shape of the X(3700) in the ηη_c invariant mass distribution is different from that of a Breit-Wigner form, which is a typical feature of the DD̅ molecular state. We also show in Fig. <ref> the Dalitz plot for the B^- → K^- ηη_c decay in the (M_ηη_c^2, M_K^- η^2) plane, where one can see two clear bands corresponding to the X(3700) and K^*_0(1430) resonances, respectively. The value of the parameter β is unknown, and could be determined if the experimental measurements of the B^- → K^- ηη_c decay are available in the future. In order to study the dependence of our results on β, we show in Fig. <ref> the predicted ηη_c and K^- η (b) invariant mass distributions of the process with three different values of β = 0.003, 0.004, 0.005. One can see that the peak of the K^*_0(1430) resonance in the K^- η invariant mass distribution becomes more significant when the value of β increases. The signal corresponding to the D D̅ bound state X(3700) is, however, always clear in the ηη_c invariant mass distribution. On the other hand, the value of the color factor C, which represents the relative weight of the external W^- emission mechanism with respect to the internal W^- emission mechanism, could vary around 3 in order to account for the potential nonfactorizable contributions <cit.>. To this end, we show in Fig. <ref> the normalized ηη_c and K^- η invariant mass distributions of the B^- → K^- ηη_c decay by taking three different values of C = 3.0, 2.5, 2.0. One can see that, although the peak of the X(3700) state in the ηη_c invariant mass distribution becomes weaker when the value of C decreases, its signal is still clear and will be easy to be distinguished from the background contribution. Meanwhile, the peak of the K^*_0(1430) resonance in the K^-η invariant mass distribution has little changes for these three different values of the parameter C, because the contribution from the DD̅ bound state is smooth around the peak of K^*_0(1430) in the K^-η invariant mass distribution. From the above analyses, one can find that within the variation ranges of the two free parameters, there is always a clear peak around 3730 MeV in the ηη_c invariant mass distribution, which corresponds to the D D̅ bound state. 
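For completeness, the β and C dependence discussed above can be made reproducible with a schematic implementation of the two amplitudes. The sketch below (again ours) only hard-codes the resonance parameters quoted in the text; the loop functions G_l and the couplings g_j of the table are not reproduced here and must be supplied as external inputs. Note that the width Γ_X(3700) = 36 MeV is simply twice the 18 MeV imaginary part of the pole position quoted in the Introduction.

import numpy as np

MX, GX   = 3722.0, 36.0    # X(3700) mass and width (MeV)
MK0, GK0 = 1425.0, 270.0   # K0*(1430) mass and width (MeV)
Vp, Vcb, Vcs = 1.0, 0.04182, 0.97349

def t_bw(m23, gj, gk):
    """Transition amplitude t_{j -> eta eta_c} in the Breit-Wigner-like form."""
    return gj * gk / (m23**2 - MX**2 + 1j * MX * GX)

def T_X(m23, G, g, C=3.0):
    """Coherent sum over the D+D-, D0 D0bar, Ds+Ds- and eta' eta_c channels.
    G: dict of loop functions G_l(m23); g: dict of couplings g_l (external inputs)."""
    gout = g["etaetac"]
    return Vp * Vcb * Vcs * (
        G["DpDm"](m23) * t_bw(m23, g["DpDm"], gout)
        + (1 + C) * G["D0D0b"](m23) * t_bw(m23, g["D0D0b"], gout)
        + (1 + C) * G["DsDs"](m23) * t_bw(m23, g["DsDs"], gout)
        + 3.0 / np.sqrt(6.0) * G["etapetac"](m23) * t_bw(m23, g["etapetac"], gout))

def T_K0star(m12, beta=0.004):
    """Intermediate K0*(1430) contribution in the K- eta channel."""
    return Vp * beta * MK0**2 / (m12**2 - MK0**2 + 1j * MK0 * GK0)

Feeding |T_X + T_K0star|^2 into the integration sketch of the previous section, and rerunning it for β = 0.003, 0.004, 0.005 or C = 2.0, 2.5, 3.0, reproduces the trends described above.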
Thus, we suggest strongly that our experimental colleagues can perform more precise measurements of the B^- → K^- ηη_c decay at the Belle II and LHCb experiments in the future, which is very important for confirming the existence of the predicted D D̅ bound state. § CONCLUSIONS In this paper, motivated by the theoretical predictions for the DD̅ bound state, we propose to search for this state in the B^- → K^- ηη_c decay. To this end, we have investigated the process within the unitary coupled-channel approach, by taking into account the contributions from the S-wave pseudoscalar meson–pseudoscalar meson interactions, which can dynamically generate the DD̅ bound state X(3700). We have also taken into account the contribution from the intermediate resonance K^*_0(1430), since it couples to the Kη channel in an S-wave way with a branching fraction of ℬ(K^*_0(1430)→ Kη)=(8.6^+2.7_-3.4)%. Our results show that a clear peak appears around 3730 MeV in the ηη_c invariant mass distribution, which should be associated with the DD̅ bound state. It should be stressed that the line shape of the DD̅ bound state is significantly different from that of a Breit-Winger form, which is a typical feature of the DD̅ molecular state. On the other hand, one can also find the peak of the resonance K^*_0(1430) in the K^-η invariant mass distribution, and the resonance gives a smooth contribution in the ηη_c invariant mass distribution. In summary, we strongly encourage our experimental colleagues to perform a more precise measurement of the B^- → K^- ηη_c decay at the Belle II and LHCb experiments in the future, which will be very helpful to confirm the existence of the predicted D D̅ bound state, as well as to deepen our understanding of the hadron-hadron interactions. § ACKNOWLEDGEMENTS This work is supported by the National Natural Science Foundation of China under Grant Nos. 12135006, 12075097 and 12192263, the Natural Science Foundation of Henan under Grand Nos. 222300420554 and 232300421140, the Project of Youth Backbone Teachers of Colleges and Universities of Henan Province (2020GGJS017), the Youth Talent Support Project of Henan (2021HYTP002), the Open Project of Guangxi Key Laboratory of Nuclear Physics and Nuclear Technology (No. NLK2021-08), as well as the Fundamental Research Funds for the Central Universities under Grant Nos. CCNU19TD012 and CCNU22LJ004. 99 Belle:2003nnu S. K. Choi et al. [Belle], Observation of a narrow charmonium-like state in exclusive B^±→ K^±π^+ π^- J/ψ decays, Phys. Rev. Lett. 91 (2003), 262001. ParticleDataGroup:2022pth R. L. Workman et al. [Particle Data Group], Review of Particle Physics, PTEP 2022 (2022), 083C01. Pakvasa:2003ea S. Pakvasa and M. Suzuki, On the hidden charm state at 3872 MeV, Phys. Lett. B 579 (2004), 67-73. Chen:2015ata W. Chen, T. G. Steele, H. X. Chen and S. L. Zhu, Mass spectra of Z_c and Z_b exotic states as hadron molecules, Phys. Rev. D 92 (2015), 054002. Molina:2009ct R. Molina and E. Oset, The Y(3940), Z(3930) and the X(4160) as dynamically generated resonances from the vector-vector interaction, Phys. Rev. D 80 (2009), 114013. Guo:2017jvc F. K. Guo, C. Hanhart, U. G. Meißner, Q. Wang, Q. Zhao and B. S. Zou, Rev. Mod. Phys. 90 (2018) no.1, 015004 [erratum: Rev. Mod. Phys. 94 (2022) no.2, 029901]. Gamermann:2006nm D. Gamermann, E. Oset, D. Strottman and M. J. Vicente Vacas, Dynamically generated open and hidden charm meson systems, Phys. Rev. D 76 (2007), 074016. Gamermann:2009ouq D. Gamermann, E. Oset and B. S. 
Zou, The radiative decay of ψ(3770) into the predicted scalar state X(3700), Eur. Phys. J. A 41 (2009), 85-91. Prelovsek:2020eiw S. Prelovsek, S. Collins, D. Mohler, M. Padmanath and S. Piemonte, Charmonium-like resonances with J^PC = 0^++, 2^++ in coupled D D̅, D_s D̅_s scattering on the lattice, JHEP 06 (2021), 035. Dong:2021bvy X. K. Dong, F. K. Guo and B. S. Zou, A survey of heavy–heavy hadronic molecules, Commun. Theor. Phys. 73 (2021), 125201. Chen:2021erj H. X. Chen, Hadronic molecules in B decays, Phys. Rev. D 105 (2022) 9, 094003. Shi:2021hzm P. P. Shi, Z. H. Zhang, F. K. Guo and Z. Yang, D^+ D^- hadronic atom and its production in pp and p p̅ collisions, Phys. Rev. D 105 (2022), 034024. Xin:2022bzt Q. Xin, Z. G. Wang and X. S. Yang, Analysis of the X(3960) and related tetraquark molecular states via the QCD sum rules, AAPPS Bull. 32 (2022) 1, 37. Peng:2023lfw F. Z. Peng, M. J. Yan and M. Pavon Valderrama, Heavy- and light-flavor symmetry partners of the T_cc^+(3875), the X(3872) and the X(3960) from light-meson exchange saturation, [arXiv:2304.13515 [hep-ph]]. Gamermann:2007mu D. Gamermann and E. Oset, Hidden charm dynamically generated resonances and the e^+ e^- → J/ψ D D̅, J/ψ D D̅^* reactions, Eur. Phys. J. A 36 (2008), 189-194. Wang:2019evy E. Wang, W. H. Liang and E. Oset, Analysis of the e^+e^- → J/ψ D D̅ reaction close to the threshold concerning claims of a χ_c0(2P) state, Eur. Phys. J. A 57 (2021), 38. Belle:2017egg K. Chilikin et al. [Belle], Observation of an alternative χ_c0(2P) candidate in e^+ e^- → J/ψ D D̅, Phys. Rev. D 95 (2017), 112003. Dai:2015bcc L. R. Dai, J. J. Xie and E. Oset, B^0 → D^0 D̅^0 K^0 , B^+ → D^0 D̅^0 K^+ , and the scalar D D̅ bound state, Eur. Phys. J. C 76 (2016) 3, 121. Belle:2005rte S. Uehara et al. [Belle], Observation of a χ^'_c2 candidate in γγ→ D D̅ production at BELLE, Phys. Rev. Lett. 96 (2006), 082003. BaBar:2010jfn B. Aubert et al. [BaBar], Observation of the χ_c2(2P) meson in the reaction γγ→ D D̅ at BaBar, Phys. Rev. D 81 (2010), 092003. Deineka:2021aeu O. Deineka, I. Danilkin and M. Vanderhaeghen, Dispersive analysis of the γγ→ D D̅ data and the confirmation of the D D̅ bound state, Phys. Lett. B 827 (2022), 136982. Wang:2020elp E. Wang, H. S. Li, W. H. Liang and E. Oset, Analysis of the γγ→ DD̅ reaction and the DD̅ bound state, Phys. Rev. D 103 (2021), 054008. Xiao:2012iq C. W. Xiao and E. Oset, Three methods to detect the predicted D D̅ scalar meson X(3700), Eur. Phys. J. A 49 (2013), 52. Dai:2020yfu L. Dai, G. Toledo and E. Oset, Searching for a D D̅ bound state with the ψ (3770) →γ D^0 D̅^0 decay, Eur. Phys. J. C 80 (2020) 6, 510. Wei:2021usz L. L. Wei, H. S. Li, E. Wang, J. J. Xie, D. M. Li and Y. X. Li, Search for a D D̅ bound state in the Λ_b →Λ DD̅ process, Phys. Rev. D 103 (2021), 114013. BESIII:2023bgk M. Ablikim et al. [BESIII], Search for a scalar partner of the X(3872) via ψ(3770) decays into γηη' and γπ^+π^- J/ψ, [arXiv:2305.11682 [hep-ex]]. Xing:2022uqu Z. P. Xing, F. Huang and W. Wang, Angular distributions for Λ_b →Λ^*_J (p K^-) J/ψ (→ℓ^+ ℓ^-) decays, Phys. Rev. D 106 (2022), 114041. Duan:2023qsg M. Y. Duan, E. Wang and D. Y. Chen, Searching for the open flavor tetraquark T^++_cs̅0(2900) in the process B^+→ K^+ D^+ D^-, [arXiv:2305.09436 [hep-ph]]. Lyu:2023jos W. T. Lyu, Y. H. Lyu, M. Y. Duan, D. M. Li, D. Y. Chen and E. Wang, The roles of the T_cs̅0(2900)^0 and D_0^*(2300) in the process B^-→ D_s^+K^-π^-, [arXiv:2306.16101 [hep-ph]]. Bediaga:2020qxg I. Bediaga and C. 
Göbel, Direct CP violation in beauty and charm hadron decays, Prog. Part. Nucl. Phys. 114, 103808 (2020). Wang:2021aql F. L. Wang, X. D. Yang, R. Chen and X. Liu, Correlation of the hidden-charm molecular tetraquarks and the charmoniumlike structures existing in the B→ XYZ+K process, Phys. Rev. D 104 (2021), 094010. Dai:2018nmw L. R. Dai, G. Y. Wang, X. Chen, E. Wang, E. Oset and D. M. Li, The B^+→ J/ψω K^+ reaction and D^∗D̅^∗ molecular states, Eur. Phys. J. A 55 (2019) no.3, 36. Zhang:2020rqr Y. Zhang, E. Wang, D. M. Li and Y. X. Li, Search for the D^*D̅^* molecular state Z_c(4000) in the reaction B^-→ J/ψρ^0 K^-, Chin. Phys. C 44 (2020) no.9, 093107. Wang:2017mrt E. Wang, J. J. Xie, L. S. Geng and E. Oset, Analysis of the B^+→ J/ψϕ K^+ data at low J/ψϕ invariant masses and the X(4140) and X(4160) resonances, Phys. Rev. D 97 (2018), 014017. LHCb:2021uow R. Aaij et al. [LHCb], Observation of New Resonances Decaying to J/ψ K^+ and J/ψϕ, Phys. Rev. Lett. 127 (2021), 082001. CDF:2009jgo T. Aaltonen et al. [CDF], Evidence for a Narrow Near-Threshold Structure in the J/ψϕ Mass Spectrum in B^+→ J/ψϕ K^+ Decays, Phys. Rev. Lett. 102 (2009), 242002. D0:2013jvp V. M. Abazov et al. [D0], Search for the X(4140) state in B^+ → J/ψϕ K^+ decays with the D0 Detector, Phys. Rev. D 89 (2014), 012004. LHCb:2020bls R. Aaij et al. [LHCb], A model-independent study of resonant structure in B^+→ D^+D^-K^+ decays, Phys. Rev. Lett. 125 (2020), 242001. LHCb:2020pxc R. Aaij et al. [LHCb], Amplitude analysis of the B^+→ D^+D^-K^+ decay, Phys. Rev. D 102 (2020), 112003. Belle:2015yoa A. Vinokurova et al. [Belle], Search for B decays to final states with the η_c meson, JHEP 06 (2015), 132 [erratum: JHEP 02 (2017), 088]. Belle-II:2018jsg E. Kou et al. [Belle-II], The Belle II Physics Book, PTEP 2019 (2019), 123C01 [erratum: PTEP 2020 (2020), 029201]. Bhardwaj:2018ffc V. Bhardwaj [Belle-II], Prospects in spectroscopy with Belle II, Springer Proc. Phys. 234 (2019), 181-187. Xie:2022lyw J. M. Xie, M. Z. Liu and L. S. Geng, Production rates of D_s^+ D_s^- and D D̅ molecules in B decays, Phys. Rev. D 107 (2023), 016003. Wang:2020pem Z. Wang, Y. Y. Wang, E. Wang, D. M. Li and J. J. Xie, The scalar f_0(500) and f_0(980) resonances and vector mesons in the single Cabibbo-suppressed decays Λ_c → p K^+K^- and pπ^+π^-, Eur. Phys. J. C 80 (2020) 9, 842. Wang:2021naf J. Y. Wang, M. Y. Duan, G. Y. Wang, D. M. Li, L. J. Liu and E. Wang, The a_0(980) and f_0(980) in the process D_s^+ → K^+ K^- π^+, Phys. Lett. B 821 (2021), 136617. Liu:2020ajv W. Y. Liu, W. Hao, G. Y. Wang, Y. Y. Wang, E. Wang and D. M. Li, Resonances X(4140), X(4160), and P_cs(4459) in the decay of Λ_b→ J/ψΛϕ, Phys. Rev. D 103 (2021), 034019. Duan:2020vye M. Y. Duan, J. Y. Wang, G. Y. Wang, E. Wang and D. M. Li, Role of scalar a_0(980) in the single Cabibbo suppressed process D^+ →π ^+π ^0η, Eur. Phys. J. C 80 (2020) 11, 1041. Zhang:2022xpf H. Zhang, Y. H. Lyu, L. J. Liu and E. Wang, Role of the scalar f_0(980) in the process D_s^+ →π^+π^0π^0, Chin. Phys. C 47 (2023) no.4, 043101. Li:2020fqp X. C. Feng, L. L. Wei, M. Y. Duan, E. Wang and D. M. Li, The a_0(980) in the single Cabibbo-suppressed process Λ_c →π^0η p, [arXiv:2009.08600 [hep-ph]]. Ali:1998eb A. Ali, G. Kramer and C. D. Lu, Experimental tests of factorization in charmless nonleptonic two-body B decays, Phys. Rev. D 58, 094009 (1998).
http://arxiv.org/abs/2307.04892v1
20230710203027
Entity Identifier: A Natural Text Parsing-based Framework For Entity Relation Extraction
[ "El Mehdi Chouham", "Jessica López Espejel", "Mahaman Sanoussi Yahaya Alassan", "Walid Dahhane", "El Hassane Ettifouri" ]
cs.CL
[ "cs.CL" ]
1 .001 Entity Identifier: A Natural Text Parsing-based Framework For Entity Relation Extraction mode = title]Entity Identifier: A Natural Text Parsing-based Framework For Entity Relation Extraction [1] [1]<tnote text> 1]El Mehdi Chouham[] [email protected] 1]Jessica López Espejel[ orcid=0000-0001-6285-0770 ] [email protected] [1]organization=Novelis Research and Innovation Lab, addressline=207 Rue de Bercy, city=Paris, postcode=75012, state=, country=France 1]Mahaman Sanoussi Yahaya Alassan[] [email protected] 1]Walid Dahhane[orcid=0000-0001-5387-3380] [email protected] 1]El Hassane Ettifouri[ orcid=0000-0001-5299-9053 ] [email protected] [1] [1]Corresponding author [1]Corresponding author [1] The field of programming has a diversity of paradigms that are used according to the working framework. While current neural code generation methods are able to learn and generate code directly from text, we believe that this approach is not optimal for certain code tasks, particularly the generation of classes in an object-oriented project. Specifically, we use natural language processing techniques to extract structured information from requirements descriptions, in order to automate the generation of CRUD (Create, Read, Update, Delete) class code. To facilitate this process, we introduce a pipeline for extracting entity and relation information, as well as a representation called an "Entity Tree" to model this information. We also create a dataset to evaluate the effectiveness of our approach. * We have presented Entity Identifier, a pipeline method for transforming requirements specifications in natural language into a model diagram that incorporates Stanford Scene Graph Parsing. * We create a dataset and define evaluation metrics to assess the effectiveness of our approach and facilitate future research in this area. * Our method achieves high scores on simple requirement statements, but struggles in handling complex Wikipedia paragraphs. Entity Relation Extraction Entity Tree Natural Language Processing _set:nn stm / mktitle nologo [ [ August 12, 2023 =================== § INTRODUCTION In Natural Language Processing (NLP), many tasks put effort on converting input text into a more readily understandable form for humans. Examples of such tasks include translation <cit.>, summarization <cit.>, question answering <cit.>, text rephrasing <cit.>, and named entity recognition <cit.>. However, only a relatively small number of tasks, such as sequence and token classification, may be primarily useful for machines. We believe that the development of automatic methods to model natural language for machine use has the potential to enable a wide range of future applications. These models may allow for the creation of systems that can understand and interpret human language and leverage this knowledge for downstream use, potentially leading to new and innovative applications in various industries and sectors. For instance, in the field of text-to-code generation, current AI models such as CodeBERT <cit.>, CodeT5 <cit.>, JaCoText <cit.>, and Codex <cit.> have shown promising results, but still struggle with lack of optimization, inconsistency, and syntax issues. The latest is a major limitation, as syntactically correct code is necessary for a machine to execute it. Additionally, practical considerations such as code structure, clearness, and readability are important aspects of software development that these models have not yet been able to fully address. 
We believe that incorporating a structuring phase for natural language inputs could significantly advance the capabilities of tasks like text-to-code generation. In this work, we work on automating the job of requirement analysis by extracting key information that can be directly used to build UML (United Modeling Language) Class diagram and generate class code for CRUD (Create, Read, Update, Delete) operations from natural language requirement specifications. Our primary goal is exploring the benefits of structuring natural language for code generation. Specifically, we aim to extract entities and relationship triplets, including their characteristics, in a manner similar to the joint entity and relation extraction task <cit.>. In addition, we aim to extract common unnamed entities, data types, class attributes, and database relations between entities, in order to build a rich representation, we refer to as an Entity Tree. This representation can be useful for downstream tasks. Figure 1 illustrates an example of the Entity Tree representation. In development workflows, diagrams such as UML and MCD (Merise Conceptual Diagram) are extensively used for engineering and visualization purposes. These diagrams enable collaborators to analyze, refine, and plan different aspects of a project more effectively. Our approach not only grants direct access to these diagram types but also simplifies the generation of class code and database structure using template heuristics. This liberates developers from repetitive tasks, allowing them to concentrate on more challenging ones. The key advantage of this approach in code generation task is the creation of more dependable code with fewer syntax errors. In this work, we present a parsing framework that constructs an Entity Tree (ET) from natural language text. To the best of our knowledge, there are currently no datasets available for common entity and relation extraction. Therefore, we create a dataset and define evaluation metrics to assess the effectiveness of our approach and facilitate future research in this area. Our method achieves notable results in the overall extraction of triplets. The rest of the paper is organized as follows: In section <ref>, we present the related work. In section <ref>, we provide a comprehensive formulation of the problem. In Section <ref>, we describe in details our proposed method. In Section <ref>, we present the experimental protocol and the proposed dataset. In Section <ref>, we provide obtained results and discuss them. Finally, We conclude in Section <ref> and present and some future directions. § RELATED WORK There is a significant amount of prior research on system requirements parsing, but most existing models do not anticipate the use of an entirely natural language input. <cit.> proposed a framework for requirement parsing that utilizes a set of word processing tools and typing rules. This approach involves the user in typing the input, with the system providing error messages and highlighting patterns to enforce adherence to what the paper refers to as a controlled natural language for requirements' specifications. This approach relies on the user to follow specific typing rules and may not be suitable for users who want a fully natural user experience while typing the input. In <cit.>, a greater emphasis is placed on the use of natural language processing and ontology techniques with OpenNLP to build the parsing architecture. 
This approach allows for more flexibility for the user, but the parsers rely on transforming complex sentences into simple sequences rather than consistently representing the information. Additionally, the approach does not utilize coreference resolution, meaning that splitting sentences can result in loss of linking information. Furthermore, the use of rules to ignore certain concept words later in the pipeline may lead to error propagation and confusion. Overall, this approach may require more user intervention to ensure accurate parsing of the input. Similarly, <cit.> uses a suite of natural language processing tools to extract UML diagrams directly. Their approach involves the use of heuristic rules for identifying classes, relationships, and objects. However, our research posits that it is more effective to extract entities and relationships jointly, as the semantics of a sentence are dependent on the combination of both. Furthermore, while the aforementioned tool is presented as software specifically designed for the creation of UML diagrams, our research aims to output the extracted elements in a more widely-used representation. The approach presented in <cit.> addresses the issue of information loss through the use of a coreference resolver in the pipeline. The pipeline utilizes both dependency parsing and syntactic parsing during the parsing phase, and employs a set of rules to construct a Domain Model in a subsequent phase. However, the approach has some limitations, including the fact that coreference resolution is performed after segmentation, and the reliance on noun phrases and verbs instead of a compact representation for building the domain model. In our research, we focus on a fundamental task of common entity and relation extraction. Unlike traditional entity and relation extraction, our task aims to extract all entities and relationships in a text, not just named entities. We use the extracted sets of entities and relationships to build a model of the system. We believe that this approach can highlight the potential utility of jointly extracting unnamed entities and relations as a distinct task within the field of knowledge extraction. After optimizing our pipeline for this task, we then build our domain model equivalent referred to as the Entity Tree. § PROBLEM FORMULATION Our goal is to design a mechanism that translates a natural language description of system requirements into a structured representation called an Entity Tree (as shown in Figure <ref>). This representation will enable us to easily generate UML, MCD, CRUD diagrams, and database structures. Let S be a sequence and E={e_1, e_2, ..., e_|S|} the set of its entities' components. An Entity Component e_1 is defined by 4 elements: * Attributes - set of descriptive labels representing the entity. * Extends - the entity it extends, if it has a parent entity. * Name - the entity name. * Relationships - set of relationships with other entities. We define a relationship as the link between two entities. Here we refer to the entity containing the list of relationships as the subject e_s, the target entity as the object e_o and the predicate as the relation r. We explicitly define a relationship as a dictionary of the following eight items: * Extract number - the exact number of the subject entities e_s, if mentioned. * Primitivity - it indicates if the e_o is primitive. An entity is primitive if it is a literal such as a number, a date, a boolean, etc. In other words, when it is not a Class instance. 
Values which are considered primitive can be redefined by the user. * Min - cardinality on the e_s side. * Max - cardinality on the e_o side. * Object attributes - set of labels describing e_o. * Object name - the e_o name. * Relationship type - the predicate r. * Type - the type notion refers to the parent class of e_o if it has one, otherwise it assigns a corresponding primitive type. The Entity Tree models the entities and relationships of a sentence in a very rich way as it describes every item and its relation with other items. Its data structure serves as a representation of entities and their relationships within a sentence or piece of text. Its utilization allows for a more intricate and comprehensive depiction of sentence structure, surpassing conventional approaches such as parsing trees. In the example "The cat sat on the mat.", the words "cat" and "mat" are entities, and the relationship between them is "sat on". In our case, the Entity Tree can also include additional information about each entity, such as its type or attributes. This presentation offers valuable applications in tasks such as information extraction, text summarization, and question answering. § PROPOSED METHOD Our approach uses a combination of multiple natural language processing (NLP) tools to extract entities, relationships, and their characteristics. An overview of our pipeline is shown in Figure <ref>. Firstly, the input text undergoes coreference resolution to replace each pronoun with the word to which it refers. Secondly, the text is segmented into sentences. Thirdly, for each sentence, we use a Scene Graph Parser (SGP)[A Stanford's CoreNLP tool] <cit.> to obtain the corresponding scene graph. An aggregator is used to fuse different scene graphs. Finally, we apply additional methods to double-check features extraction, generate cardinality, and build up the Entity Tree representation of the input. We detail each component of our system in the following subsections. §.§ Text Segmentation To use the CoreNLP scene graph parser for building a scene graph, the input should adhere to the scene graph parser’s input format. According to the official documentation, the implementation only works on single sentences. For this reason, the input goes through sequential methods, a sentence segmentation function and a coreference resolver. Firstly, the text goes through Spacy’s integrated sentence sequencer. The latter ensures that the text will not only be split based on dots. Instead, the split is done using regular expressions while taking into consideration links, phone numbers, decimal numbers, etc. Secondly, pronouns are replaced by the nominal group to which they refer to in sentences starting with a pronoun. This prevents sending to the scene graph parser sentences with no nominal groups and spread confusing downstream. The text segmentation block outputs a list of clear sentences, that can be semantically understood without the need for the rest of the text. In the next step, each sentence will be sent to the scene graph parser. §.§ Scene Graph Parsing The scene graph parser extracts knowledge from a sentence into a scene graph. As its name suggests, the scene graph can be thought of as a representation of the scene described in the input sentence. Various implementations are available. In our work, we stick with the rule-based scene graph parser since its implementation showed the best stability. Practically, the CoreNLP implementation returns a list of relationships. 
A relationship consists of a subject, the predicate and the object, and a list of attributes built similarly with a list of sets, each containing a subject, the predicate and an attribute. Scene graph parser starts by building an enhanced version of a dependency tree of the sentence (Klein and Manning, 2003), only then it resolves quantificational modifiers, coreferences and plural. §.§.§ Quantificational modifiers Dependency tree parsers treat noun expression quantificational modifiers such as “both of” or “a bunch of” like noun phrases leading to the latent dependency trees containing root pronouns instead of the subject. Scene graph parser tackles this issue by checking if a noun expression is a quantification modifier against a precompiled list. This addition guarantees an intermediate dependency tree were the head component is an entity. §.§.§ Coreference The parser performs a coreference resolution on the pronouns of the dependency tree, disambiguating pronouns and other words that might be ambiguous without context. It uses an adaptation of the <cit.> algorithm for dependency trees, to save underlying semantic link between sentences, clear up any confusions and improve the accuracy of downstream natural language processing tasks. §.§.§ Plural resolution In the context of plural resolution, the scene graph parser is tasked with addressing the challenge of "collective-distributive" ambiguity, which refers to the difficulty of determining whether a group noun refers to a single entity that is comprised of multiple individual parts (collective) or multiple distinct entities (distributive). On the one hand, in the example sentence "Two guys in police uniforms", the parser would need to determine that the noun "guys" refers to two distinct individuals, each of whom is wearing a police uniform. On the other hand, in the example sentence "Two passengers in the backseat of the car", the parser would need to recognize that the noun "passengers" refers to two distinct individuals, but that there is only one backseat entity. The scene graph parser only considers that the distributive plural is far more common. This is convenient as it aligns with the common nature of specifications' descriptions. At this stage the parser outputs what is referred to as a Semantic Graph. From this point, the semantic graph will undergo a set of methods to extract objects and attributes. §.§.§ Rule-based Parser The goal of these procedure is to extract object entities, relations, and attributes from the input sentence using a semantic graph and Semgrex2 expressions. As of the time this research was conducted, Semgrex2 expressions were able to capture a wide range of pattern constructions. It supports the following patterns: * Adjectival modifiers * Subject-predicate-object constructions * Subject-predicate constructions * Copular constructions * Prepositional phrases * Possessive constructions * Passive constructions * Clausal modifiers of nouns In addition to the rule-based approach, the CoreNLP library offers a classifier-based approach for extracting relations between entities. However, we have found that the rule-based approach is more reliable, as the classifier-based approach may overlook complex patterns and underlying semantics. The output of this process is a scene graph, which consists of a list of relationships and a list of attributes. We transform the scene graph into a list of entities, each of which has a sub-list of attributes and a sub-list of relationships. 
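The regrouping step described in the last sentence amounts to a few lines of bookkeeping. A minimal sketch is given below (the key and function names are ours, not those of the actual implementation): relationships arrive as (subject, predicate, object) triples, attributes as (subject, predicate, attribute) triples, and both are folded into one record per entity.

from collections import defaultdict

def group_scene_graph(relationships, attributes):
    """Turn scene-graph output (two lists of triples) into per-entity records,
    each carrying a sub-list of attributes and a sub-list of relationships."""
    entities = defaultdict(lambda: {"attributes": [], "relationships": []})
    for subj, _pred, attr in attributes:
        entities[subj]["attributes"].append(attr)
    for subj, pred, obj in relationships:
        entities[subj]["relationships"].append(
            {"relationship_type": pred, "object_name": obj})
        entities[obj]  # ensure the object also appears as an entity
    return dict(entities)

# Example on the sentence "The cat sat on the mat."
print(group_scene_graph([("cat", "sit on", "mat")], []))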
A relationship is a link between two entities, consisting of an object entity and a predicate verb describing the connection between the subject entity and the object entity. §.§ Cardinality Extraction The scene graph produced at this stage does not include information about the cardinality of entities, as the process converts the corresponding words into their lowercase stem. To address this issue, our cardinality extractor iterates through the list of relationships and the corresponding sentences to determine cardinality. Plural noun expressions are assigned a cardinality of "*", while singular noun expressions are assigned a cardinality of "1". Additionally, for database structure purposes, if an object is described as "optional" in the sentence, the corresponding entity is assigned a cardinality of "0". This will help easily deduce database cardinality types' downstream if needed. For example, in "A level has multiple bosses" we can deduce a many-to-one type of cardinality. Furthermore, this module also extracts and preserves any indicated numerical quantifiers as the "Exact Number" entry of the Entity Tree. §.§ Inheritance Resolver Attributes in a scene graph can be either noun expressions or adjectives. However, in our case, for a noun attribute to be considered as an adjective, it should not be mentioned else where as an entity, otherwise it will primarily be considered as a parent class to the described entity instead of an attribute. We implement an engine that uses a pre-defined set of rules to iterate through scene graph attributes. It looks up for relationships containing a “to be” predicate and applies rules to determine the entity's relative position in the inheritance hierarchy. In this way, we to build up the inheritance. §.§ Type description Similar to a variable type in programming, we assign to entities the kind of data that they can hold. If there are types explicitly described through specific attributes appearing in a list pre-defined by the user (such as Date, Long, Short, and Int), the corresponding entity is marked as primitive and is assigned this type. Object Entities which appear as subjects elsewhere in the text are not assigned a type. Thus, their value is defaulted to "String". The Entity Tree produced reflects domain model-like information, including class attributes and relationships with additional entries. Heuristics can be applied to this Entity Tree to generate consistent class code or UML class diagrams from directly system requirements. With simple rules script, we easily generate the UML class diagram illustrated in Figure <ref>. § EXPERIMENTAL PROTOCOL In this section, we will present our dataset, as well as the metrics used to evaluate our method. §.§ Dataset §.§.§ Entity relation triplet dataset We aim to evaluate our entity relation extraction work on a dataset specifically tailored for this purpose. However, we have found that existing datasets such as WebNLG <cit.> and New York Times <cit.> primarily concentrate on the extraction of named entities, which is not appropriate for our use case. For this reason, we constructed a dataset consisting of randomly selected Wikipedia paragraphs and labeled each paragraph with its corresponding (e_s, relationship, e_o) triple for evaluation purposes. Our dataset, contains : * Text theme * The text content * The corresponding (e_s, r, e_o) triplets * Scene Graph Extracted Dependencies The dataset contains a total of 198 examples. The dataset is only intended for evaluation. 
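As a concrete illustration of the four fields listed above, a single evaluation record could be stored as follows; the example sentence and the field names are ours and purely illustrative, not an actual entry of the dataset.

example_record = {
    "theme": "Astronomy",                       # text theme
    "text": "A comet is an icy body that releases gas.",
    "triplets": [                               # reference (e_s, r, e_o) triplets
        ("comet", "is", "icy body"),
        ("icy body", "release", "gas"),
    ],
    "scene_graph_dependencies": [               # raw parser output kept for inspection
        ("comet", "is", "icy body"),
        ("icy body", "release", "gas"),
    ],
}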
We therefore consider that this number of examples is sufficient to evaluate an algorithm that does not contain a training phase. §.§.§ Patterns set A close examination of Wikipedia paragraphs reveals that they can be quite complex, which is at odds with the more precise, concise, and syntactically clear nature of system requirements descriptions. To evaluate the performance of our parser in a more representative way, we have developed an additional checklist of six patterns. §.§ Metrics For evaluation, we define a set of metrics that quantify the extraction accuracy. We believe that a successful information extraction of a sentence resides in a consistent extraction of the triplet (e_s, r, e_o), and we therefore focus on evaluating this aspect. We propose metrics based on Intersection over Union (IoU) as well as on the BLEU <cit.> score. The use of BLEU is motivated by the idea that our task can be viewed as a type of translation, a setting for which BLEU is well suited. •  Pair IoU - First, we introduce an IoU metric that counts the exactly detected pairs, IoU_Pair(A, B) = |A ∩ B| / |A ∪ B|, where A is the set of reference entity pairs and B is the set of candidate entity pairs. For the sake of readability, e_s and e_o refer to the concatenation of the name and the attribute of the subject entity and the object entity, respectively. •  Pairs mean BLEU - Detecting exact pairs in complex text can be challenging, particularly when the entities are compound; a softer, BLEU-based comparison is better suited in this context. We therefore introduce a BLEU mean of pairs metric, BLEU_Mean(D_x, D_y) = ∑_i ∈ D_x ∑_j ∈ D_y [B(e_s^i, e_s^j) ≥ k] × [B(e_o^i, e_o^j) ≥ k] / |D_x|, with B(entity^i, entity^j) = [BLEU(entity^i, entity^j) + BLEU(entity^j, entity^i)] / 2, where D_x is the set of reference sentences, D_y is the set of candidate sentences, and k is a threshold parameter: comparisons with B values below k are not counted. •  Pairs exclusive BLEU - We introduce a second BLEU-based metric that only takes into account the maximum BLEU value of the two orderings of the reference and generated pair, BLEU_Exclusive(D_x, D_y) = ∑_i ∈ D_x ∑_j ∈ D_y [B(e_s^i, e_s^j) ≥ k] × [B(e_o^i, e_o^j) ≥ k] / |D_x|, with B(entity^i, entity^j) = max[BLEU(entity^i, entity^j), BLEU(entity^j, entity^i)], where entity again refers to the concatenation of the name and the attribute of the entity. •  Triplet BLEU - Finally, we introduce a similar metric that evaluates both entities and their relation through the BLEU value of their concatenation, BLEU_Triplet(D_x, D_y) = ∑_i ∈ D_x ∑_j ∈ D_y [B(W(triplet^i), W(triplet^j)) ≥ k] / |D_x|, with B(x, y) = max[BLEU(x, y), BLEU(y, x)], where W(.) denotes the concatenation of e_s, r and e_o of a triplet through white-space characters, and triplet refers to the concatenation of the name and the attribute of each entity together with the relationship. § RESULTS AND DISCUSSION Our entity identification pipeline achieves high scores on the basic patterns set, with scores of 0.905, 0.929, and 0.929 for the metrics IoU_Pair, [email protected] and [email protected], respectively. It also performs exceptionally well in identifying triplets, earning a perfect score. These results demonstrate that the pipeline is capable of handling simple requirement statements, which is necessary for achieving our final goal.
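The pair-level scores quoted above follow directly from the definitions of this section. A minimal sketch is given below (the helper names are ours); any sentence-level BLEU implementation can be plugged in, here the one shipped with NLTK is assumed to be available.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu(a, b):
    """Sentence-level BLEU of candidate b against reference a (whitespace tokens);
    smoothing is needed for the short strings typical of entity names."""
    return sentence_bleu([a.split()], b.split(),
                         smoothing_function=SmoothingFunction().method1)

def iou_pairs(reference_pairs, candidate_pairs):
    """IoU_Pair over exactly matching (e_s, e_o) pairs."""
    A, B = set(reference_pairs), set(candidate_pairs)
    return len(A & B) / len(A | B) if A | B else 0.0

def B_mean(x, y):
    return 0.5 * (bleu(x, y) + bleu(y, x))

def B_max(x, y):
    return max(bleu(x, y), bleu(y, x))

def bleu_pairs(ref_pairs, cand_pairs, k=0.75, B=B_mean):
    """BLEU_Mean (B=B_mean) or BLEU_Exclusive (B=B_max): literal double sum of the
    definition, normalized by the number of reference pairs."""
    hits = sum(1 for rs, ro in ref_pairs for cs, co in cand_pairs
               if B(rs, cs) >= k and B(ro, co) >= k)
    return hits / len(ref_pairs) if ref_pairs else 0.0

def bleu_triplets(ref_triplets, cand_triplets, k=0.75):
    """BLEU_Triplet on the whitespace concatenation W(.) of e_s, r and e_o."""
    W = lambda t: " ".join(t)
    hits = sum(1 for r in ref_triplets for c in cand_triplets
               if B_max(W(r), W(c)) >= k)
    return hits / len(ref_triplets) if ref_triplets else 0.0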
However, when applied to complex Wikipedia paragraphs, the model encounters difficulties and struggles to maintain the same level of performance. While the pipeline still achieves decent results in terms of the overall extraction metric BLEU_Triplet on the WikiTriplet dataset, it performs poorly when it comes to detecting entity pairs. Specifically, it obtains scores of 0.004, 0.036, and 0.036 on the IoU_Pair, [email protected], and [email protected] metrics, respectively, indicating a significant drop in performance for entity-pair identification in complex text. Our objective is to build a parser that performs well on simple sentences while also being adaptable to complex entries. For this reason, we have adopted an iterative optimization approach, which focuses on continuously improving the parser based on its performance on the basic patterns list. The idea behind this strategy is to establish a solid basis by first optimizing the pipeline on simpler inputs and then gradually enhancing it to handle more complex text. § CONCLUSIONS AND PERSPECTIVES We have presented Entity Identifier, a pipeline method, built around Stanford Scene Graph Parsing, for transforming requirements specifications in natural language into a model diagram. The Scene Graph Parser is a natural text parser originally used for extracting scene information from image queries. We propose a novel task, called common entities and relations extraction, that aims to extract all related entities in a text together with their relationships; this task will help better model natural text for machine use. While our entity identification pipeline demonstrates impressive capabilities in handling simple requirement statements, it encounters difficulties when confronted with complex Wikipedia paragraphs. Thus, an improvement of the current work would be to extend the extraction to all entity types, including common words. In addition, it would be interesting to expand our evaluation dataset to include more examples from different sources.
http://arxiv.org/abs/2307.04847v1
20230710183759
Emerging jet probes of strongly interacting dark sectors
[ "Juliana Carrasco", "Jose Zurita" ]
hep-ph
[ "hep-ph" ]
=1 ⟨ IP_2D⟩α_3DPU_dzm_Xm_π_Dc τ_π_D IFIC 23-24 a]Juliana Carrasco,a]Jose Zurita[a]Instituto de Física Corpuscular, CSIC-Universitat de València, Catedrático José Beltrán 2, E-46980, Paterna, Spain [email protected]@ific.uv.es A strongly interacting dark sector can give rise to a class of signatures dubbed dark showers, where in analogy to the strong sector in the Standard Model, the dark sector undergoes its own showering and hadronization, before decaying into Standard Model final states. When the typical decay lengths of the dark sector mesons are larger than a few centimeters (and no larger than a few meters) they give rise to the striking signature of emerging jets, characterized by a large multiplicity of displaced vertices. In this article we consider the general reinterpretation of the CMS search for emerging jets plus prompt jets into arbitrary new physics scenarios giving rise to emerging jets. More concretely, we consider the cases where the SM Higgs mediates between the dark sector and the SM, for several benchmark decay scenarios. Our procedure is validated employing the same model than the CMS emerging jet search. We find that emerging jets can be the leading probe in regions of parameter space, in particular when considering the so-called gluon portal and dark photon portal decay benchmarks. With the current 16.1 fb^-1 of luminosity this search can exclude down to O (20) % exotic branching ratio of the SM Higgs, but a naive extrapolation to the 139 fb^-1 luminosity employed in the current model-independent, indirect bound of 16 % would probe exotic branching ratios into dark quarks down to below 10 %. Further extrapolating these results to the HL-LHC, we find that one can pin down exotic branching ratio values of 1%, which is below the HL-LHC expectations of 2.5-4 %. We make our recasting code publicly available, as part of the LLP Recasting Repository. Emerging jet probes of strongly interacting dark sectors [ August 12, 2023 ========================================================= § INTRODUCTION Extensions of the Standard Model with a strongly coupled, non-Abelian dark sector have received considerable attention in the recent years. The phenomenological possibilities are varied, due the large number of parameters, such as the gauge group dimension (number of dark colors), the matter field content (number of dark flavors), the mass hierarchies in the dark sector, and the coupling strengths between the dark sector and the Standard model, as well as the internal dark sector couplings. Considering a collider operating at a center of mass energy √(s), the subclass of models which confine at a scale Λ_D, and where the dark sector masses m_D ≲Λ_D << √(s) give rise to a class of signatures generically dubbed dark showers, in analogy to the familiar parton shower in the strong sector of the Standard Model, see <cit.> for a comprehensive review of the current phenomenological, experimental and theoretical status. Dark showers give rise to uncommon, exotic collider objects, such as trackless jets <cit.>, emerging jets <cit.>, semi-visible jets <cit.> and SUEPs <cit.>. An experimental program aiming at dark shower signatures at the LHC is already underway <cit.>. It is customary <cit.> to dissect the collider phenomenology of dark showers into three pieces: i) production, ii) showering (which actually includes hadronization) and iii) decay. 
The production part consists of the parton-level production of dark quarks (customarily through 2 → 2 processes), the showering phase includes both the emission of dark gluons and dark quarks and the formation of bound states (dark hadrons), akin to the known behaviour of the strong sector of the Standard model. Indeed, the showering process yields a distinctive signal, as, unlike in normal searches for new phenomena, one is not targeting one or two new particles, but in principle many of them. Hence, the large multiplicity inherent to the dark parton showering is what makes them stand out from other Beyond Standard Model (BSM) searches. The decay modes of these dark hadrons (including SM particles such as leptons, quarks, gluons, or potentially not decaying at all contributing to the overall dark matter abundance) combined with the lifetime spectra of these dark hadrons span a large number of phenomenologically distinct scenarios. Decay benchmarks to guide the experimental exploration have been recently put forward in <cit.>. If the dark sector contains some particles that are long-lived (another research direction that has received considerable attention in the recent past, see e.g. <cit.> for reviews), with lifetimes in the few mm to few meters, which decay into SM particles, they give rise to emerging jets (EJ). In this work we present a simple and flexible reinterpretation of the CMS search for emerging jets <cit.> seizing all publicly available information. We validate our procedure by reproducing the CMS results for their benchmark model, proposed in <cit.>. The software developed for the reinterpretation procedure has been uploaded to the LLP Recasting Repository <cit.>, which we expect to be useful for those interested in a straightforward reinterpretation of this study. We later apply our procedure to the concrete case of exotic Higgs decays, namely we obtain bounds on the branching ratio of the SM-like 125 GeV Higgs boson into a pair of dark quarks. We also show that the bounds arising from the reinterpretation of the emerging jet search, albeit not designed to target this scenario, can nonetheless provide leading constraints in large portions of the parameter space. This article is organized as follows. In section <ref> we briefly review the phenomenology of emerging jets and the t-channel models from <cit.> which give rise to them. In section <ref>, we show our validation of the CMS emerging jet search. In section <ref> we apply our recasting procedure to a series of BSM decay benchmarks devised in  <cit.>. We reserve section <ref> for conclusions. Technical details about the validation of the CMS Emerging Jet is described in Appendix <ref>. § EMERGING JET PHENOMENOLOGY When considering a strongly interacting dark sector, the known behaviour of QCD provides a guidance for the relevant phenomenological features of the model. A QCD-like sector with gauge group SU(N_C_D) and N_f_D degenerate Dirac fermions (dark quarks, q_D) would then exhibit asymptotic freedom, and confine at a scale Λ_D, where m_q_D≲Λ_D. When these particles are produced at a collider with a center-of-mass energy √(s)≲ 30 Λ_D[If Λ_D would be closer to the available energy √(s), then the dark quarks have limited phase space to shower. The formed dark mesons can then be probed by searches for resonant bound states see e.g <cit.>.], the dark quarks hadronize into dark hadrons (π_D, ρ_D, ω_D, ...) which are clustered into collimated dark jets. 
The resulting signatures will then depend on the dark hadron decay, more specifically on the lifetime spectrum c τ_D, but the main underlying theme is that the shower process can lead to a large multiplicity of objects, while in traditional searches only one or two new particles are targeted. Following <cit.>, the dark shower can be decomposed into the three parts described above: production, hadronization (in the dark sector) and dark hadron decays. For the production at a collider, is necessary to connect the Standard Model to the dark sector, which is done through a portal. For emerging jets, the focus of this work, the model proposed in <cit.> employs a a bi-fundamental scalar X (namely, charged under both QCD and the dark sector) which interacts with a SM down-type quark and a dark quark via the following Lagrangian L⊃ - κ_ijq̅_R i q_D_j X . While in principle κ is a 3 × N_C_D matrix, we consider here the case where there is one single universal coupling to the right-handed down type quark, to avoid bounds from flavour physics (FCNCs, neutral meson-mixing, rare decays). In this model, X is pair produced with a sizable rate through gauge interactions, and the subsequent X → d_R q_D decay (which happens with 100 % branching ratio) leads us to expect two light jets and two emerging jets. Since the mediator particle X appears in t-channel exchanges, the model of equation <ref> is colloquially referred to as a t-channel model. In this paper we will also study the production via an s-channel SM Higgs boson h, which falls in the category of exotic Higgs decays <cit.>. Measurements of the Standard Model Higgs properties set Br(h → exotic < 0.16) at 95 % C.L. from both ATLAS <cit.> and CMS <cit.> with a total integrated luminosity of 139 and 138 fb^-1 respectively. These bounds are not completely model independent, as they assume that the Higgs boson couples to the electroweak gauge bosons with a strength equal or less to the SM one, which can be violated in certain BSM scenarios. The combination of ATLAS and CMS with 3000 fb^-1 is expected to set a limit of 2.5%, under the assumption that the current systematic uncertainties would be halved <cit.>, or 4% with the current systematic uncertainties <cit.>. This leaves ample room for the h → q_D q_D to occur with sizable rates. We note that the identification of a resonant, long-lived q_D q_D topology using Machine Learning techniques has been recently studied in reference <cit.> for the SM Higgs and in reference <cit.> for a TeV scale Z'. The showering and hadronization in the dark sector is conducted through the Hidden Valley module <cit.> within Pythia8 <cit.>. The non-perturbative nature of the dark QCD-like theories prevents from consistently connecting UV and IR parameters based on a perturbative approach, and hence it is customary to consider a dark sector consisting of spin-1 dark vector mesons ρ_D, and of spin-0 pseudoscalar dark pions, π_D. As the π_D arise from a breaking of a chiral dark symmetry, they are parametrically lighter than the other mesons in the theory, which decay into dark pions if kinematically allowed. Hence, the phenomenology is dictated by the dark pion properties, in particular their lifetime c τ_π_D. We distinguish three possible regimes depending on the dark pion lifetime. If the dark pions decay promptly (c τ_π_D≲ 1 mm), they end up giving multi-jet signals[If some dark hadrons would remain stable, one would have an invisible component within a jet, giving rise to semi-visible jets <cit.>. ]. 
If, on the contrary, the dark pions are stable on detector scales (c τ_π_D≳ 1 m), then they appear as missing energy and can be targeted by the suite of missing-energy signatures that are customarily searched for in the dark matter program at the LHC <cit.>. For c τ_π_D∈ [0.001 - 1 ] m, the dark pions decay inside the detector volume with different decay lengths, depending on their boost and on the fact that their actual decay position is sampled from an exponential distribution. The decay patterns of the dark pions can be quite varied. As mentioned before, in reference <cit.>, to avoid dealing with non-trivial bounds from flavour processes, a 100 % decay rate to right-handed down quarks was assumed. Yet, the possibilities for the decay are quite numerous, and in that light reference <cit.> proposed five decay benchmark models, dubbed decay portals, based on a minimal set of theoretical priors. These decay portals describe how the pseudoscalar and vector dark mesons decay into Standard Model particles [In reference <cit.> the pseudoscalar and vector mesons are denoted as η̃ and ω̃. For a unified notation in this paper, we replace them by π_D and ρ_D, respectively. ]. If π_D decays into gluons (photons) through a dimension-5 operator, we have the gluon (photon) portal. If π_D instead couples to the Standard Model Higgs via the H^† H operator, then we have the Higgs portal, where the decays to the SM quarks follow the Yukawa hierarchy of a SM-like Higgs with m_H = m_π_D. The other two decay portals have the π_D decaying either through its mixing with the photon (akin to the γ-ρ mixing in the Standard Model), or through the chiral anomaly into a pair of dark photons A^', inspired by the π_0 →γγ SM process. The corresponding Pythia configuration cards for each of these portals can be generated through the public Python script <cit.>. The number of free parameters in both our scenarios is still quite large, and here we follow some additional choices made in the literature. Regarding the t-channel EJ model, N_C_D=3 and N_f_D=7 are inspired by the study of <cit.>, and the dark sector mass parameters are chosen in the proportion Λ_D : m_q_D : m_ρ_D : m_π_D = 2:2:4:1. This choice ensures that the vector meson always decays into two dark pions, and was followed by the CMS collaboration in their emerging jets search <cit.> [The inclusion of a non-trivial flavour structure, where Λ_D > m_q_D is assumed, was studied in <cit.>.]. The free parameters for the analysis are then m_X, m_π_D and c τ_π_D. Regarding the s-channel SM Higgs production with its different decay portals, we assume N_C_D=3 and N_f_D=1, and Λ_D : m_q_D : m_ρ_D : m_π_D is now 2.5:0.4:2.5:1. Since the mediator mass is known, the only free parameters of the model are m_π_D, c τ_π_D and the exotic branching ratio Br(h → q_D q_D). We note that while in reference <cit.> a minimum proper lifetime as a function of the mass for each decay portal was estimated from theoretical considerations, we prefer to remove those prejudices and consider the three parameters as fully independent and uncorrelated. A few details about our simulations are in order. Within the Pythia Hidden Valley module, we set the parameter to 1.1 Λ and the flag to 0.318, following the considerations discussed in Appendix A of <cit.>. In section <ref> we employ Pythia version 8.212, used in the CMS study, as our aim is to reproduce the published limits. 
For section <ref> we employ instead Pythia 8.307, because (as explained in <cit.>) this version has corrected a previous flaw in the code that tended to overproduce hidden hadrons at very low p_T. § VALIDATION OF THE CMS EMERGING JET SEARCH In this section we discuss in detail the validation of the CMS search for emerging jets using a total integrated luminosity of 16.1 fb^-1 <cit.>. The CMS collaboration targets the dark QCD model from Equation <ref>, via p p → X X followed by X → q  q_D. Hence, naively one expects to find two emerging jets and two SM jets. Events are selected by passing the H_T > 900 GeV trigger, where H_T is the scalar sum of the transverse momenta of all hadronic jets in the event, clustered with R=0.4 using the anti-k_T algorithm <cit.> applied to all tracks with p_T > 1 GeV. These events are required to have at least four jets within |η| < 2.0, and they undergo a further selection using kinematical variables to tag these jets as “emerging” and to define signal regions (called sets in the CMS paper). The explicit requirements are collected in Appendix <ref>, together with the 95 % C.L. limit on the number of signal events in each selection set, S_95^i. The total number of signal events in each set can be computed as N_S^i (m_X, m_π_D, c τ_π_D) = σ (p p → X X) × ( BR(X → q q_D))^2 × A_i (m_X, m_π_D, c τ_π_D) × L where L is the total integrated luminosity, A_i is the acceptance for the i-th selection set [In this article, we will use the terms “efficiency” and “acceptance” interchangeably to denote the A_i function.] and the production rate σ ( p p → X X → q q_D q q_D) has been decomposed into the pair production cross section for X pairs (which proceeds through gauge couplings and hence is independent of m_π_D and c τ_π_D) times a branching ratio of X → q q_D, which we set to unity throughout this work [We have explicitly verified that the narrow width approximation is fulfilled in our model points, which allows us to factorize the total rate into a cross section and a branching ratio.]. To benchmark the search, CMS considered the following parameters: * m_X [GeV]={400,600,800,1000,1250,1500,2000} . * m_π_D [GeV]={1,2,5,10} . * c τ_π_D [mm]={1,2,5,25,45,60,100,150,225,300,500,1000}. For the m_π_D=5 GeV case, CMS has provided the acceptances A_i in the m_X - c τ_π_D plane, indicating which of the seven selection sets is the most sensitive one. In other words, for each m_X - c τ_π_D point scanned, with m_π_D=5 GeV, only one of the possible seven A_i functions is given. To validate the search, we proceed in three steps. First, we check that using the provided A_i efficiencies we can reproduce the published 95 % C.L. exclusion limits. In a second step, we check the degree of accuracy that we obtain for the two published kinematic distributions of the emerging jet tagging variables, and finally we show that we can reproduce with reasonable accuracy the said efficiencies and exclusion limits. The last step is crucial for our analysis, since this is what allows for reinterpretation, i.e., deriving limits from an experimental search on a model that has not been targeted by the experimental collaboration. In what follows, we define “exclusion” by requiring that the ratio of our predicted number of events from equation <ref> over the excluded one, R_95 = max_i{N_S^i/S_95^i} , is equal to unity, which is a common practice when performing reinterpretations <cit.>. §.§ Exclusion using published efficiencies We start by comparing the published exclusion limit with those that can be derived using the A_i map from <cit.>. 
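In practice, this exclusion test amounts to a few lines of code once the acceptances A_i and the limits S_95^i have been extracted from the public material; the sketch below evaluates N_S^i and R_95 for a single model point (all numerical values shown are placeholders for illustration only):

```python
import numpy as np

def r95(sigma_pb, br_x_to_qqd, acceptances, s95_limits, lumi_ifb=16.1):
    """R_95 = max_i N_S^i / S_95^i for one (mX, mpi_D, ctau_pi_D) point.

    sigma_pb     : p p -> X X cross section in pb
    acceptances  : A_i for the seven selection sets (same ordering as s95_limits)
    s95_limits   : 95% C.L. limits on the number of signal events per set
    """
    sigma_fb = sigma_pb * 1.0e3                                   # pb -> fb
    n_sig = sigma_fb * br_x_to_qqd**2 * np.asarray(acceptances) * lumi_ifb
    return np.max(n_sig / np.asarray(s95_limits))

# Placeholder inputs: a model point sits on the exclusion contour when R_95 = 1.
print(r95(sigma_pb=0.05, br_x_to_qqd=1.0,
          acceptances=[1e-3, 5e-3, 2e-3, 8e-4, 3e-3, 1e-3, 6e-4],
          s95_limits=[20., 15., 30., 10., 25., 18., 12.]))
```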
We present our results in figure <ref>, where the published CMS exclusion is shown as solid black. For our results, we need to provide a production cross section for the p p → X X process. On one hand, we use the cross section reported during the run of Pythia 8, which is a leading-order (LO) result in the strong QCD coupling α_S, shown in green. Second, we employ the cross sections used by CMS from <cit.> corresponding to down-type squark pair production, computed at the next-to-leading order in perturbation theory, and including a next-to-leading logarithmic correction from soft gluon resummation <cit.>, which is displayed in blue. We conclude that the provided A_i values are self-consistent, and we also verify that the limits were derived using NLO cross sections. §.§ Kinematic distributions As a second step of our validation, we will employ the published kinematic distributions. To tag the jets passing the selection as emerging, the following track-based variables <cit.> are considered: * : the median of the unsigned transverse impact parameter. * : distance between the z position of the primary vertex (PV), z_PV and the z position of the track at its closest approach to the PV. * D_N: the 3-D distance between the track and the primary vertex, weighted by the inverse resolution, D_N^2 = ( z_PV - z_trk/0.01  cm ) ^2 + ( r_trk-r_PV/σ_r)^2 . * : the ratio between the scalar p_T sum of all tracks with D_N < (certain value), normalized by the scalar p_T sum of all tracks, hence 0 ≤≤ 1. We illustrate the variable in figure <ref>. From the primary vertex located at (z,r) = (0,0) we consider that only a SM jet (which tags the vertex) and a long-lived dark pion emerge. The dark pion decays at (z_tr, r_tr) into three tracks 1,2,3, which for illustration purposes we consider as giving rise to only one jet. The track trajectories meet at the decay vertex: the tracks are drawn in black, and their prolongations in grey. We indicate with d_i the closest distance between the i-th track and the primary vertex, hence the r(z) component gives the transverse (longitudinal) impact parameter. In the figure, the median of the (d_i)_r corresponds to (d_2)_r, hence the jet originating from the dark pion has = |(d_2)_r|. Of the above variables, CMS presents results for and , before the selection cuts (which we describe below). Regarding the additional two variables, the variable D_N should be small for tracks originating from prompt particles, and large for displaced tracks; while is used for pile-up rejection. We note that both D_N and enter in these distributions through the definition of , which depends on D_N. Since CMS has not made explicit the D_N threshold employed in their figure 3, we have considered the three values employed to define the signal regions: 4, 10 and 20. Out of them, we have verified that the agreement is maximized for D_N=10, and hence D_N < 10 has been employed in the shown calculation shown below. As these last two variables are defined at the track level instead of at the jets level, we understand that kinematic distributions are not provided, which nonetheless would have provided an additional validation check for the proper reinterpretation of the search. Two important effects ought to be included for a realistic attempt at the reproduction of the CMS results: the tracking reconstruction efficiency ϵ_ trk and the smearing of the impact parameters. CMS reports the tracking efficiency dependence in terms of the p_T, η, and transverse vertex position (r) of the track <cit.>. 
From figure 8 of this article, one can see that above p_T = 1 GeV one can consider the tracking efficiency independent of p_T and, to a lesser extent, of η. Regarding r, the efficiency diminishes with the displacement distance, as can be seen from figure 12 of <cit.>, which is obtained from a t t̅ sample at √(s)=7 TeV. The figure shows the cumulative efficiency for each of the iterations (0-5) of the tracking algorithm. While this effect is less relevant for lifetimes of a few millimeters, it has an impact for the benchmark point with c τ_D = 25 mm and for larger lifetimes. Nonetheless, it is clear that the tracking efficiency is a very complicated function that can only be reliably obtained by having access to the full detector simulation (and detector information), and hence we will pursue four different parametrizations for ϵ_ trk(r): * Use the reported value of Iteration 5 from figure 12 of <cit.>. [It_5] * Use the reported value of Iteration 4 from figure 12 of <cit.>. [It_4] * Consider that tracks with at least one hit in the inner detector are reconstructed with 100 % efficiency, and with 0 % efficiency otherwise: ϵ_ trk(r) = Θ(102  mm - r). [R] * Consider ϵ_ trk(r) = 1, to illustrate the typical deviation obtained when no efficiency is considered. [A] Regarding the impact parameter smearing, we note that for jets originating from SM quarks, one can expect to have =0, if the majority of the tracks of the light jet are prompt. However, the value of zero is obviously fictitious once the transverse impact parameter has been smeared to account for reconstruction effects. While the smearing functions have a non-trivial dependence on the η and p_T of the corresponding track (the resolution σ_r, which we have taken from figures 14a and 15a of <cit.>), the typical resolution would be of about 50 μm, and hence the variable would peak around this value for SM light-quark jets. We show our results for the and variables in Figure <ref>, where we present the results for A as dashed red, and for It_5, It_4 and R in solid blue, orange and green. From the left panel we see that the naive A approach does not describe the distribution as well as any of the other criteria, while the three other curves fit the signal distribution with reasonable accuracy. Moreover, we also see that the proper inclusion of the transverse impact parameter smearing is necessary to explain the distribution of for the QCD jets from the signal, which is displayed as dashed purple. From the right panel we see that the distribution for the signal does not change much with the different criteria It_5, It_4, R and A. Since on this variable one only applies a < 0.25 cut (see Appendix <ref>), our attention is only on the proper reproduction of the first bins, and the mismatch at the tails is not relevant for us. Hence we delay the final judgement of which parametrization of the tracking efficiency to use to the next step of our validation: reproducing the published A_i efficiencies. The analysis defines eight different jet identification criteria on the four relevant variables to consider a jet as emerging. These criteria are supplemented by the requirement to have a minimum of two EJs, or one EJ plus large missing transverse energy (MET), and by additional cuts on H_T and on the p_T of the four hardest jets. The combination of the EJ criteria and the additional cuts defines seven selection sets. The explicit requirements are collected in Appendix <ref>, together with the 95 % C.L. 
limit to the number of signal events in each selection set, S_95^i. §.§ Reproducing efficiencies and exclusion limits If our interest would be to perform a reinterpretation of the emerging jets results in the context of the same model used by the collaboration (or one with a similar topology) then we could employ the reported acceptances A_i to derive the published limits, as we did in Section <ref>. We stress that our goal is to perform a flexible reinterpretation of this search, namely to employ it to derive limits on a model that the search has not considered. Hence, what we need is to fully validate our pipeline to compute the acceptance of the selection sets for the benchmark model used in the CMS study. We show in Figure <ref> the ratio of our computed acceptances over the published CMS results in the left panels (the color bar indicate the A_i value from CMS) and the obtained exclusion limits in the right panels, where we have employed the It_5 (upper row), It_4 (middle row) and R (lower row) parametrization of the tracking efficiencies. We can see that the best agreement is obtained with the R parametrization, while the other two tend to overestimate the efficiencies. We see that we have agreement up to 20-30 % for large masses in the iteration R, which degrades for lower masses and also extreme lifetime values, where the overall acceptances are nonetheless at the per-mille level or lower. We also note that the R parametrization also gives an acceptable exclusion limit, and hence we decide to adopt it for the rest of the article. We note that with more examples provided by CMS (or simply by providing the efficiencies in all signal regions) one could attempt a more complex parametrization of the efficiency. We consider hence this search as validated, and will proceed in the next section to derive bounds on the parameter space of Exotic Higgs decays. Our analysis code that allows us to derive the exclusions have been uploaded to the LLP Recasting Repository <cit.>, making it publicly available to facilitate the reinterpretation of the emerging jets search for arbitrary models. Further instructions and the relevant documentation to run the code can be found in the Repository. § REINTERPRETATION FOR HIGGS MEDIATED DARK SHOWERS When the SM Higgs h couples to the dark quarks the expected number of signal events reads N_S^i (, ) = σ^ proc_ SM× BR (h → Q_D Q_D) × A_i (m_H, , ) × L , where now the only free physical parameters are the dark pion mass and its lifetime, and the exotic Higgs branching ratio into dark quarks. To further define our framework, we need to select a decay portal for our dark mesons. We follow here the proposal of reference <cit.> and we consider the gluon, vector, Higgs and dark-photon portals.[It is obvious that the photon portal, where π_D →γγ, does not pass the emerging jet selection cuts. Hence this decay portal is not further considered here.] We have verified our implementation of these decay portal benchmarks by reproducing the dark meson multiplicities from reference <cit.>.[We are indebted to Simon Knapen for useful communication during the validation phase.] We start by analyzing the acceptance A_i as a function of the dark pion lifetime and masses, for the gluon decay portal, for all five considered production Higgs mechanisms, which we show in figure <ref>. 
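Before discussing the acceptances in detail, it is useful to spell out how they enter the excluded branching ratio: for each selection set the expected yield scales linearly with Br(h → q_D q_D), so the 95 % C.L. limit follows from a simple inversion after summing over the production modes. The sketch below illustrates this; the SM Higgs cross sections and all other numbers are indicative placeholders, and the per-mode acceptances are assumed to come from the event simulation.

```python
import numpy as np

def br95_exotic(sigma_fb_by_mode, acc_by_mode, s95_limits, lumi_ifb=16.1):
    """Smallest excluded Br(h -> qD qD): min_i S95_i / sum_modes(sigma * A_i * L).

    sigma_fb_by_mode : dict of SM Higgs production cross sections in fb per mode
    acc_by_mode      : dict of per-set acceptances A_i for each production mode
    s95_limits       : 95% C.L. limits on the signal yield in each selection set
    """
    yields_per_br = np.zeros(len(s95_limits))
    for mode, sigma_fb in sigma_fb_by_mode.items():
        yields_per_br += sigma_fb * np.asarray(acc_by_mode[mode]) * lumi_ifb
    return np.min(np.asarray(s95_limits) / yields_per_br)

# Illustrative placeholders only (two selection sets, three production modes):
sigma = {"ggF": 4.9e4, "VBF": 3.8e3, "ttH": 5.1e2}            # fb, indicative
acc   = {"ggF": [1e-4, 3e-5], "VBF": [2e-5, 8e-6], "ttH": [9e-4, 4e-4]}
print(br95_exotic(sigma, acc, s95_limits=[20., 10.]))
```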
It is worth mentioning here that we do not expect the EJ search to be optimal for Higgs decays into dark quarks, as it originally targets two quarks and two dark quarks, while the Higgs decay would only give two dark quarks at parton level. However, since we are keeping the ρ_D →π_D π_D channel open, and since there is additional radiation from the initial state gluons and from the decay portals themselves, we do still obtain acceptances in the 10^-4 range, which can suffice to obtain an exclusion given that, with 16.1 fb^-1, O (10^6 ) Higgs bosons would be produced at the 13 TeV LHC via gluon fusion. From the figure we can see that the dependence on c τ_π_D is non-trivial, with a maximum around 10 mm, while the dependence on m_π_D is quite flat, except for the heavier masses of 20-30 GeV: those dark pions obtain a reduced boost from the Higgs compared to lighter ones. It is intriguing to see that, owing to the additional radiation, the ttH production has a higher acceptance, about an order of magnitude larger than gluon fusion, and about a factor of five larger than associated production with a vector boson. We note that vector boson fusion has the lowest acceptance, and this is due to the fact that the additional radiation in VBF goes in the forward direction, while the EJ analysis focuses on central jets. We stress that while we only show the gluon portal decay benchmark here, all the other portal decay models show an analogous behaviour. The picture changes slightly once the production cross sections for each mechanism are considered, as shown in figure <ref>. Here we multiply the maximum acceptance by the production cross section for each mechanism and by the total luminosity of the emerging jet search (16.1 fb^-1). Hence the y-axis directly displays the expected number of events for each production mode. We have added the overall number of events obtained by summing over all possible production modes as a dashed-brown line. We now see that, owing to the larger cross section of the GF mechanism (two orders of magnitude over ttH, factors of 15-25 over the modes involving gauge bosons), the overall number of events, A σ L, is larger by an order of magnitude compared to the other modes. We also see that the impact of including all production modes instead of only gluon fusion amounts to about 20 % of the total number of events. In view of our findings, we will from now on focus only on the dependence of our results on the lifetime for a m_π_D = 5 GeV mass, and we will also include all Higgs production modes in our study. We now study the sensitivity for the different decay portals considered in reference <cit.>. To that end we present in figure <ref> the efficiencies as a function of the dark pion lifetime, for m_π_D = 5 GeV. In order to obtain reliable estimates for these acceptances, we have simulated 10^7 Monte Carlo events per parameter space point. Of the possible decay portals, we then find that the sensitivity is larger (and similar) for the dark photon and gluon portals, and lower (and similar) for the vector and Higgs portals. In what follows we will then select the gluon (G) and Higgs (H) decay portals, as they correspond to the extreme values of the efficiencies among the four portal scenarios considered. These two decay portals correspond to the following operators π_D G^μνG̃_μν (G), π_D H^† H (H) . 
In the gluon portal one expects a shower enriched with SM hadrons produced from the emitted gluons, while in the Higgs portal the decays would follow a Yukawa-like structure, and one can expect a shower enriched with heavy flavour quarks. Using the acceptance from figure <ref>, we show in figure <ref> the excluded exotic Higgs branching ratio as a function of the lifetime, for a dark pion mass of 5 GeV. The solid line uses the existing dataset from the EJ search (16.1 fb^-1). For comparison we show the ATLAS limit of 0.21, which was obtained with an 8 times larger dataset (139 fb^-1), shown as dashed red. For a fair comparison we rescale our EJ limit to this luminosity (dashed lines), assuming that the uncertainty is dominated by the statistical error, which, given the event counts in the different signal regions and the reported systematic errors, is a good approximation. We also include constraints obtained with CheckMATE2 from prompt <cit.> and long-lived searches <cit.>, shown in dashed grey (green) for the gluon (Higgs) portal. These constraints come from prompt searches including missing energy from ATLAS, more concretely from <cit.>, using 139 fb^-1 of data: as the lifetime of the dark pions becomes large, many of them appear as missing transverse energy. We note that our benchmark choice of m_π_D = 5 GeV corresponds to a challenging phase space due to the softness of the decay products; it is clear that for heavier dark pion masses constraints from other searches would apply. One prominent example would be Zh production, with h decaying through light scalars or pseudoscalars into displaced jets <cit.>, which reports meaningful bounds on the exotic branching fraction for pseudoscalar masses as light as 15 GeV. Finally, we include an estimate of the HL-LHC reach, both for the BSM Higgs branching ratio (taken from <cit.>) and for the statistics-dominated extrapolation of the EJ search. It is worth stressing that the HL-LHC run would not be similar to the current LHC setup in terms of capabilities to deal with long-lived particles, significantly improving in many aspects; hence these limits must be taken with a grain of salt (also the statistical dominance of the study might not be fully justified). From the figure we see that the EJ search can obtain better bounds than the model-independent BSM branching fraction for lifetimes in the 5-60 (7-50) mm range when using the combined production (gluon-fusion production) for the Higgs portal decay benchmark. For the sake of illustration we will now describe the bounds on this model, but the behaviour would be similar for other decay portals. Prompt searches with missing energy can constrain large lifetimes (c τ≳ 400 mm). For clarity we have refrained from showing HL-LHC extrapolations from CheckMATE, but they would only be more sensitive than the BSM Higgs study for lifetimes in the 300-700 mm range, with the exact value depending on the final HL-LHC limit. Nonetheless, the phenomenological picture is similar to the one with the current dataset: for low lifetimes the BSM Higgs limit dominates, in an intermediate regime the EJ reinterpretation takes over, and for longer lifetimes the BSM Higgs searches become more sensitive again, with missing energy searches becoming relevant for the longest lifetimes. As stressed before, exotic Higgs decays are not a target of the EJ analysis, and hence it would be interesting to consider the use of emerging jet taggers in other production modes. 
We leave this option for future work. § CONCLUSIONS AND OUTLOOK In this work we have performed detailed studies focused on the reinterpretation of the CMS emerging jet search. This signature belongs to the class of signatures collectively dubbed “dark showers”, which stem from having a strongly-interacting dark (secluded) sector. In this dark sector new matter (and gauge) fields are added, which are assumed to hadronize, like in the SM strong sector. In particular, emerging jets correspond to the case where the dark sector mesons have macroscopically appreciable decay lengths, which makes these final states also fall in the class of exotic phenomena dubbed “long-lived particles” (LLPs). Our reinterpretation procedure has been validated by carefully following the CMS study. We have obtained good agreement with the published distributions of the and variables, and also reproduced the publicly available efficiencies for the benchmark model employed in the search. We have reproduced the published exclusion limits through two different routes, one by directly employing the CMS published efficiencies and another one by computing the efficiencies ourselves through our own Monte Carlo simulation. Here there is a large uncertainty in the exact parametrization of the tracking efficiency. We have attempted a few different parametrizations, and employed the one that, while possibly over-simplified, can reproduce the published efficiencies (and exclusion limits) with a reasonable accuracy. We would like to stress that while the relevant information of the CMS study was publicly available and clearly explained, getting in contact with the authors of the experimental study was nonetheless needed in order to comprehend a few crucial details. Their response has been instrumental in understanding the details concerning the track efficiency and the impact parameter smearing used in the study. Since it would be desirable that a reinterpretation of an experimental study can be done without this contact (as it can happen that the main authors of a given analysis might not always be part of the collaboration), we also took the opportunity to comment in the text on which aspects required clarification, and which additional material would have helped us to carry out the reinterpretation. Using our validated pipeline, we have focused on the exploration of an SM Higgs boson decaying into two dark quarks (fermions charged solely under the new strong sector, akin to the SM quarks). To that end, we have considered the inclusive production of the Standard Model Higgs from gluon fusion, Higgs-strahlung, vector-boson fusion and associated production with a t t̅ pair, and analyzed four decay benchmark portal models proposed in <cit.>, which are dubbed gluon, dark photon, Higgs and vector portals. We have found that, while the efficiencies for the Higgs production lie in the 10^-3-10^-5 range, owing to the large production cross section we can obtain meaningful bounds in the relevant parameter space, which are competitive with the current exclusion on the undetected Higgs branching ratio of 16 %, set by the ATLAS and CMS collaborations. We have checked, with the help of CheckMATE, that the existing prompt searches can bring meaningful bounds only in the large lifetime regime, c τ_π_D≳ O (100 mm). 
We have also considered the existing HL-LHC extrapolations for the undetected Higgs branching ratio, and compared them with a similar naive extrapolation of the emerging jet search sensitivity (relying only on statistical uncertainties being present). Yet, it is expected that the HL-LHC will have a number of improvements to detect long-lived particles, which could render the final projections better than our naive extrapolations. As a byproduct of our analysis, we have made publicly available our Pythia 8 analysis code in the LLP Recating Repository <cit.>, which can be used to compute the experimental acceptance (and the exclusion limits) with arbitrary BSM models, provided they are implemented in Pythia8. We would like to stress that the exotic Higgs decay exclusion from <cit.> is an indirect bound, based on a global fit to the observed Higgs properties. Hence, if a signal is detected, its characterization would require an independent study. In contrast, if the emerging jet search starts seeing an excess, one can already infer that a new long-lived object is being produced from a Higgs boson decay, information that is crucial for the proper characterization of a putative BSM signal. We end by noting that the EJ requirements of having four hard jets do not precisely target the exotic decays of a SM Higgs boson. In spite of the analysis not being optimal, we see that we can exclude exotic branching ratio of 30 % in the gluon and dark photon decay portals, which can go down to the percent level for HL-LHC. Therefore, it might be worthwhile to explore EJ searches that focus on dark quark decays from a SM Higgs boson (or from a new scalar), which could have higher sensitivity than the model independent search for undetected Higgs branching ratios. §.§ Acknowledgements We would like to thank Juliette Alimena, Nishita Desai, Alberto Escalante del Valle, Simon Knapen, Emmanuel Francois Perez and Pedro Schawaller for useful discussions, and Baibhab Pattnaik for a careful reading of the manuscript. We are indebted to the authors of the CMS emerging jet analysis: Alberto Belloni, Yi-Mu Chen, Sarah Eno and Long Wang for their patience to answer our questions about technical details in their study. JC and JZ are supported by the Generalitat Valenciana (Spain) through the plan GenT program (CIDEGENT/2019/068), by the Spanish Government (Agencia Estatal de Investigación) and ERDF funds from European Commission (MCIN/AEI/10.13039/501100011033, Grant No. PID2020-114473GB-I00). § CMS EMERGING JETS In this Appendix we include additional material from our CMS validation described in  <ref>. The CMS collaboration employs four variables to identify emerging jets in both signal and background regions, which have been defined in section <ref>. For the six signal regions, the specific requirements on each of these variables are shown in Table <ref>. Based on these requirements, CMS further defines signal regions (called “sets” in the CMS paper), where a given EMJ criteria is accompanied by a set of cuts on the jets, requiring either two emerging jets, or one emerging jet plus large missing transverse energy. The event yield excluded in each signal region at the 95 % C.L. by CMS is shown in the rightmost column, S_95. The information from these tables has been included in the companion code uploaded to the LLP Recasting Repository <cit.>. We have also collected there the details on the different tracking efficiency parametrization employed in this work. JHEP
http://arxiv.org/abs/2307.05420v1
20230711163549
Similarity-Based Parameter Transferability in the Quantum Approximate Optimization Algorithm
[ "Alexey Galda", "Eesh Gupta", "Jose Falla", "Xiaoyuan Liu", "Danylo Lykov", "Yuri Alexeev", "Ilya Safro" ]
quant-ph
[ "quant-ph" ]
Similarity-Based Parameter Transferability in the Quantum Approximate Optimization Algorithm ======================================================================================================== The quantum approximate optimization algorithm (QAOA) is one of the most promising candidates for achieving quantum advantage through quantum-enhanced combinatorial optimization. A near-optimal solution to the combinatorial optimization problem is achieved by preparing a quantum state through the optimization of quantum circuit parameters. Optimal QAOA parameter concentration effects for special MaxCut problem instances have been observed, but a rigorous study of the subject is still lacking. In this work we show clustering of optimal QAOA parameters around specific values; consequently, successful transferability of parameters between different QAOA instances can be explained and predicted based on local properties of the graphs, including the type of subgraphs (lightcones) from which graphs are composed as well as the overall degree of nodes in the graph (parity). We apply this approach to several instances of random graphs with a varying number of nodes as well as parity and show that one can use optimal donor graph QAOA parameters as near-optimal parameters for larger acceptor graphs with comparable approximation ratios. This work presents a pathway to identifying classes of combinatorial optimization instances for which variational quantum algorithms such as QAOA can be substantially accelerated. § INTRODUCTION Quantum computing seeks to exploit the quantum mechanical concepts of entanglement and superposition to perform a computation that is significantly faster and more efficient than what can be achieved by using the most powerful supercomputers available today <cit.>. Demonstrating quantum advantage with optimization algorithms <cit.> is poised to have a broad impact on science and humanity by allowing us to solve problems on a global scale, including finance <cit.>, biology <cit.>, and energy <cit.>. Variational quantum algorithms, a class of hybrid quantum-classical algorithms, are considered primary candidates for such tasks and consist of parameterized quantum circuits with parameters updated in classical computation. 
The quantum approximate optimization algorithm (QAOA) <cit.> is a variational algorithm for solving classical combinatorial optimization problems. In the domain of optimization on graphs, it is most often used to solve NP-hard problems such as MaxCut <cit.>, community detection <cit.>, and partitioning <cit.> by mapping them onto a classical spin-glass model (also known as the Ising model) and minimizing the corresponding energy, a task that in itself is NP-hard. In this work we demonstrate two related key elements of optimal QAOA parameter transferability. First, by analyzing the distributions of subgraphs from two QAOA MaxCut instance graphs, one can predict how close the optimized QAOA parameters for one instance are to the optimal QAOA parameters for another. Second, by analyzing the overall parity of both donor-acceptor pairs, one can predict good transferability between those QAOA MaxCut instances. The measure of transferability of optimized parameters between MaxCut QAOA instances on two graphs can be expressed through the value of the approximation ratio, which is defined as the ratio of the energy of the corresponding QAOA circuit, evaluated with the optimized parameters γ, β, divided by the energy of the optimal MaxCut solution for the graph. While the optimal solution is not known in general for relatively small instances (graphs with up to 256 nodes are considered in this paper), it can be found by using classical algorithms, such as the Gurobi solver <cit.>[The Gurobi solver provides classically optimal MaxCut solutions in a competitive speed with known optimization gap. For the purpose of this work, there is no particular reason to choose Gurobi over IPOPT or other similarly performing solvers.]. We first focus our attention on similarity based on the subgraph decomposition of random graphs and show that good transferability of optimized parameters between two graphs is directly determined by the transferability between all possible permutations of pairs of individual subgraphs. The relevant subgraphs of these graphs are defined by the QAOA quantum circuit depth parameter p. In this work we focus on the case p = 1; however, our approach can be extended to larger values of p. Higher values of p lead to an increasing number of subgraphs to be considered, but the general idea of the approach remains the same. This question is beyond the scope of this paper and will be addressed in our future work. We then move to similarity based on graph parity and determine that we can predict good optimal parameter transferability between donor-acceptor graph pairs with similar parities. Here, too, more work remains to be done regarding the structural effects of graphs on optimal parameter transferability. Based on the analysis of the mutual transferability of optimized QAOA parameters between all relevant subgraphs for computing the MaxCut cost function of random graphs, we show good transferability within the classes of odd and even random graphs of arbitrary size. We also show that transferability is poor between the classes of even and odd random graphs, in both directions, based on the poor transferability of the optimized QAOA parameters between the subgraphs of the corresponding graphs. 
When considering the most general case of arbitrary random graphs, we construct the transferability map between all possible subgraphs of such graphs, with an upper limit of node connectivity d_max = 6, and use it to demonstrate that in order to find optimized parameters for a MaxCut QAOA instance on a large 64-, 128-, or 256-node random graph, under specific conditions, one can reuse the optimized parameters from a random graph of a much smaller size, N = 6, with only a ∼1% reduction in the approximation ratio. This paper is structured as follows. In Section <ref> we present the relevant background material on QAOA. In Section <ref> we consider optimized QAOA parameter transferability properties between all possible subgraphs of random graphs of degree up to d_max = 6. We then extend the consideration to parameter transferability using graph parity as a metric, and we demonstrate the power of the proposed approach by transferring optimal QAOA parameters in many instances of donor-acceptor graph pairs of differing sizes and parity. We find that one can effectively transfer optimal parameters from smaller donor graphs to larger acceptor graphs, using similarities based on subgraph decomposition and parity as indicators of good transferability. In Section <ref> we conclude with a summary of our results and an outlook on future advances with our approach. § QAOA The quantum approximate optimization algorithm is a hybrid quantum-classical algorithm that combines a parameterized quantum evolution with a classical outer-loop optimizer to approximately solve binary optimization problems <cit.>. QAOA consists of p layers (also known as the circuit depth) of pairs of alternating operators, with each additional layer increasing the quality of the solution, assuming perfect noiseless execution of the corresponding quantum circuit. With quantum error correction not currently supported by modern quantum processors, practical implementations of QAOA are limited to p ≤ 3 because noise and the limited coherence of quantum devices impose strict limitations on the circuit depth <cit.>. Motivated by the practical relevance of results, we focus on the case p = 1 in this paper. §.§ QAOA Background Consider a combinatorial problem defined on a space of binary strings of length N that has m clauses. Each clause is a constraint satisfied by some assignment of the bit string. The objective function can be written as C(z) = ∑_α=1^m C_α(z), where z = z_1z_2⋯ z_N is the bit string and C_α(z) = 1 if z satisfies the clause α, and 0 otherwise. QAOA maps the combinatorial optimization problem onto a 2^N-dimensional Hilbert space with computational basis vectors |z⟩ and encodes C(z) as an operator C diagonal in the computational basis. At each call to the quantum computer, a trial state is prepared by applying a sequence of alternating quantum operators |β⃗, γ⃗⟩_p := U_B(β_p)U_C(γ_p)… U_B(β_1)U_C(γ_1)|s⟩ , where U_C(γ) = e^-iγ C is the phase operator; U_B(β)=e^-iβ B is the mixing operator, with B defined as the sum of all single-bit σ^x operators, B = ∑_j=1^N σ_j^x; and |s⟩ is some easy-to-prepare initial state, usually taken to be the uniform superposition product state. The parameterized quantum circuit (<ref>) is called the QAOA ansatz. We refer to the number of alternating operator pairs p as the QAOA depth. The selected parameters β⃗,γ⃗ are said to define a schedule, analogous to a similar choice in quantum annealing. 
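For concreteness, the depth p = 1 version of this ansatz can be evaluated exactly with a dense statevector for small instances. The following minimal Python sketch (our own illustration using networkx and numpy, not the QTensor-based workflow employed in this work) prepares |β, γ⟩_1 for the MaxCut objective introduced in the next subsection and returns the corresponding expectation value:

```python
import numpy as np
import networkx as nx

def qaoa_p1_maxcut_energy(graph: nx.Graph, gamma: float, beta: float) -> float:
    """Exact <C>_1 for the depth p = 1 QAOA ansatz applied to MaxCut."""
    nodes = list(graph.nodes())
    pos = {v: i for i, v in enumerate(nodes)}          # node -> qubit index
    n, dim = len(nodes), 2 ** len(nodes)

    # Diagonal of the cost operator: number of cut edges for each basis state.
    cost = np.zeros(dim)
    for u, v in graph.edges():
        bit_u = (np.arange(dim) >> pos[u]) & 1
        bit_v = (np.arange(dim) >> pos[v]) & 1
        cost += (bit_u != bit_v).astype(float)

    # |s> = uniform superposition, followed by the phase operator e^{-i gamma C}.
    state = np.full(dim, 1.0 / np.sqrt(dim), dtype=complex)
    state *= np.exp(-1j * gamma * cost)

    # Mixing operator e^{-i beta B} = prod_j e^{-i beta X_j}, applied qubit by qubit.
    c, s = np.cos(beta), -1j * np.sin(beta)
    idx = np.arange(dim)
    for q in range(n):
        lo = idx[((idx >> q) & 1) == 0]                # basis states with qubit q = 0
        hi = lo | (1 << q)                             # partner states with qubit q = 1
        a, b = state[lo].copy(), state[hi].copy()
        state[lo], state[hi] = c * a + s * b, s * a + c * b

    return float(np.real(np.sum(np.abs(state) ** 2 * cost)))

# Example: a small 3-regular graph and a coarse grid search over (gamma, beta).
g = nx.random_regular_graph(3, 8, seed=1)
grid = np.linspace(0.0, np.pi, 40)
best = max((qaoa_p1_maxcut_energy(g, ga, be), ga, be) for ga in grid for be in grid)
print(best)
```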
Preparation of the state (<ref>) is followed by a measurement in the computational basis. The output of repeated state preparation and measurement may be used by a classical outer-loop algorithm to select the schedule β⃗,γ⃗. We consider optimizing the expectation value of the objective function ⟨ C⟩_p = ⟨β⃗, γ⃗|_pC|β⃗, γ⃗⟩_p , as originally proposed in <cit.>. The output of the overall procedure is the best bit string z found for the given combinatorial optimization problem. Figure <ref> presents a schematic pipeline of the QAOA algorithm. We emphasize that the task of finding good QAOA parameters is challenging in general, for example because of encountering barren plateaus <cit.>. Acceleration of the optimal parameters search for a given QAOA depth p is the focus of many approaches aimed at demonstrating quantum advantage. Examples include warm- and multistart optimization <cit.>, problem decomposition <cit.>, instance structure analysis <cit.>, and parameter learning <cit.>. §.§ MaxCut For studying the transferability of optimized QAOA parameters, we consider the MaxCut combinatorial optimization problem. Given an unweighted undirected simple graph G = (V,E), the goal of the MaxCut problem is to find a partition of the graph's vertices into two complementary sets such that the number of edges between the two sets is maximized. In order to encode the problem in the QAOA setting, the input is a graph with |V| = N vertices and |E| = m edges, and the goal is to find a bit string z that maximizes C = ∑_jk∈ E C_jk, where C_jk = 1/2(-σ_j^z σ_k^z + 1). It has been shown in <cit.> that on a 3-regular graph, QAOA with p=1 produces a solution with an approximation ratio of at least 0.6924. §.§ QAOA Simulator and Classical MaxCut Solver Calculating the approximation ratio for a particular MaxCut problem instance requires the optimal solution of the combinatorial optimization problem. This problem is known to be NP-hard, and classical solvers require exponential time to converge. For our experiments, we use the Gurobi solver <cit.> with the default configuration parameters, running the solver until it converges to the optimal solution. For our QAOA simulations, we use QTensor <cit.>, a large-scale quantum circuit simulator with step-dependent parallelization. QTensor simulates circuits based on a tensor network approach, and as such, it can provide an efficient approximation to certain classes of quantum states <cit.>. § PARAMETER TRANSFERABILITY Solving a QAOA instance calls for two types of executions of quantum circuits: iterative optimization of the QAOA parameters and the final sampling from the output state prepared with those parameters. While the latter is known to be impossible to simulate efficiently for large enough instances using classical hardware instead of a quantum processor <cit.>, the iterative energy calculation for the QAOA circuit during the classical optimization loop can be efficiently performed by using tensor network simulators for instances of a wide range of sizes <cit.>, as described in the preceding section. This is achieved by implementing considerable simplifications in how the expectation value of the problem Hamiltonian is calculated by employing a mathematical reformulation based on the notion of the reverse causal cone introduced in the seminal QAOA paper <cit.>. 
Moreover, in some instances, the entire search of the optimal parameters for a particular QAOA instance can be circumvented by reusing the optimized parameters from a different “related” instance, for example for which the optimal parameters are concentrated in the same region. Optimizing QAOA parameters for a relatively small graph, called the donor, and using them to prepare the QAOA state that maximizes the expectation value ⟨ C⟩_p for the same problem on a larger graph, called the acceptor, is what we define as successful optimal parameter transferability, or just transferability of parameters, for brevity. The transferred parameters can be used either directly without change, as implemented in this paper, or as a “warm start” for further optimization. In either case, the high computational cost of optimizing the QAOA parameters, which grows rapidly as the QAOA depth p and the problem size are increased, can be significantly reduced. This approach presents a new direction for dramatically reducing the overall runtime of QAOA. Optimal QAOA parameter concentration effects have been reported for several special cases, mainly focusing on random 3-regular graphs <cit.>. Brandao et al. <cit.> observed that the optimized QAOA parameters for the MaxCut problem obtained for a 3-regular graph are also nearly optimal for all other 3-regular graphs. In particular, the authors noted that in the limit of large N, where N is the number of nodes, the fraction of tree graphs asymptotically approaches 1. We note that, for example, in the sparse Erdös–Rényi graphs, the trees are observed in short-distance neighborhoods with very high probability <cit.>. As a result, in this limit, the objective function is the same for all 3-regular graphs, up to order 1/N. The central question of this manuscript is under what conditions the optimized QAOA parameters for one graph also maximize the QAOA objective function for another graph. To answer that question, we study transferability between subgraphs of a graph, since the QAOA objective function is fully determined by the corresponding subgraphs of the instance graph, as well as transferability between graphs of similar parities, in order to determine structural effects of graphs on effective transferability. §.§ Subgraph Transferability Analysis It was shown in the seminal QAOA paper <cit.> that the expectation value of the QAOA objective function, ⟨ C⟩_p, can be evaluated as a sum over contributions from subgraphs of the original graph, provided its degree is bounded and the diameter is larger than 2p (otherwise, the subgraphs cover the entire graph itself). The contributing subgraphs can be constructed by iterating over all edges of the original graph and selecting only the nodes that are p edges away from the edge. Through this process, any graph can be deconstructed into a set of subgraphs for a given p, and only those subgraphs contribute to ⟨ C⟩_p, as also discussed in Section <ref>. We begin by analyzing the case of MaxCut instances on 3-regular random graphs for QAOA circuit depth p = 1, which have three possible subgraphs <cit.>. Figure <ref> (top row) shows the landscapes of energy contributions from these subgraphs, evaluated for a range of γ and β parameters. We can see that all maxima are located in the approximate vicinity of each other. As a result, the parameters optimized for any of the three graphs will also be near-optimal for the other two. 
Because any random 3-regular graph can be decomposed into these three subgraphs, for QAOA with p = 1, this guarantees that optimized QAOA parameters can be successfully transferred between any two 3-regular random graphs, which is in full agreement with <cit.>. The same effect is observed for subgraphs of 4-regular; see Figure <ref> (middle row). The optimized parameters are mutually transferable between all four possible subgraphs of 4-regular graphs. Notice, however, that the locations of exactly half of all maxima for the subgraphs of 4-regular graphs do not match with those for 3-regular graphs. This means that one cannot expect good transferability of optimized parameters across MaxCut QAOA instances for 3- and 4-regular random graphs if these optimal parameters are to be transferred directly. It has been recently shown in <cit.> that gamma parameters can be rescaled in order to generalize between different random d-regular graphs. Focusing now on all five possible subgraphs of 5-regular graphs, Figure <ref> (bottom row), we notice that, again, good parameter transferability is expected between all instances of 5-regular random graphs. Moreover, the locations of the maxima match well with those for 3-regular graphs, indicating good transferability across 3- and 5-regular random graphs. We discuss parameter concentration for instances of random graphs in a later section; similar discussions can be found in <cit.> and <cit.> in the context of 3-regular graphs. To further investigate transferability among regular graphs, we evaluate the subgraph transferability map between all possible subgraphs of d-regular graphs, d ≤ 8; see Figure <ref>. The top panel shows the colormap of parameter transferability coefficients between all possible pairs of subgraphs of d-regular graphs (d ≤ 8, 35 subgraphs total). Each axis is split into groups of d subgraphs of d-regular graphs, and the color values in each cell represent the transferability coefficient T(D, A) computed for the corresponding directional pair of subgraphs D, A, defined as follows. For every subgraph G, we performed numerical optimization with 200 steps, repeated 20 times with random initial points. This process results in 20 sets of optimal parameters of the form (γ_G_i, β_G_i) pairs, the best of which we will denote as (γ_G*, β_G*). Doing so for the donor subgraph D and the acceptor subgraph A, the transferability coefficient T(D,A) averages over the QAOA energy contribution of each (γ_D_i, β_D_i) on the acceptor subgraph A as follows: T(D, A) = 1/20∑_i = 1^20A(γ_D_i,β_D_i)/A(γ_A*,β_A*), where A(γ, β) is the QAOA MaxCut energy of subgraph A as a function of parameters (γ, β). Instead of averaging over the 20 optimal parameters of the donor subgraph, we could have considered only the contribution of the donor's best optimal parameters (γ_D*, β_D*) in the above equation. For most donors, however, these best parameters were universal and hence yielded high transferability to most acceptors. However, in practice, because of a lack of iterations or multistarts, we may converge to non-universal optimal parameters, resulting in the donor's poor transferability with some acceptors. The likelihood of converging to these non-universal optima for random graphs is discussed in Supplement <ref>. Universal and nonuniversal optimal parameters are discussed in detail in Section <ref>. This inconsistency was discussed for 3-regular and 4-regular subgraphs earlier in this section. 
For example, half of the local optima of 3-regular subgraphs have good transferability to 4-regular subgraphs, while the other half yields poor transferability, as shown in Figure <ref>. Thus, to reflect practical considerations and avoid such inconsistency, we average over the contributions of the 20 optimal parameters of the donor subgraph in Equation (<ref>). It is worth noting here that this averaging over 20 optimal parameters can result in poor transferability, as seen for donor subgraph #0 to acceptor subgraphs #2, #9, #20, and #35. For these cases, there is a considerable subset of the donor's optimal parameters that lead to poor transferability. All considered subgraphs are shown in the bottom panel of Figure <ref>. Note that parameter transferability is a directional property between (sub)graphs, and good transferability from (sub)graph D to (sub)graph A does not guarantee good transferability from A to D. This general fact can be easily understood by considering two graphs with commensurate energy landscapes, for which every energy maximum of graph D falls onto an energy maximum of graph A, but some of the energy maxima of graph A do not coincide with those of graph D. The regular pattern of alternating clusters of high- and low-transferability coefficients in Figure <ref> illustrates that the parameter transferability effect extends from 3-regular graphs to the entire family of odd-regular graphs, as well as to even-regular graphs, with poor transferability between the two classes. For example, the established result for 3-regular graphs is reflected at the intersection of columns and rows with the label “(3)” for both donor and acceptor subgraphs. The fact that all cells in the 3×3 block in Figure <ref>, corresponding to parameter transfer between subgraphs of 3-regular graphs, have high values, representing high mutual transferability, gives a good indication of optimal QAOA parameter transferability between arbitrary 3-regular graphs <cit.>. §.§ General Random Graph Transferability Having considered optimal MaxCut QAOA parameter transferability between random regular graphs, we now focus on general random graphs. Subgraphs of an arbitrary random graph differ from subgraphs of random regular graphs in that the two nodes connected by the central edge can have a different number of connected edges, making the set of subgraphs of general random graphs much more diverse. The upper panel of Figure <ref> shows the transferability map between all possible subgraphs of random graphs with node degrees d ≤ 6, a total of 56 subgraphs, presented in the lower panel. The transferability map can serve as a lookup table for determining whether optimized QAOA parameters are transferable between any two graphs. Figure <ref> reveals another important fact about parameter transferability between subgraphs of general random graphs. Subgraphs labeled as (i, j), where i and j represent the degrees of the two central nodes of the subgraph, are in general transferable to any other subgraph (k, l), provided that the degrees {i, j, k, l} are either all odd or all even. This result is a generalization of the transferability result for odd- and even-regular graphs described above. Figure <ref>, however, shows that a number of pairs of subgraphs with mixed degrees (not only even or odd) also transfer well to other mixed-degree subgraphs, for example, subgraph #20 (3, 4) → subgraph #34 (4, 5). 
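As an illustration of how the entries of such a map can be produced, the sketch below computes the transferability coefficient T(D, A) defined above for one donor-acceptor pair, reusing the p = 1 energy function from the earlier sketch and a standard multistart local optimizer from scipy; the number of restarts and the optimizer settings are illustrative and are not those used to produce the figures.

```python
import numpy as np
import networkx as nx
from scipy.optimize import minimize

# qaoa_p1_maxcut_energy(graph, gamma, beta) is assumed to be defined as in the
# earlier sketch (exact statevector evaluation of <C>_1 for MaxCut).

def optimize_qaoa_p1(graph, n_starts=20, seed=0):
    """Multistart maximization of <C>_1; returns the list of local optima found."""
    rng = np.random.default_rng(seed)
    optima = []
    for _ in range(n_starts):
        x0 = rng.uniform(0.0, np.pi, size=2)
        res = minimize(lambda x: -qaoa_p1_maxcut_energy(graph, x[0], x[1]),
                       x0, method="Nelder-Mead")
        optima.append((-res.fun, res.x[0], res.x[1]))
    return optima

def transferability(donor, acceptor, n_starts=20):
    """T(D, A): acceptor energy averaged over the donor's optima, normalized by
    the acceptor's own best energy found with the same multistart budget."""
    donor_opt = optimize_qaoa_p1(donor, n_starts)
    acceptor_best = max(e for e, _, _ in optimize_qaoa_p1(acceptor, n_starts))
    ratios = [qaoa_p1_maxcut_energy(acceptor, g, b) / acceptor_best
              for _, g, b in donor_opt]
    return float(np.mean(ratios))

print(transferability(nx.random_regular_graph(3, 8, seed=1),
                      nx.random_regular_graph(5, 12, seed=2)))
```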
The map of subgraph transferability provides a unique tool for identifying smaller donor subgraphs, the optimized QAOA parameters for which are also nearly optimal parameters for the original graph. The map can also be used to define the likelihood of parameter transferability between two graphs based on their subgraphs. As was the case for random regular graphs (see Figure <ref>), we see clustering of optimal parameters for subgraphs of random graphs in Figure <ref>. §.§ Parameter Transferability Examples We will now demonstrate that the parameter transferability map from Figure <ref> can be used to find small-N donor graphs from which the optimized QAOA parameters can be successfully transferred to a MaxCut QAOA instance on a much larger acceptor graph. Initially, we consider three 256-node acceptor graphs to be solved and three 6-node donor graphs; see Figure <ref>. Table <ref> contains the details of the donor and acceptor graphs, including the total number of edges, their optimized QAOA energies, the energy of the optimal classical solution, and the approximation ratio. Graphs 1 and 4 consist exclusively of odd-degree nodes, graphs 2 and 5 contain roughly the same amount of both odd- and even-degree nodes, and graphs 3 and 6 contain exclusively even-degree nodes. The optimized QAOA parameters for the donor and acceptor graphs were found by performing numerical optimization with 20 restarts, and 200 iterations each. Additionally, we use a greedy ordering algorithm and an RMSprop optimizer, with a learning rate of 0.002. Table <ref> shows the results of the corresponding transfer of optimized parameters from the donor graphs ##1–3 to the acceptor graphs ##4–6, correspondingly. The approximation ratios obtained as a result of the parameter transfer in all three cases show only a 1–2% decrease compared with those obtained by optimizing the QAOA parameters for the corresponding acceptor graphs directly. These examples demonstrate the power of the approach introduced in this paper. To extend our analysis of parameter transferability between QAOA instances, we perform transferability of optimal parameters between large sets of small donor graphs to a fixed, larger acceptor graph. In particular, we transfer optimal parameters from donors ranging from 6 to 20 nodes to 64-, 128-, and 256-node acceptor graphs. Figure <ref> shows the approximation ratio as we increase the number of donor graph nodes. These donor graphs were generated starting with graphs of exclusively odd-degree nodes and sequentially increasing the number of even-degree nodes until graphs of exclusively even-degree nodes were obtained. For each increasing number of node in a graph,  100 donor graphs were generated and each of their 20 sets of optimal parameters (20 multistarts) were transferred to the acceptor graph. We see that there are a few cases for which we achieve an approximation ratio that is comparable to the native approximation ratio for each of the acceptor graphs. Most notably, we can achieve good transferability of optimal parameters to larger (i.e., 256-node) acceptor graphs without having to increase the size of our donor graph. Each row of Figure <ref> corresponds to an increasing acceptor graph size, while each column corresponds to the parity of the acceptor graph (a formal definition and study of parity follow in the next section), with a transition from odd to even parity in graphs going from left to right. 
For the fully odd and fully even acceptor graphs, we notice a bimodal distribution in approximation ratios. Remarkably, for even acceptor cases, the bimodal distribution has one mode centered around the mean (white dot) and one above the mean. This points to the fact that, regardless of donor graph parity, one can achieve better parameter transferability when transferring optimal parameters to acceptor graphs with even parity. We see this transition from odd to even acceptor graphs in the way the bimodal distribution shifts, there being a monomodal distribution for the cases where the acceptor graphs are neither even nor odd. The reason for this increased likeliness of good transferability to even acceptor graphs will be explored in future work. For now, we turn our focus to parity in graphs as an alternative metric for determining good transferability between donor-acceptor graph pairs, one that does not involve subgraph decomposition (and parameter transferability between individual subgraphs). §.§ Parity and Transferability As mentioned previously, the transferability maps of regular and random subgraphs suggest that the parity of graph pairs may affect their transferability. Here, we define parity of a graph G = (V,E) to be the proportion of nodes of G with an even degree: π_G = n_even/|V|, where n_even is the number of even nodes in graph G. For this and upcoming sections, we focus on transferability between 20-node random graphs. That is, we perform optimal parameter transferability between 20-node donor and 20-node acceptor graphs. For every possible number of even-degree nodes (0,2,4,…, 20), we generated 10 graphs with distinct degree sequences, resulting in a total of 110 20-node random graphs, with maximum node degree restricted to 6. The computed transferability coefficients among each graph pair, sorted by their parity, are shown in Figure <ref>. Each block in the heatmap represents the average transferability coefficient of 100 graph pairs constructed from 10 distinct donor graphs and 10 distinct acceptor graphs. We can see that even graphs, those with π_G = 0.8-1, and odd graphs, those with π_G = 0-0.2, transfer well among themselves. However, the transferability between even donors and odd acceptors, as well as between odd donors and even acceptors, is poor. This heatmap also suggests that the mutual transferability of a donor graph is not necessary for its good transferability with other random graphs, where mutual transferability of a graph G is a measure of how well the subgraphs of G transfer among themselves. Formally, it is defined as MT(G) = ∑_d ∈{G}∑_a≠ d ∈{G} n_G(d) n_G(a)T(d, a)/T_G, where {G} is the set of distinct subgraphs of graph G, n_G(i) is the number of edges in G having subgraph i, and T_G = ∑_d ∈{G}∑_a≠ d ∈{G} n_G(d) · n_G(a) is the total number of subgraph pairs consisting of distinct subgraphs within G. Graphs with low mutual transferability are those whose subgraphs transfer poorly among themselves. This is true for graphs with a nearly equal number of odd-parity and even-parity subgraphs since subgraph pairs of different parity report poor transferability coefficients, as shown in Figs. <ref> and <ref>. In our case, such graphs are likely to be mixed-parity graphs, in other words, those with π_G = 0.4-0.6. However, the results in Figure <ref> show that these graphs have good transferability to nearly all random graphs in the data set. This trend can be explained by analyzing the energy landscapes of subgraphs. 
Most even- and odd-regular subgraphs have 4 maxima, two of which are universal for all regular subgraphs, as discussed in Section <ref>; the same trends were also observed in random subgraphs. Since the energy landscape of a random graph is the sum of the energy landscapes of its subgraphs, most 20-node random graphs share the same points, centers 1,2 in Figure <ref>, as their local or global optima, as shown in Figure <ref>. On the other hand, the remaining two nonuniversal optimal parameters of regular subgraphs are shared only across regular subgraphs of similar parity. This property is also emergent in random graphs. In Figure <ref>, only odd random graphs share centers 3,4 as their local optima, while only even graphs share centers 5,6 as their local optima. Mixed-parity graphs, however, contain a nearly equal number of odd and even subgraphs. Since nonuniversal maxima of even subgraphs are minima for odd subgraphs and vice versa, these nonuniversal local optima blur on the energy landscapes of mixed-parity graphs. As a result, these graphs' landscapes contain only universal maxima, as shown in the fourth energy landscape in Figure <ref>. With only universal parameters as their optimal parameters, mixed-parity graphs should indeed transfer well to all random graphs, as shown in the middle columns of Figure <ref>. The distribution of optimal parameters also explains poor transferability across random graphs of different parity. In Figure <ref>, the nonuniversal optimal parameters that maximize the MaxCut energy of odd random graphs, centers 3,4, also minimize that of even random graphs. Similarly, the nonuniversal optimal parameters that maximize the MaxCut energy of even random graphs, centers 5,6, also minimize that of odd random graphs. Consequently, transferring nonuniversal optimal parameters of even random graphs to odd random graphs and vice versa would result in poor approximation ratios, as evident in Figure <ref>. Furthermore, the above-average transferability for all graph pairs can be attributed to universal parameters: as shown in Figure <ref>, all graph pairs have a true similarity, or transferability coefficient, greater than 0.60. Going back to Equation (<ref>), good transferability depends on whether the donor's optimal parameters (γ_D_i, β_D_i) optimize the acceptor graph as well. If most of the donor's optimal parameters are universal, in other words, are in the vicinity of centers 1,2 in Figure <ref>, then the transferability coefficient will be high, regardless of the acceptor graph. In fact, Figure <ref> in the Supplementary Material shows that, on average, all graphs reported at least half of their 20 optimal parameters as universal. As a result, they transfer well to other random graphs. §.§ Predicting transferability using subgraphs We have used the transferability coefficient to test whether an acceptor shares the same optimal parameters as its donor. In practice, this quantity is unknown because it requires knowledge of the acceptor's maximum energy. In earlier examples, we used the parity of graphs to explain transferability among random graphs, but the parity of a graph is just one property emerging from its subgraphs.
Using subgraphs directly, we devise a subgraph similarity metric SS to predict the transferability ratio between a donor graph D = (V_D, E_D) and an acceptor graph A = (V_A, E_A) as follows: SS(D, A) = ∑_d ∈{D}∑_a ∈{A} n_D(d) n_A(a) T(d, a)/|E_D|·|E_A|, where {G} is the set of distinct p = 1 subgraphs of G, n_G(g) is the number of edges in G that share the subgraph g, and |E_D|·|E_A| is the total number of subgraph pairs across graphs D and A. Hence, this similarity metric states that the transferability coefficient of a donor and acceptor is the average transferability coefficient of donor subgraph-acceptor subgraph pairs. In Figure <ref> we compare this similarity metric with the true similarity or transferability coefficient. While this result does reveal a linear correlation between the two quantities, the metric clearly underestimates the transferability coefficient by 0.05 units on average. In fact, Figure <ref> shows that graph pairs with mixed-parity graphs as donors report the highest inconsistencies. This poor performance results from their constituent subgraphs. As discussed in Section <ref>, mixed-parity graphs consist of a nearly equal number of odd and even subgraphs. When optimized, these donor subgraphs may have nonuniversal optimal parameters. When transferred to an acceptor subgraph, the resulting transferability coefficient may be either poor or good, depending on the parity of that acceptor subgraph. While these nonuniversal optima do affect the subgraph similarity metric SS, they do not affect true similarity. As shown in Figure <ref>, mixed-parity graphs' optimal parameters are universal. Thus, they transfer well to any random graph, regardless of its parity. Therefore, the subgraph similarity metric underestimates true similarity because it fails to capture that most optimal parameters of mixed-parity graphs are universal. §.§ Predicting transferability using parity Another approach to predicting transferability or similarity between two graphs is using their parity. In Section <ref> we observed that two graphs of similar parity have a high transferability ratio. If this correlation were ideal, then results shown in Figure <ref> would resemble those in Figure <ref>. The parity similarity metric PS corresponding to the latter figure is easy to compute: PS(D, A) = 1 - 0.29·|Parity(D) - Parity(A)|. Thus, this metric penalizes pairs consisting of graphs of different parity. Note that the lowest value of this metric is ≈ 0.71, which is consistent with results from Figure <ref>. Figure <ref> illustrates the performance of this new similarity metric. The plot contains discrete columns because it is not possible to generate 20-node graphs with an arbitrary number of even-degree nodes. In order to ensure that the sum of a degree sequence is even, the associated graphs vary only in even-degree nodes in increments of two, resulting in parity of 0.0, 0.1, 0.2, … 1.0. This discretization is also reflected in the similarity metric. To test our similarity metric for the data set shown in <ref>, we compare our metric with the approximation ratio. Figure <ref> shows that as the parity similarity between donor-acceptor pairs approaches 1 (i.e., the donor and acceptor graphs have the same parity), we achieve a higher approximation ratio. Notably, we can obtain a good approximation ratio even when the parity similarity does not predict it. This can be attributed to the fact that we are exploiting only one structural feature from our graphs.
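Both similarity metrics above are straightforward to compute once the p = 1 subgraphs of the two graphs have been enumerated and a table T of subgraph-to-subgraph transferability coefficients is available. The sketch below assumes that decomposition and the table T are precomputed; all function and variable names are ours, and the example graphs are arbitrary.

import networkx as nx

def parity(graph):
    """Fraction of even-degree nodes, pi_G = n_even / |V|."""
    degrees = [d for _, d in graph.degree()]
    return sum(1 for d in degrees if d % 2 == 0) / len(degrees)

def subgraph_similarity(donor_counts, acceptor_counts, T, n_edges_d, n_edges_a):
    """SS(D, A): average transferability coefficient over all donor-subgraph /
    acceptor-subgraph pairs, weighted by the number of edges sharing each subgraph.
    donor_counts / acceptor_counts map a subgraph label to n_D(d) / n_A(a);
    T[(d, a)] holds the precomputed pairwise transferability coefficients."""
    total = sum(nd * na * T[(d, a)]
                for d, nd in donor_counts.items()
                for a, na in acceptor_counts.items())
    return total / (n_edges_d * n_edges_a)

def parity_similarity(donor, acceptor):
    """PS(D, A) = 1 - 0.29 * |Parity(D) - Parity(A)|, as defined above."""
    return 1.0 - 0.29 * abs(parity(donor) - parity(acceptor))

# Example of the parity-based metric for two arbitrary random graphs.
D = nx.gnm_random_graph(20, 40, seed=0)
A = nx.gnm_random_graph(20, 45, seed=1)
print(parity(D), parity(A), parity_similarity(D, A))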
These results indicate that one can use a parity approach to determine good transferability between donor-acceptor pairs. Furthermore, one can generate a parity metric that caters to specific graphs (please refer to Section <ref>). § CONCLUSIONS AND OUTLOOK Finding optimal QAOA parameters is a critical step in solving combinatorial optimization problems by using the QAOA approach. Several existing techniques to accelerate the parameter search are based on advanced optimization and machine learning strategies. In most works, however, various types of global optimizers are employed. Such a straightforward approach is highly inefficient for exploration because of the complex energy landscapes for hard optimization instances. An alternative effective technique presented in this paper is based on two intuitive observations: (1) the energy landscapes of small subgraphs exhibit “well-defined” areas of extrema that are not anticipated to be an obstacle for optimization solvers (see Figure <ref>), and (2) structurally different subgraphs may have similar energy landscapes and optimal parameters. A combination of these observations is important because, in the QAOA approach, the cost is calculated by summing the contributions at the subgraph level, where the size of a subgraph depends on the circuit depth p. With this in mind, the overarching idea of our approach is solving the QAOA parameterization problem for large graphs by optimizing parameterization for much smaller graphs and reusing it. We started with studying the transferability of parameters between all subgraphs of random graphs with a maximum degree of 8. Good transferability of parameters was observed among even-regular and odd-regular subgraphs. At the same time, poor transferability was detected between even- and odd-regular pairs of graphs in both directions, as shown in Figs. <ref> and <ref>. This experimentally confirms the proposed approach. A remarkable demonstration of random graphs that generalizes the proposed approach is the transferability of the parameters from 6-node random graphs (at the subgraph level) to 64-node random graphs, as shown in Figure <ref>. The approximation ratio loss of only 1–2% was observed in all three cases. Furthermore, we demonstrated that one need not increase the size of the donor graph to achieve high transferability, even for acceptor graphs with 256 nodes. Following the subgraph decomposition approach, we showed that one can determine good transferability between donor-acceptor graph pairs by exploiting their similarity based on parity. We see a good correlation between subgraph similarity and parity similarity. In the future, we wish to address the exploitation of graph structure to determine good donor candidates, since subgraph similarities involve overhead calculations of QAOA energies for each pair of donor-acceptor subgraphs. One may notice that we studied parameter transferability only for p = 1, where the subgraphs are small and transferability is straightforward. However, our preliminary work suggests that this technique will also work for larger p, which will require advanced subgraph exploration algorithms and will be addressed in our following work. In particular, we wish to explore the idea of generating a large database of donor graphs and, together with a graph-embedding technique, obtain optimal QAOA parameters for transferability. 
We hope that by training a good graph-embedding model, we will be able to apply our technique to various sets of graphs and extend our approach to larger depths. A machine learning approach has been used to determine optimal QAOA parameters <cit.>, but a study of machine learning for donor graph determination is still an open question. Another future direction is to determine whether the effects of parity of a graph hold for p>1. In particular, we found that the parity of a graph affects the distribution of optimal parameters, as shown in Figs. <ref> and <ref>. It remains to be seen whether parameters concentrate for p>1 and, if so, how parity affects their distribution. Analysis of these trends will be critical for the applicability of PS for p>1. This work was enabled by the very fast and efficient tensor network simulator QTensor developed at Argonne National Laboratory <cit.>. Unlike state vector simulators, QTensor can perform energy calculations for most instances with p ≤ 3, d ≤ 6 and graphs with N ∼1,000 nodes very quickly, usually within seconds. For this work we computed QAOA energy for 64-node graphs with d ≤ 5 at p = 1, a calculation that took a fraction of a second per each execution on a personal computer. With state vector simulators, however, even such calculations would not have been possible because of the prohibitive memory requirements for storing the state vector. As a result of this work, finding optimized parameters for some QAOA instances will become quick and efficient, removing this major bottleneck in the QAOA approach and potentially removing the optimization step altogether in some cases, eliminating the variational nature of QAOA. Moreover, our approach will allow finding parameters quickly and efficiently for very large graphs for which it will not be possible to use simulators or other techniques. Our method has important implications for implementing QAOA on relatively slow quantum devices, such as neutral atoms and trapped-ion hardware, for which finding optimal parameters may take a prohibitively long time. Thus, quantum devices will be used only to sample from the output QAOA state to get the final solution to the combinatorial optimization problem. Our work will ultimately bring QAOA one step closer to the realization of quantum advantage. § ACKNOWLEDGMENTS This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions and/or findings expressed are those of the author and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government. A.G., D.L. X.L., I.S., and Y.A. are supported in part by funding from the Defense Advanced Research Projects Agency. This work used in part the resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. The authors thank Ruslan Shaydulin for insightful discussions. unsrtnat § SUPPLEMENTAL MATERIAL §.§ Predicting transferability using parity – extension Parity of graph correlates with the approximation ratios of a graph on the 6 centers shown in Figure <ref>. This trend can also be used to predict similarity, given two assumptions. First, the 20 computed optima are distributed among the 6 centers. This can be verified for 20-node graphs in Figure <ref>. 
Second, we can predict the location of these centers from the following observations from Figure <ref> and Figure <ref>: * All graphs have 4 local optima, two of which are universal. * For perfectly odd and perfectly even graphs, the computed optima are distributed equally among the four local optima. * The fraction of optimal parameters that are universal increases, and the fraction that are nonuniversal decreases, as the parity of a graph becomes mixed. * Graphs that have c_3 as their optima also have c_4 as their optima. The same is true for centers c_5 and c_6. Algorithm <ref> combines these observations to predict the distribution of local optima for a graph among the 6 centers: where AR(G, c_i) is the approximation ratio of graph G at center c_i and n_i,j is the number of local optima distributed equally among centers c_i, c_j. Given these assumptions, for a given donor D and acceptor A, we first compute OptimaDistribution(D), that is, the distribution of optimal parameters of the donor graph. Let n_i be the number of optima of the donor graph D occurring at center c_i. Then the subgraph + parity similarity metric is SPS(D, A) = 1/20 ∑_i=1^6 n_i AR(A, c_i). §.§ Comparing metrics To see the correlation between the subgraph similarity metric and the parity similarity metric, we compare them for the transferability studies performed on the large set of 6–20-node donor graphs and fixed 64-, 128-, and 256-node acceptor graphs. Figure <ref> shows a correlation between subgraph similarity and parity similarity. Furthermore, we see that for either a high subgraph similarity or a high parity similarity we obtain a good approximation ratio. In addition, for the case of 100^2 20-node graph pairs, we do a statistical comparison of the three metrics we propose. These results are given in Table <ref>. While SPS may not have the best mean squared error or the best Pearson correlation coefficient, it best captures the relationship between parity and transferability coefficients. This is evident from the heatmap in Figure <ref> closely resembling the heatmap of Figure <ref>. However, the former assumes that the latter is symmetric about x=50%, which results in inconsistencies between SPS and true similarity.
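Given the quantities named above, SPS itself reduces to a small weighted average. The sketch below takes the predicted optima counts n_i and the acceptor's approximation ratios at the six centers as inputs, since Algorithm <ref> and the center coordinates are not reproduced here; the example numbers are illustrative only.

def sps(donor_optima_counts, acceptor_ar_at_centers):
    """SPS(D, A) = (1/20) * sum_i n_i * AR(A, c_i).
    donor_optima_counts[i] is the predicted number of the donor's 20 optima that
    fall on center c_i; acceptor_ar_at_centers[i] is AR(A, c_i)."""
    assert sum(donor_optima_counts) == 20
    return sum(n * ar for n, ar in zip(donor_optima_counts, acceptor_ar_at_centers)) / 20.0

# Example: a perfectly odd donor (optima split equally over c_1..c_4, none on c_5, c_6)
# transferred to an acceptor with known approximation ratios at the six centers.
n_i = [5, 5, 5, 5, 0, 0]
ar_at_centers = [0.92, 0.92, 0.90, 0.90, 0.55, 0.55]  # illustrative numbers only
print(sps(n_i, ar_at_centers))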
http://arxiv.org/abs/2307.04723v2
20230710173354
Quark/Gluon Discrimination and Top Tagging with Dual Attention Transformer
[ "Minxuan He", "Daohan Wang" ]
hep-ph
[ "hep-ph" ]
e1 e-mail: [email protected] e2 e-mail: [email protected] Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, PR China University of Chinese Academy of Sciences, Beijing 100049, PR China Department of Physics, Konkuk University, Seoul 05029, Republic of Korea Quark/Gluon Discrimination and Top Tagging with Dual Attention Transformer Minxuan He e1,addr1,addr2 Daohan Wang e2,addr3 Received: date / Accepted: date ========================================================================== Jet tagging is a crucial classification task in high energy physics. Recently, the performance of jet tagging has been significantly improved by the application of deep learning techniques. In this work, we propose the Particle Dual Attention Transformer (P-DAT) for jet tagging, a new transformer architecture which captures both global information and local information simultaneously. Based on the point cloud representation, we introduce the Channel Attention module to the point cloud transformer and incorporate both the pairwise particle interactions and the pairwise jet feature interactions in the attention mechanism. We demonstrate the effectiveness of the P-DAT architecture in classic top tagging and quark-gluon discrimination tasks, achieving competitive performance compared to other benchmark strategies. § INTRODUCTION In high-energy physics experiments, tagging jets, which are collimated sprays of particles produced from high-energy collisions, is a crucial task for discovering new physics beyond the Standard Model. Jet tagging involves distinguishing boosted heavy particle jets from QCD-initiated quark/gluon jets. Since jets initiated by different particles exhibit different characteristics, two key issues arise: how to represent a jet and how to analyze its representation. Conventionally, jet tagging has been performed using hand-crafted jet substructure variables based on physics motivation. Nevertheless, these methods can often fall short in capturing intricate patterns and correlations present in the raw data. Over the past decade, deep learning approaches have been extensively adopted to enhance the jet tagging performance<cit.>. Various jet representations have been proposed, including image-based representation using Convolutional Neural Networks (CNN)<cit.>, sequence-based representation with Recurrent Neural Networks<cit.>, tree-based representation with Recursive Neural Networks<cit.>, and graph-based representation with Graph Neural Networks (GNN)<cit.>. More recently, one representation approach that has gained significant attention is to view the set of constituent particles inside a jet as points in a point cloud. Point clouds are used to represent a set of objects in an unordered manner, described in a defined space, and are commonly utilized in various fields such as self-driving vehicles, robotics, and augmented reality. By adopting this approach, each jet can be interpreted as a particle cloud, which treats a jet as a permutation-invariant set of particles, allowing us to extract meaningful information with deep learning methods. Based on the particle cloud representation, several deep learning architectures have been proposed, including the Deep Set Framework<cit.>, ABCNet<cit.>, LorentzNet<cit.> and ParticleNet<cit.>. The Deep Set Framework provides a comprehensive explanation of how to parametrize permutation invariant functions for inputs with variable lengths, taking into consideration both infrared and collinear safety.
Furthermore, it offers valuable insights into the nature of the features learned by neural networks. ParticleNet adapts the Dynamic Graph CNN architecture<cit.>, while ABCNet takes advantage of attention mechanisms to enhance the local feature extraction. LorentzNet focuses more on incorporating inductive biases derived from physics principles into the architecture design, utilizing an efficient Minkowski dot product attention mechanism. All of these architectures realize substantial performance improvements on top tagging and quark/gluon discrimination benchmarks. Over the past few years, attention mechanisms have emerged as a powerful tool for capturing intricate patterns in sequential and spatial data. The Transformer architecture<cit.>, which leverages attention mechanisms, has been highly successful in natural language processing and computer vision tasks such as image recognition. Notably, the Vision Transformer (ViT)<cit.>, initially designed for computer vision tasks, has demonstrated state-of-the-art performance on various image classification benchmarks. However, when dealing with point cloud representations, which inherently lack a specific order, modifications to the original Transformer structure are required to establish a self-attention operation that is invariant to input permutations. To address these issues, a recent approach called the Point Cloud Transformer (PCT)<cit.> was proposed, which entails passing input points through a feature extractor to create a high-dimensional representation of particle features. The transformed data is then passed through a self-attention module that introduces attention coefficients for each pair of particles. To evaluate PCT's effectiveness in the context of a high-energy physics task, specifically jet tagging, PCT was compared with other benchmark implementations using three different public datasets. PCT shares a similar concept with ABCNet's attention mechanism, employing a self-attention layer to capture the importance of relationships between all particles in the dataset. Another notable approach is the Particle Transformer<cit.>, which incorporates pairwise particle interactions within the attention mechanism, obtains higher tagging performance than a plain Transformer, and surpasses the previous state-of-the-art, ParticleNet, by a large margin. In recent studies, the Dual Attention Vision Transformer (DaViT)<cit.> has exhibited promising results for image classification. The DaViT introduces the dual attention mechanism, comprising spatial window attention and channel group attention, enabling the effective capture of both global and local features in images. These two self-attentions are demonstrated to complement each other. In this paper, we introduce the Channel Attention module to the Point Cloud Transformer and incorporate the pairwise particle interaction and the pairwise jet feature interaction to build a new network structure, called P-DAT. On the one hand, the Channel Attention module can grasp comprehensive spatial interactions and representations by taking into account all spatial locations while computing attention scores between channels. In this way, the P-DAT can combine both the local information and global information of the jet representation for jet tagging. On the other hand, the pairwise interaction features designed from physics principles can modify the dot-product attention weights, thus increasing the expressiveness of the attention mechanism.
We evaluate P-DAT on top tagging and quark/gluon discrimination tasks and compare its performance against other baseline models. Our analysis demonstrates the effectiveness of P-DAT in jet tagging and highlights its potential for future applications in high-energy physics experiments. This article is organized as follows. In Section <ref>, we introduce the Particle Dual Attention Transformer for jet tagging and describe the key features of the model architecture. We also provide details of the training and validation process. In Section <ref>, we present and discuss the numerical results obtained for the top tagging task and the quark/gluon discrimination task, respectively. Finally, our conclusions are presented in Section <ref>. § MODEL ARCHITECTURE The focus of this paper is to introduce the Particle Dual Attention Transformer (P-DAT), which serves as a new benchmark approach for jet tagging. Based on the point cloud representation, we regard each constituent particle as a point in the η-ϕ space and the whole jet as a point cloud. The whole model architecture is presented in Figure <ref>. The P-DAT architecture is composed of 5 main building blocks, namely the feature extractor, the particle self-attention layers, the channel self-attention layers, the class attention layers and the MLP. In order to process a jet of P particles, the P-DAT requires three inputs: the jet dataset, the particle interaction matrix and the jet feature interaction matrix derived from the kinematic information of each particle inside the jet. First of all, the feature extractor is employed to transform the input jet dataset from P× 10 to a higher dimensional representation P× N. As illustrated in Fig. <ref>(left), the feature extractor block contains two parts. The first part incorporates an EdgeConv operation<cit.> followed by 3 two-dimensional convolutional (Conv2D) layers and an average pooling operation across all neighbors of each particle. The EdgeConv operation adopts a k-nearest neighbors approach with k=20 to define a neighborhood for each particle inside the jet, based on Δ R = √(Δη^2 + Δϕ^2) in the η-ϕ space, from which the local information of each particle is extracted. To ensure the permutation invariance among particles, all convolutional layers are implemented with stride and kernel size of 1 and are followed by a batch normalization operation and GeLU activation function. The second part of the feature extractor consists of a 3-layer MLP with (128,128,128) nodes per layer and GELU nonlinearity to handle the negative inputs. BN and LN operations are used for normalization between layers. Finally, the outputs from these two parts are concatenated to obtain the final output. This approach enables the extraction of input particle embeddings through both linear projection and local neighborhood mapping. Furthermore, we introduce a particle interaction matrix and a channel interaction matrix, both of which are designed based on physics principles and incorporated into the self-attention module. For the particle interaction matrix, we use a 3-layer 2D convolution with (32,16,8) channels with stride and kernel size of 1 to map the particle interaction matrix to a new embedding P× P × N_h, where N_h is the number of heads in the particle self-attention module, which will be explained later.
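For concreteness, a minimal PyTorch sketch of the EdgeConv-based part of the feature extractor described above is given below. The value k = 20, the mean aggregation over neighbors, and the kernel-size-1 Conv2D + BatchNorm + GELU pattern follow the text; the layer widths, the exact edge-feature construction, and all names are our assumptions rather than the authors' implementation.

import torch
import torch.nn as nn

class EdgeConvBlock(nn.Module):
    """EdgeConv with k-nearest neighbors in the eta-phi plane and average
    pooling over the neighbors (a sketch, not the paper's exact code)."""
    def __init__(self, in_dim, hidden_dims=(64, 64, 64), k=20):
        super().__init__()
        self.k = k
        layers, d = [], 2 * in_dim  # edge feature: (x_i, x_j - x_i)
        for h in hidden_dims:
            layers += [nn.Conv2d(d, h, kernel_size=1, stride=1),
                       nn.BatchNorm2d(h), nn.GELU()]
            d = h
        self.mlp = nn.Sequential(*layers)

    def forward(self, x, eta_phi):
        # x: (B, C, P) particle features, eta_phi: (B, 2, P) coordinates
        B, C, P = x.shape
        diff = eta_phi.unsqueeze(3) - eta_phi.unsqueeze(2)        # (B, 2, P, P)
        dist2 = (diff ** 2).sum(dim=1)                            # Delta R^2, (B, P, P)
        idx = dist2.topk(self.k, dim=-1, largest=False).indices   # (B, P, k); includes self
        idx = idx.unsqueeze(1).expand(-1, C, -1, -1)              # (B, C, P, k)
        nbrs = torch.gather(x.unsqueeze(2).expand(-1, -1, P, -1), 3, idx)
        center = x.unsqueeze(3).expand(-1, -1, -1, self.k)
        edge = torch.cat([center, nbrs - center], dim=1)          # (B, 2C, P, k)
        return self.mlp(edge).mean(dim=3)                         # (B, C', P)

# Example: a batch of 2 zero-padded jets with 100 particles and 10 features each.
x = torch.randn(2, 10, 100)
eta_phi = torch.randn(2, 2, 100)
print(EdgeConvBlock(in_dim=10)(x, eta_phi).shape)  # torch.Size([2, 64, 100])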
As for the channel interaction matrix, an upsampling operation and a 3-layer 2D convolution are applied to map the channel interaction matrix to a higher dimensional representation N× N, with N the input particle embedding dimension. The second primary building block is the particle self-attention block, which aims to establish the relationship between all particles within the jet using an attention mechanism. As presented in Fig. <ref>, three matrices, which are called query (Q), key (K), and value (V), are built from linear transformations of the original inputs. Attention weights are computed by matrix multiplication between Q and K, representing the matching between them. Similar to the Particle Transformer work<cit.>, we incorporate the particle interaction matrix U_1 as a bias term to enhance the scaled dot-product attention. This incorporation of particle interaction features, designed from physics principles, modifies the dot-product attention weights, thereby enhancing the expressiveness of the attention mechanism. The same U_1 is shared across the two particle attention blocks. After normalization, these attention weights reflect the weighted importance between each pair of particles. The self-attention is then obtained by the weighted elements of V, which result from multiplying the attention weights and the value matrix. It is important to note that P represents the number of particles, and N denotes the total number of features. The attention weights are computed as: 𝒜(𝐐, 𝐊, 𝐕) = Concat(head_1, …, head_N_h), where head_i = Attention(𝐐_i, 𝐊_i, 𝐕_i) = softmax[𝐐_i(𝐊_i)^T/√(C_h) + 𝐔_1]𝐕_i, where 𝐐_i=𝐗_i𝐖_i^Q, 𝐊_i=𝐗_i𝐖_i^K, and 𝐕_i=𝐗_i𝐖_i^V are ℝ^P × C_h dimensional features with N_h heads, 𝐗_i denotes the i-th head of the input feature and 𝐖_i denotes the projection weights of the i-th head for 𝐐, 𝐊, 𝐕, and N = C_h * N_h. The particle attention block incorporates a LayerNorm (LN) layer both before and after the multi-head attention module. A two-layer MLP, with LN preceding each linear layer and GELU nonlinearity in between, follows the multi-head attention module. Residual connections are applied after the multi-head attention module and the two-layer MLP. In our study, we set N_h=8 and N=64. The third main building block is the channel self-attention block, as shown in Fig. <ref>. Unlike the particle self-attention block, this block applies attention mechanisms to the jet features, enabling interactions among the channels. To capture global information in the particle dimension, we set the number of heads to 1, where each transposed token represents global information. Consequently, the channel tokens interact with global information across all channels. This global channel attention mechanism is defined as follows: 𝒜(𝐐_i, 𝐊_i, 𝐕_i) = softmax[𝐐_i^T𝐊_i/√(C)+𝐔_2]𝐕_i^T, where 𝐐_i, 𝐊_i, 𝐕_i ∈ℝ^C × P are channel-wise jet-level queries, keys, and values. Note that although we transpose the tokens in the channel attention block, the projection layers 𝐖 and the scaling factor 1/√(C) are computed along the channel dimension, rather than the particle dimension. Similar to the particle self-attention block, we incorporate the channel interaction matrix U_2 as a bias term to enhance the scaled dot-product attention. This incorporation of jet channel interaction features, designed based on physics principles, modifies the dot-product attention weights, thereby enhancing the expressiveness of the attention mechanism. The same U_2 matrix is shared across the two channel attention blocks.
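The two attention variants above differ only in which axis the softmax runs over and in which embedded interaction matrix is added to the logits. A minimal PyTorch sketch of both is shown below; N = 64, N_h = 8 and the single-head channel attention follow the text, while the exact tensor layout, and in particular our reading of the channel attention as producing a C x C attention matrix matching the N x N embedding of U_2, are assumptions on our part.

import torch
import torch.nn as nn

class ParticleAttention(nn.Module):
    """Multi-head particle attention with the interaction matrix U1 added as a
    bias to the scaled dot-product logits (a sketch, not the authors' code)."""
    def __init__(self, dim=64, num_heads=8):
        super().__init__()
        self.h, self.dh = num_heads, dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, u1):
        # x: (B, P, N), u1: (B, N_h, P, P) embedded particle interaction matrix
        B, P, N = x.shape
        q, k, v = self.qkv(x).view(B, P, 3, self.h, self.dh).permute(2, 0, 3, 1, 4)
        attn = (q @ k.transpose(-2, -1) / self.dh ** 0.5 + u1).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, P, N)
        return self.proj(out)

class ChannelAttention(nn.Module):
    """Single-head channel attention: tokens are transposed so that the
    attention matrix is C x C and the bias is the embedded matrix U2."""
    def __init__(self, dim=64):
        super().__init__()
        self.dim = dim
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, u2):
        # x: (B, P, N), u2: (B, N, N) embedded channel interaction matrix
        q, k, v = self.qkv(x).chunk(3, dim=-1)                                  # each (B, P, N)
        attn = (q.transpose(1, 2) @ k / self.dim ** 0.5 + u2).softmax(dim=-1)   # (B, N, N)
        out = (attn @ v.transpose(1, 2)).transpose(1, 2)                        # back to (B, P, N)
        return self.proj(out)

# Shape check with B = 2 jets, P = 100 particles, N = 64 features, N_h = 8 heads.
x = torch.randn(2, 100, 64)
print(ParticleAttention()(x, torch.zeros(2, 8, 100, 100)).shape)  # (2, 100, 64)
print(ChannelAttention()(x, torch.zeros(2, 64, 64)).shape)        # (2, 100, 64)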
After normalization, the attention weights indicate the weighted importance of each pair of jet features. The self-attention mechanism produces the weighted elements of V, obtained by multiplying the attention weights and the value matrix. Additionally, the channel attention block includes a LayerNorm (LN) layer before and after the attention module, followed by a two-layer MLP. Each linear layer is preceded by an LN layer, and a GELU nonlinearity is applied between them. Residual connections are added after the channel attention module and the two-layer MLP. The fourth main building block is the class attention block, which differs from the particle self-attention block by computing attention between a global class token and all particles using the standard Multi-Head Attention (MHA) mechanism. This class attention mechanism is defined as follows: Q = W_q x_class + b_q, K = W_k z + b_k, V = W_v z + b_v, z = [x_class, x^L] where z = [x_class, x^L] represents the concatenation of the class token and the particle embedding after the last particle attention block, denoted as x^L. In the first class attention block, the class token is obtained by performing max pooling on the output of the second channel attention block across all particles. In the second class attention block, the class token is obtained by performing average pooling on the output of the second channel attention block across all particles. Furthermore, the class attention block includes a LayerNorm (LN) layer before and after the attention module, followed by a two-layer MLP. Each linear layer is preceded by an LN layer, and a GELU nonlinearity is applied between them. Residual connections are added after the class attention module and the two-layer MLP. The last main building block is a 3-layer MLP with (448, 64, 2) nodes, as shown in Fig. <ref>(right). First, the outputs of the particle attention blocks and channel attention blocks are concatenated, followed by an average pooling operation across all particles. Subsequently, the outputs of the class attention blocks are concatenated. Finally, these two sets of outputs are concatenated and fed into the MLP. In addition, a batch normalization operation and the GeLU activation function are applied to the second layer, together with a dropout rate of 0.5. The last layer employs a softmax operation to produce the final classification scores. In summary, the P-DAT is composed of one feature extractor, two particle attention blocks, two channel attention blocks, two class attention blocks and one MLP. The feature extractor's output serves as the input for the first particle attention block. Subsequently, we alternate between the particle attention block and the channel attention block to capture both local fine-grained and global features. A dropout rate of 0.1 is applied to all particle attention blocks and channel attention blocks. As demonstrated in Ref.<cit.>, these two blocks complement each other: the channel attention provides a global receptive field in the particle dimension, enabling the extraction of high-level global jet representations by dynamically fusing features across global channel tokens. On the other hand, the particle attention refines local representations by facilitating fine-grained interactions among all particles, thereby aiding in the modeling of global information in the channel attention.
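A compact sketch of the class attention block described above, in which a single class token (obtained by max or average pooling) attends to the concatenation z = [x_class, x^L] through standard multi-head attention, could look as follows. The residual and LayerNorm placement is simplified relative to the text, and all names are ours.

import torch
import torch.nn as nn

class ClassAttentionBlock(nn.Module):
    """Class token attends to [class token, particle embeddings] (a sketch)."""
    def __init__(self, dim=64, num_heads=8):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)
        self.mha = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, x_cls, x_particles):
        # x_cls: (B, 1, N) class token, x_particles: (B, P, N) particle embeddings
        z = self.ln1(torch.cat([x_cls, x_particles], dim=1))  # z = [x_class, x^L]
        attn_out, _ = self.mha(self.ln1(x_cls), z, z)         # query is the class token only
        x_cls = x_cls + attn_out                              # residual connection
        return x_cls + self.mlp(self.ln2(x_cls))

# First class attention block: class token from max pooling over particles.
x_particles = torch.randn(2, 100, 64)
x_cls = x_particles.max(dim=1, keepdim=True).values
print(ClassAttentionBlock()(x_cls, x_particles).shape)  # (2, 1, 64)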
After the second channel attention block, two class attention blocks which take the max pooling and average pooling on the output of the second channel attention block as class token are applied to compute the attention between a global class token and all particles using the standard Multi-Head Attention (MHA) mechanism. Finally, the two sets of outputs are concatenated and fed into the MLP and the resulting representation is normalized using a softmax operation. The model architecture is implemented in the PYTORCH deep learning framework with the CUDA platform. The training and evaluation steps are accelerated using a NVIDIA GeForce RTX 3070 GPU for acceleration. We adopt the binary cross-entropy as the loss function. To optimize the model parameters, we employ the AdamW optimizer<cit.> with an initial learning rate of 0.0004, which is determined based on the gradients calculated on a mini-batch of 64 training examples. In order to address the memory issue caused by huge input data, we implemented a strategy of continuously importing and deleting data during the training process. The network is trained up to 100 epochs, with the learning rate decreasing by a factor of 2 every 10 epochs to a minimal of 10^-6. In addition, we employ the early-stopping technique to prevent over-fitting. § JET CLASSIFICATION The P-DAT architecture is designed to process input data consisting of particles inside the jets. To ensure consistency and facilitate meaningful comparisons, we first sorted the particles inside the jets by transverse momentum and a maximum of 100 particles per jet are employed. The input jet is truncated if the particle number inside the jet is more than 100 and the input jet is zero-padded up to the 100 if fewer than 100 particles are present. This selection of 100 particles is sufficient to cover the vast majority of jets contained within all datasets, ensuring comprehensive coverage. Each jet is characterized by the 4-momentum of its constituent particles. Based on this information, we reconstructed 10 features for each particle. Additionally, for the quark-gluon dataset, we included the Particle Identification (PID) information as the 11-th feature. These features are as follows: {log E , log |p_x| , log |p_y| , log |p_z| , log p_T , p_T/p_TJ , E/E_J , Δη Δϕ , Δ R , PID}. For the pairwise particle interaction matrix, based on Refs.<cit.>, we calculated the following 5 features for any pair of particles a and b with four-momentum p_a and p_b as the sum of all the particles' four-momentum inside the particle a and particle b, respectively: Δ R = √((y_a - y_b)^2 + (ϕ_a - ϕ_b)^2), k_T = min(p_T,a, p_T,b) Δ, z = min(p_T,a, p_T,b) / (p_T,a + p_T,b), m^2 = (E_a+E_b)^2 - 𝐩_a+𝐩_b^2, Δ p_T = p_T,a-p_T,b where y_i represents the rapidity, ϕ_i denotes the azimuthal angle, p_T,i = (p_x, i^2+p_y, i^2)^1/2 denotes the transverse momentum, and 𝐩_i=(p_x,i, p_y,i, p_z,i) represents the momentum 3-vector and · is the norm, for i=a, b. As mentioned in Ref.<cit.>, we take the logarithm and use (lnΔ, ln k_T, ln z, ln m^2, ln p_T) as the interaction features for each particle pair to avoid the long tail problem. Apart from the 5 interaction features, we add one more feature for the Quark-Gluon benchmark dataset, defined as δ_i,j, where i and j are the PID of the particles a and b. For the pairwise jet feature interaction matrix, we selected 10 typical jet variables. Besides, for the quark-gluon dataset, we incorporated the 11th feature based on the Particle Identification (PID) information. 
The list of all jet variables used in this study is presented below. The interaction matrix is constructed based on a straightforward yet effective ratio relationship, as illustrated in Table <ref>. { E , p_x , p_y , p_z , p_T , ∑ p_Tf , ∑ E_f , Δη, Δϕ , Δ R , PID}. To provide a clearer explanation of the concept of the jet feature pairwise interaction matrix, we will now present a detailed description. The first 4 variables represent the four-momentum of the input jet. Specifically, p_T denotes the transverse momentum of the input jet, while ∑ p_Tf and ∑ E_f represent the sum of the transverse momentum fractions and the energy fractions of all the constituent particles inside the input jet, respectively. Additionally, Δη, Δϕ and Δ R correspond to the transverse-momentum-weighted sums of the Δη, Δϕ, Δ R of all the constituent particles inside the input jet, respectively. Here Δη, Δϕ and Δ R refer to the angular distances between each constituent particle and the input jet. Furthermore, PID represents the particle identification associated with the specific particle type whose summed transverse momentum accounts for the largest proportion of the entire jet transverse momentum. The entire jet feature pairwise interaction matrix is defined as a symmetric block matrix with diagonal ones. For convenience, we refer to {E , p_x , p_y , p_z , p_T , ∑ p_Tf , ∑ E_f} as variable set 1 and {Δη, Δϕ , Δ R} as variable set 2. We build the pairwise interactions within variable set 1 and within variable set 2, respectively. First, we employ a ratio relationship to define the interaction between E and { p_x , p_y , p_z , p_T} and the interaction between p_T and { p_x , p_y}, with no interaction between orthogonal components. Additionally, we establish that the interaction between ∑ E_f and E is 1, while no interactions exist between ∑ E_f and any other variables, except for E and PID. Similarly, we define the interaction between ∑ p_Tf and p_T as 1, with no interactions between ∑ p_Tf and any other variables, except for p_T and PID. Second, we apply a ratio relationship to define the interaction between Δ R and {Δη, Δϕ}, while no interaction is specified between {Δη and Δϕ}. Finally, we determine the interactions between PID and all other variables as the ratio of the sum of the corresponding variables of the particles associated with the PID to the variable of the jet. §.§ Quark/Gluon Discrimination The Quark-Gluon benchmark dataset<cit.> was generated with Pythia8 without detector simulation. It comprises quark-initiated samples qq→Z→νν+(u,d,s) as signal and gluon-initiated data qq→Z→νν+g as background. Jet clustering was performed using the anti-kT algorithm with R = 0.4. Only jets with transverse momentum p_T ∈ [500, 550] GeV and rapidity |y| < 1.7 were selected for further analysis. Each particle within the dataset comprises not only the four-momentum, but also the particle identification information, which classifies the particle type as electron, muon, charged hadron, neutral hadron, or photon. The official dataset comprises 1.6M training events, 200k validation events, and 200k test events. In this paper, we focused on the leading 100 constituents within each jet, utilizing their four-momenta and particle identification information for training purposes. For jets with fewer than 100 constituents, zero-padding was applied. For each particle, a set of 11 input features was used, based solely on the four-momenta and identification information of the particles clustered within the jet.
The accuracy, area under the curve (AUC), and background rejection results are presented in Table <ref>. §.§ Top Tagging The benchmark dataset<cit.> used for top tagging comprises hadronic tops as the signal and QCD di-jets as the background. Pythia8<cit.> was employed for event generation, while Delphes<cit.> was utilized for detector simulation. All the particle-flow constituents were clustered into jets using the anti-kT algorithm<cit.> with a radius parameter of R = 0.8. Only jets with transverse momentum p_T ∈ [550, 650] GeV and rapidity |y| < 2 were included in the analysis. The official dataset contains 1.2M training events, 400k validation events, and 400k test events. Only the energy-momentum 4-vectors of the particles inside the jets are provided. In this paper, the leading 100 constituent four-momenta of each jet were utilized for training purposes. For jets with fewer than 100 constituents, zero-padding was applied. For each particle, a set of 10 input features based solely on the four-momenta of the particles clustered inside the jet was utilized. The accuracy, area under the curve (AUC), and background rejection results can be found in Table <ref>. § CONCLUSION This study applies the Particle Dual Attention Transformer as an innovative approach for jet tagging. Specifically, the P-DAT architecture incorporates the Channel Attention module into the Point Cloud Transformer, allowing it to capture jet-level global information and particle-level local information simultaneously. In addition, we introduce the particle pairwise interactions and the jet feature pairwise interactions. This technique not only enables the extraction of semantic affinities among the particles through a self-attention mechanism and the semantic affinities among the jet features through a channel-attention mechanism, but also augments the self-attention and channel-attention by combining the physics-motivated pairwise interactions with the machine-learned dot-product attention. We evaluate the P-DAT architecture on the classic top tagging task and the quark-gluon discrimination task and achieve competitive results compared to other benchmark strategies. Moreover, we solved the memory usage problem by importing and deleting data during training. However, the computational time problem arising from the use of the full pairwise interaction matrix remains unresolved, which could be an interesting direction for future research. This work is funded by the National Research Foundation of Korea, Grant No. NRF-2022R1A2C1007583.
http://arxiv.org/abs/2307.04668v2
20230710161133
Quantifying the Echo Chamber Effect: An Embedding Distance-based Approach
[ "Faisal Alatawi", "Paras Sheth", "Huan Liu" ]
cs.SI
[ "cs.SI", "cs.AI", "cs.LG" ]
Quantifying the Echo Chamber Effect: An Embedding Distance-based Approach Faisal Alatawi, Paras Sheth, Huan Liu Arizona State University {faalataw,psheth5,huanliu}@asu.edu August 12, 2023 ========================================================================================================== The rise of social media platforms has facilitated the formation of echo chambers, which are online spaces where users predominantly encounter viewpoints that reinforce their existing beliefs while excluding dissenting perspectives. This phenomenon significantly hinders information dissemination across communities and fuels societal polarization. Therefore, it is crucial to develop methods for quantifying echo chambers. In this paper, we present the Echo Chamber Score (ECS), a novel metric that assesses the cohesion and separation of user communities by measuring distances between users in the embedding space. In contrast to existing approaches, ECS is able to function without labels for user ideologies and makes no assumptions about the structure of the interaction graph. To facilitate measuring distances between users, we propose EchoGAE, a self-supervised graph autoencoder-based user embedding model that leverages users' posts and the interaction graph to embed them in a manner that reflects their ideological similarity. To assess the effectiveness of ECS, we use a Twitter dataset consisting of four topics - two polarizing and two non-polarizing. Our results showcase ECS's effectiveness as a tool for quantifying echo chambers and shedding light on the dynamics of online discourse. Echo Chamber, Polarization, Social Media, Ideology Detection, User Representation, Graph Auto-Encoder § INTRODUCTION In the age of digital communication, social media platforms have revolutionized the way we disseminate and consume information. Nevertheless, this evolution has brought about notable challenges, particularly the emergence of echo chambers and polarization <cit.>. These phenomena are often characterized by high levels of controversy between members of different groups and homogeneity among members of the same group <cit.>. This reinforces pre-existing beliefs <cit.>, discourages critical thinking <cit.>, promotes the spread of misinformation <cit.>, and leads to societal divisions. Hence, it is crucial to devise methods for measuring the extent and impact of echo chambers on social media. By quantifying them, we can better understand these phenomena and, consequently, devise strategies to mitigate echo chamber effects and foster more balanced and nuanced discussions. Ultimately, this could contribute to a better informed, open-minded, and empathetic society. Such efforts are particularly crucial in today's world, where topics such as politics, health, economics, and environmental issues, which are susceptible to echo chambers <cit.>, have far-reaching implications for society. Echo chambers are contingent on two dynamics: the interaction among users and the individual ideological leanings of these users. Numerous measures and metrics have been developed to leverage these dynamics, either separately or in conjunction. One such method, is to leverage the interactions graph to compute graph-specific metrics such as modularity <cit.>, or resort to other techniques like random walkers <cit.>. However, utilizing the graph introduces a difficulty, as a graph may exhibit modularity without necessarily being polarized or containing an echo chamber <cit.>. 
An alternate approach involves assessing the ideological disparity between users and their adjacent nodes within the graph, investigating correlations between a user's ideology and that of their neighbors <cit.>, or observing ideological deviations from the center of an opinion scale after deploying opinion-spreading models <cit.>. These methodologies, although insightful, are fraught with challenges. Labeling users to ascertain their ideologies or opinions is a laborious task that is susceptible to errors. Similarly, semi-supervised methods that depend on weak labels also present their own unique set of complications. In response to these issues, we introduce the Echo Chamber Score (ECS), a metric that captures the essence of the echo chamber concept by focusing on the dynamic interactions both within and across different user communities. The crux of our approach is to gauge the similarity of users with their respective communities (i.e., cohesion) and across different communities (i.e., separation). Here, an interaction graph can be characterized as having an echo chamber-like structure if it exhibits a low average distance between users of the same community (i.e., high cohesion) and a high average distance between users of different communities (i.e., high separation). This strategy of using the distance allows us to bypass reliance on potentially incidental graph structures and eliminates the need to split the graph into two separate communities, an action that erroneously assumes inherent polarization. Further, our method uses similarity in the embedding space as a proxy for ideological distance, thereby circumventing the arduous and error-prone task of detecting individual users' ideologies. To facilitate the measurement of ideological distance, we propose EchoGAE, a self-supervised Graph Auto-Encoder <cit.> (GAE) based user embedding model. EchoGAE is designed to capture the ideological similarities among users through their interactions and shared posts, operating on two core principles: homophily <cit.>, where individuals associate and interact with those similar to themselves, and linguistic homophily <cit.>, the tendency of socially connected users to use language in similar ways. EchoGAE leverages homophilic interactions such as retweets, regarded as endorsements of similar ideologies <cit.>, along with the content of user posts. Both serve as inputs to capture and map these ideological similarities. The model architecture comprises an encoder that positions similar nodes closely together in the embedding space, and a decoder that uses the user embeddings to reconstruct the graph structure in a self-supervised manner. Additionally, it utilizes Sentence-BERT <cit.>, a BERT-based language model, to embed tweets, thus reflecting their semantic similarities. By uniquely combining the interaction graph structure and linguistic information from user posts, EchoGAE generates representations that accurately reflect ideological similarities, establishing it as a robust tool for measuring the effects of echo chambers and polarization. In this research, we evaluate the ability of the Echo Chamber Score (ECS) to measure echo chamber effects within homophilic social interaction networks. Our experiments are based on real-life Twitter datasets related to four topics: two polarizing and two non-polarizing.
Our findings confirm that the ECS metric accurately identifies polarized interaction graphs and quantifies the echo chamber effect in a manner consistent with existing state-of-the-art methods. Furthermore, ECS proves successful in determining which communities within the interaction graph are more polarized, demonstrating its unique ability to rank communities based on their polarization. We also verify that EchoGAE's user embedding effectively reflects ideological distances between users, showcasing its capacity to detect user ideologies. To promote reproducibility and foster further development in this field, we make our datasets and code available to the public[https://github.com/faalatawi/echo-chamber-scorehttps://github.com/faalatawi/echo-chamber-score]. § RELATED WORK Echo chambers and polarization measures can be divided into two main types: graph-based and ideology-based methods. Graph-based methods are based on the concept of a graph representing interactions between users on a given topic. These methods operate on the assumption that polarization can be observed within the graph itself. For instance, the modularity of a graph, which quantifies how well a graph can be divided into distinct communities, has been used to measure echo chambers <cit.>. However, challenges arise from this approach, as modularity and other similar methods may not accurately represent echo chamber phenomena due to the possibility that non-polarized graphs can also exhibit high modularity <cit.>. To address these limitations, new methods have been developed that scrutinize the interactions between communities within a graph. These improved methods involve dividing the graph into two distinct communities and measuring polarization at the boundaries between them <cit.>. An alternative approach involves using the Random Walk Controversy <cit.> (RWC), a popular polarization method <cit.> that calculates the probability of a random walker starting at one community and ending at another. Nonetheless, these methods have their own drawbacks, such as the necessity of splitting the communities in the graph and making an inherent assumption that the graph is already polarized. This results in difficulties in measuring polarization that may not actually exist. Our novel approach, the Echo Chamber Score (ECS), alleviates these issues. The ECS does not require the division of the graph into two communities and is capable of measuring the effects of echo chambers and polarization across any number of communities, making it a more flexible and accurate method for assessing polarization. Ideology-based methods for measuring echo chambers and polarization take a different approach, focusing on a user's ideological leaning and the users they interact with. Two primary approaches exist within this category: (1) measuring the ideological distance between a user and their neighboring users in the graph, and (2) measuring the divergence from an ideological center after applying an opinion-spreading model. In the first approach, the ideological leanings of all users are estimated and then compared to their neighboring users. The fundamental idea here is that an echo chamber is formed when users mostly interact with others who share similar opinions <cit.>. For instance, the ideology of users can be inferred from the hashtags they share or the content they post <cit.>. 
The polarization is then quantified by measuring the Pearson correlation between a user's ideological score and the average ideological score of their neighbors <cit.>. In the second approach, opinion-spreading models such as the Friedkin-Johnsen or DeGroot opinion model are utilized <cit.>. For instance, the Friedkin-Johnsen model operates by updating a node's opinion through repeatedly averaging the opinions of its neighbors until reaching equilibrium <cit.>. Polarization is then measured by how much opinions at equilibrium deviate from the average <cit.>. Alternatively, the DeGroot opinion model is used to construct a Polarization Index (PI) based on the probability density distribution of individuals' opinions <cit.>. A bimodal distribution would suggest the existence of polarization, while a unimodal distribution would indicate its absence <cit.>. Both these ideology-based approaches have challenges, such as the laborious and error-prone task of estimating users' ideological leanings from their content or interactions. Therefore, we have opted instead for a model based on similarity in the embedding space as a proxy for ideology, eliminating the need for ideology estimation. § METHODOLOGY This section presents our approach to quantifying echo chambers in online conversations. Our objective is to assess whether the discussion surrounding a given topic exhibits polarization and whether the communities formed by users can be characterized as echo chambers or comprise a diverse group of individuals with varying ideologies. To achieve this, we construct a graph G = (V, E), where V represents the set of social media users, and E represents the edges denoting homophilic interactions, such as retweets. Additionally, we obtain a set of communities Ω from a community detection algorithm, where each community consists of a group of users. Our primary aim is to measure the level of polarization within the entire graph by computing the Echo Chamber Score (ECS) for each community. Consequently, this section presents our novel ECS metric for quantifying echo chambers. However, as ECS relies on user embedding, we begin by introducing our user embedding framework, EchoGAE, which enables the representation of users based on their ideological similarity. §.§ Embedding Social Media Users The EchoGAE model (see figure <ref>) is essential to our methodology for quantifying echo chambers in online conversations. Its purpose is to embed users in a way that reflects their ideological similarity, facilitating the calculation of the Echo Chamber Score (ECS). By placing ideologically similar users closer in the embedding space, EchoGAE enables the measurement of cohesion and separation of communities in the graphs, the two components of ECS, as we will explain later in the section. EchoGAE is an adaptation of the Graph Auto-Encoder (GAE) model <cit.>, tailored for user embedding based on tweets and interactions. As a self-supervised model, EchoGAE eliminates the need for user ideological labeling. It employs two graph convolutional layers to encode the graph into a latent representation, which is subsequently decoded to reconstruct the graph structure. EchoGAE aims to minimize the binary cross-entropy between the real and reconstructed adjacency matrices. The EchoGAE model consists of two main components: an Encoder and a Decoder. The Encoder takes both the tweets and the graph as input to create node embeddings, which serve as the user embeddings. The Encoder is divided into two parts. 
Firstly, the tweets component utilizes Sentence-BERT <cit.> to embed the user's tweets, and the average of these tweet embeddings is taken to form the content embeddings (represented as the matrix 𝐗 in Fig. <ref>). Secondly, the network component leverages the adjacency matrix (𝐀 in Fig. <ref>) of the graph. Together, these components contribute to the creation of node embeddings (i.e., user embeddings 𝐙∈ℝ^n × d, where n is the number of users in the graph and d is the dimension of the user embedding) that capture the information from both the users' content and their network interactions. The Decoder performs an inner product operation <cit.> on the node representations (σ(𝐙𝐙^𝐓)) obtained from the Encoder, resulting in a reconstructed adjacency matrix (Â). Subsequently, the binary cross-entropy loss is used to train the model and ensure accurate graph reconstruction. §.§ Measuring the Echo Chamber Effect We introduce ECS (Echo Chamber Score), a measure for quantifying the echo chamber and polarization effects on social media. To measure the echo chamber effect using user embedding, we assess in-group cohesion and between-group separation <cit.>. We utilize the distance in the embedding space as a proxy for these factors, reflecting how closely related users within a community are (cohesion) and how distinct a community is from others (separation). Let Z ∈ℝ^n × d represent user embeddings, where n is the number of users and d is the embedding dimension. Additionally, let Ω = {ω_1, ω_2, …, ω_M} denote the set of communities, where ω_i ⊂ V represents the i^th community consisting of users. For a user u ∈ω, we compute the cohesion value (λ_u) as the average distance between u and the other users in the same community using Equation <ref>. λ_u = 1/|ω|∑_v ∈ω, v ≠ u dist(u, v) Here, |ω| denotes the number of users in the community ω, and dist(u, v) represents the distance (e.g., Euclidean) between users u and v in the embedding space (Z^(u) and Z^(v), respectively). Similarly, we compute the separation value (Δ_u) as the average distance between u and the nearest community other than ω using Equation <ref>. Δ_u = min_ω∈Ω, u ∉ω [ 1/|ω|∑_v ∈ω dist(u, v) ] To calculate the Echo Chamber Score (ECS) for a community ω = {u_1, u_2, …, u_N}, we use a formula inspired by the silhouette score <cit.> (in the appendix we show how to derive the ECS from the silhouette). Equation <ref> produces a score between 0 and 1, with a higher score indicating a greater likelihood of an echo chamber effect within the community. ECS^*(ω) = 1/|ω|∑_u ∈ω (max(Δ_u, λ_u) + Δ_u - λ_u)/(2 * max(Δ_u, λ_u)) The Echo Chamber Score can be computed for the entire graph using Equation <ref>, where Ω represents the set of communities obtained from a community detection algorithm such as Louvain <cit.> or Leiden <cit.>. ECS(Ω) = 1/|Ω|∑_ω∈Ω ECS^*(ω) The Echo Chamber Score (ECS) allows for comparison across different graphs representing various controversial topics. A higher ECS indicates a higher degree of echo chamber effect within a conversation. The components of ECS can provide additional insights, such as ranking communities based on their polarization, using Equation <ref>. Note that our approach does not assume a specific number or size of communities and is independent of the community detection method. Moreover, it does not require prior knowledge of users' internal ideologies, setting it apart from related works <cit.>. 
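For concreteness, the cohesion and separation components and the resulting score can be computed as in the following minimal Python/NumPy sketch. This is our own illustration rather than the released implementation; the Euclidean distance, the function names, and the data layout (an embedding matrix indexed by user id and a list of communities given as lists of user ids) are assumptions.

import numpy as np

def cohesion(u, community, Z):
    # lambda_u: average distance between user u and the other users of its community
    others = [v for v in community if v != u]          # assumes |community| >= 2
    return np.mean([np.linalg.norm(Z[u] - Z[v]) for v in others])

def separation(u, own_idx, communities, Z):
    # delta_u: average distance to the nearest community that does not contain u
    dists = [np.mean([np.linalg.norm(Z[u] - Z[v]) for v in community])
             for idx, community in enumerate(communities) if idx != own_idx]
    return min(dists)                                  # assumes at least two communities

def ecs_star(own_idx, communities, Z):
    # ECS* of one community: silhouette-like score rescaled to [0, 1]
    community = communities[own_idx]
    scores = []
    for u in community:
        lam = cohesion(u, community, Z)
        delta = separation(u, own_idx, communities, Z)
        m = max(delta, lam)
        scores.append((m + delta - lam) / (2.0 * m))
    return float(np.mean(scores))

def ecs(communities, Z):
    # ECS of the whole graph: average of ECS* over all detected communities
    return float(np.mean([ecs_star(i, communities, Z) for i in range(len(communities))]))

Here Z would be the EchoGAE embedding matrix and communities the output of a community detection algorithm such as Louvain.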
§.§ Estimating Users' Ideology Our embedding model, EchoGAE, aims to position users with similar ideological leanings closer to each other in the embedding space. Therefore, we assume that we can utilize the distance in the embedding space to infer users' ideological leanings. This helps us evaluate whether EchoGAE embeds users in a way that reflects their ideology, which is the core idea behind ECS. After applying the EchoGAE embedding, we employ a clustering algorithm (e.g., KMeans) to detect two communities of users in the embedding space, denoted as ω_1 and ω_2. These communities represent the pro and anti sides of the debate, respectively. We follow similar works <cit.> that split the ideology spectrum into two sides. The ideology score for each user is calculated using Equation <ref>. It is determined by the difference between the average distance of the user u to other users in ω_1 and the average distance to users in ω_2. I(u) = 1/|ω_1|∑_v ∈ω_1, v ≠ u dist(u, v) - 1/|ω_2|∑_v ∈ω_2, v ≠ u dist(u, v) Here, dist represents any distance function normalized between 0 and 1. In our implementation, we employ the Euclidean distance, but other distance measures can be used. The ideology scores I(u) range from -1 to +1. Importantly, values of -1 and +1 do not inherently indicate "good" or "bad" ideologies. In Equation <ref>, the order of the communities (ω_1 and ω_2) affects only the sign of the ideology score: reversing the order of the communities flips the sign but not the magnitude of each score, so the two sides of the debate are mapped to opposite ends of the scale. This introduces an additional layer of complexity in evaluating our method, which we address in the experimental results section. § EXPERIMENTS In this section, we present the experiments we used to assess the effectiveness of our proposed method, Echo Chamber Score (ECS), in analyzing the echo chamber effect. To evaluate its performance and reliability, we compare ECS with two commonly used methods, Random Walk Controversy (RWC) and Polarization Index (PI). Additionally, we utilize ECS to analyze echo chambers at the community level, examining the distances between users in the embedding space to gain insights into the cohesion and separation of user communities. Furthermore, we conduct an experiment to determine if the distances in the embedding space can predict the ideological leaning of users. Finally, we perform an ablation study to examine the impact of using tweets in measuring the echo chamber effect and predicting user ideology. These experiments provide valuable insights into the performance and applicability of ECS in analyzing echo chambers, predicting user ideology, and assessing the role of tweets in these measurements. §.§ Datasets To investigate the echo chamber phenomenon, we selected four topics and examined user interactions related to these subjects. Two topics were controversial: the abortion and gun debates; the other two were non-controversial: the SXSW conference and the Super Bowl. The inclusion of non-controversial topics aimed to assess our method's performance in non-polarized settings. The datasets used in our experiments are outlined in Table <ref>, and we have made them publicly available[https://github.com/faalatawi/echo-chamber-scorehttps://github.com/faalatawi/echo-chamber-score] to ensure reproducibility and facilitate further research in the field of echo chamber analysis and detection. Data collection. 
To collect data for each topic, we identified frequently used keywords in discussions (see Table <ref>) and monitored the conversation. We then gathered the retweeters of the most popular tweets associated with these keywords. This data was used to construct a graph for each topic, where users were represented as nodes, retweet interactions formed the edges, and users' tweets provided node attributes. We collected up to 200 of the users' most recent tweets (excluding retweets) to ensure an adequate amount of user-generated text for analysis. The gun debate dataset was collected during the period of intense debate following the school shooting in Uvalde, Texas, on May 24, 2022. Unfortunately, school shootings in the United States often ignite polarized discussions <cit.> on gun violence and constitutional gun ownership rights. To capture this discourse, we selected commonly used words from both sides of the debate and monitored the conversation from May to July. We then selected the top 1200 most retweeted tweets and constructed the retweet graph. The resulting graph (shown in the lower left panel of Figure <ref>) exhibited two communities, as identified by the Louvain algorithm <cit.>, indicating the presence of two polarized communities <cit.>. Similarly, using relevant keywords, we collected the retweet graph for the abortion rights debate following the US Supreme Court ruling on abortion issued on June 24, 2022. Both the gun debate <cit.> and abortion <cit.> have been widely studied as topics for analyzing echo chambers and polarization. For the non-controversial topics, on the other hand, we selected two topics that have previously been used to study echo chambers: the Super Bowl <cit.> and SXSW <cit.>. The Super Bowl is an annual sports event in the US, while the SXSW conference is an annual event that combines music, film, and interactive media in Austin, Texas. We followed the same data collection procedure as with the controversial topics. Labeling. To evaluate the embedding quality of EchoGAE in capturing ideological similarity, we estimated users' ideological leanings. Following previous works that used news URLs to infer political leanings <cit.>, we obtained ideological labels for URLs from the non-partisan media watchdog AllSides[https://www.allsides.com/media-bias]. To assign labels to users, we utilized the news URLs they post as indicators of their ideology, using AllSides' political leaning ratings for news websites. A user's political leaning is calculated as the average of the ratings of the news articles they share. AllSides' ratings consist of five categories: left, center-left, center, center-right, and right, to which we assigned values of -1, -0.5, 0, 0.5, and 1, respectively. It is important to note that these values indicate opposing sides of the debate and do not inherently represent good or bad ideologies. We only used labels for users who shared at least five links. The number of labeled users for each dataset is specified in Table <ref>. Notably, controversial topics tend to have more labeled users, as users are more likely to express their ideological leanings when engaging with these topics. §.§ Measuring the Echo Chamber Effect In this experiment, our objective is to evaluate the effectiveness of our proposed method in measuring the echo chamber effect. To accomplish this, we compare our method with commonly used techniques for calculating polarization and echo chamber effects. 
This comparison aims to demonstrate that our method performs comparably to existing methods and produces reliable results for measuring the echo chamber effect. For our experiments, we utilize two widely used baselines: Random Walk Controversy (RWC) <cit.> and Polarization Index (PI) <cit.>. We then compare these baselines with our proposed method, Echo Chamber Score (ECS). RWC measures the likelihood of transitioning from one community to another in a network, where a value close to one indicates polarization and close to zero indicates no polarization. On the other hand, PI measures the degree of segregation within a population by modeling the propagation of opinions based on the probability density distribution of individuals' opinions. To compute RWC, we partition the graph into two communities using the FluidC <cit.> algorithm. Subsequently, we calculate the probability of transitioning from one partition to another. For PI, we employ the DeGroot opinion model <cit.> with labeled users as seeds to disseminate opinions, and then we compute the PI index for each graph. In contrast to RWC, our proposed method ECS does not require dividing the graph into two communities. The graph may consist of multiple communities, and any community detection method can be employed. In this study, we use the Louvain algorithm <cit.> to identify the communities, which are then used to compute ECS. Furthermore, unlike PI, our method does not rely on any labeled users, as we utilize the embeddings obtained from EchoGAE. As shown in Table <ref>, our approach effectively assigns higher scores to controversial topics (e.g., Gun debate and Abortion) compared to non-controversial ones, demonstrating its ability to perform on par with existing methods. Our method aligns with PI, a highly regarded technique that employs ideology labels to gauge polarization. PI's approach closely approximates the actual labels, and our method exhibits strong agreement with it, as evidenced by a 0.99 Pearson correlation. In contrast, there are notable differences between our method and RWC. For instance, both ECS and PI indicate that the Gun Control debate is more polarized than the Abortion debate, which contradicts the findings of RWC. We posit that the requirement of RWC to partition the graph into only two communities hinders its performance. By relaxing this requirement, our measure ECS can evaluate any number of communities identified by various community detection algorithms. These techniques (RWC, PI, and ECS) enable us to rank topics based on their polarization levels, from highest to lowest. Both PI and our method (ECS) consistently rank the topics in a similar manner. It is worth noting that our method considers the Gun debate more polarized than the Abortion debate, aligning with opinion polls. According to the Pew Research Center[https://www.pewresearch.org/], in 2022, 61% of Americans supported abortion access, while only 53% advocated for stricter gun laws. This demonstrates greater disagreement and polarization within the Gun debate compared to the Abortion debate. §.§ Analysing the Echo Chamber Effect on Community Level To showcase ECS's capability in analyzing the echo chamber at a more detailed level, we conducted an experiment to examine the insights provided by our measure at the community level. The objective was to determine which community within a topic exhibited a higher level of polarization. 
For this experiment, we focused on the controversial topics, namely the Gun debate and Abortion, and explored how we could investigate the interactions both between and within communities. These topics were chosen due to the presence of echo chambers, as identified in the previous experiment. Upon examining the gun dataset, we observed that the debate surrounding guns and school shootings exhibited a higher level of polarization compared to abortion, as evidenced by an ECS score of 0.714 compared to 0.626 (see Table <ref>). Applying the Louvain algorithm, we identified two communities in the interaction graph, with sizes of 3984 and 2582 nodes, respectively. Computing the ECS* (Equation <ref>) for each community, we obtained echo chamber scores of 0.739 and 0.676, indicating polarization and ideological homogeneity within both communities. Notably, the larger community demonstrated a slightly higher level of polarization. Upon labeling a sample of ten users from each community, we discovered that the larger community aligned with the anti-gun group, while the smaller community represented the pro-gun group. By examining the 2D projection of the EchoGAE embedding of users (refer to Figure <ref>), we observed that the blue community (anti-gun) appeared to be similar in size to the other community (pro-gun), suggesting close levels of polarization between the two communities. However, the anti-gun community's higher ECS score indicates that this group is more homogeneous than the other group, which is surprising. It is possible that the debate around guns is not strictly a left-versus-right issue, and that more centrist voices are participating in it. This analysis would be challenging to perform using PI or RWC techniques. However, ECS, being community-independent and not reliant on ideology labels, enables such analysis without prior knowledge of community divisions and ideologies. In the abortion dataset, we identified two communities with sizes of 3933 and 1154. The ECS* scores for these communities were 0.6 and 0.69, respectively. To gain deeper insights, we conducted a random sampling of ten users from each community and manually examined their Twitter accounts. Our analysis revealed that the larger community primarily consisted of supporters of abortion rights. On the other hand, the anti-abortion community exhibited a higher level of polarization compared to the other community. This finding aligns with the opinion polls mentioned earlier, as the anti-abortion group tends to hold a more fringe position compared to the pro-abortion group. Additionally, this alignment can be observed in the abortion rights vote that took place in Kentucky, which is considered a conservative state. During the vote, the majority of voters rejected[https://www.pbs.org/newshour/politics/kentucky-voters-reject-constitutional-amendment-on-abortion] the proposal to restrict abortion rights. §.§ Using Ideology Detection to Verify the Embedding Space We assume that the distance in the embedding space can be used to predict the political leaning of users, and that users with similar ideological leanings are closer to each other in the embedding space. If the distance in the embedding space can indeed be used to estimate the ideology of users, then it can also be relied upon to measure the echo chamber effect, since we use this distance to measure the separation (Eq. <ref>) and cohesion (Eq. <ref>) of communities in order to gauge the echo chamber effect. 
After labeling users, we split the labeled users into training and validation sets (10% and 90%, respectively). Since our model is unsupervised, the training set is used by the baseline model only, and we use the validation set to evaluate the estimates of both models. For the baseline model, we used the DeGroot opinion model <cit.>, in which the user’s ideology is the average ideology of their neighbors. After embedding users using EchoGAE, we employed the KMeans algorithm to detect two communities of users in the embedding space, referred to as ω_1 and ω_2, representing the pro and anti sides of the debate. Lastly, we calculated the ideology score of each user, taking into account their distances to the members of communities ω_1 and ω_2 in the embedding space as shown in Equation <ref>. In Table <ref>, we present our method's outcomes for estimating ideology compared to the baseline. The ideology scores of both models were compared to the pseudo-scores obtained from the AllSides labeling using Mean Absolute Error (MAE) and Mean Squared Error (MSE). The results shown in Table <ref> demonstrate that our model performs comparably to the semi-supervised baseline, even though our method is unsupervised (we do not use any labels in our model). Furthermore, as depicted in Figure <ref>, a high degree of concurrence is observed between the distributions of the predicted and actual ideologies. It should be noted that in Equation <ref>, the order of the communities (i.e., ω_1 and ω_2) influences only the sign of the ideology score: swapping the order of the communities changes the sign of each score but not its magnitude. Consequently, in our measurement, we computed the error for both orderings and report the minimum value. §.§ Ablation Study The primary objective of this study is to examine the impact of the components of EchoGAE on the performance of two tasks: measuring the echo chamber effect and predicting the ideology of users. Specifically, the study explores the significance of using textual information, i.e., tweets, in these tasks. Table <ref> presents the results obtained from this study. It demonstrates that the model's performance is enhanced when tweets are utilized. This finding emphasizes the importance of linguistic similarity in measuring echo chambers and estimating ideology. Therefore, the study suggests that investing more resources to extract knowledge from tweets could lead to improved accuracy in both tasks. However, the study also observes that good results can be achieved with the graph component alone, in situations where textual information is unavailable. Notably, even in cases where the difference in echo chamber scores between controversial and non-controversial topics is not substantial, the tweet-less model still performs well by assigning higher scores to controversial topics. In conclusion, this study provides empirical evidence supporting the importance of incorporating textual information, such as tweets, in measuring echo chambers and estimating ideology. Nevertheless, it also highlights that satisfactory results can be obtained with graph-only models in the absence of textual data. 
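For reference, the ideology-scoring step used in the verification experiment above (Equation <ref>) can be sketched as follows. This is a minimal illustration, assuming scikit-learn's KMeans and max-normalized Euclidean distances; the function and variable names are ours and not taken from the released code.

import numpy as np
from sklearn.cluster import KMeans

def ideology_scores(Z):
    # Split the embedded users into the two sides of the debate (omega_1, omega_2).
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(Z)
    omega1 = np.where(labels == 0)[0]
    omega2 = np.where(labels == 1)[0]

    # Pairwise Euclidean distances, normalized to [0, 1].
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    D = D / D.max()

    scores = np.empty(Z.shape[0])
    for u in range(Z.shape[0]):
        d1 = np.mean([D[u, v] for v in omega1 if v != u])
        d2 = np.mean([D[u, v] for v in omega2 if v != u])
        scores[u] = d1 - d2   # I(u) in [-1, +1]; the sign depends on the community order
    return scores

As discussed above, swapping omega1 and omega2 only flips the sign of every score, so the error against the reference labels is evaluated for both orderings.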
§ CONCLUSION In this paper, we introduced the Echo Chamber Score (ECS), a novel metric for quantifying echo chambers and polarization in social media networks. ECS leverages an embedding space to measure the cohesion and separation of user communities, providing insights into the echo chamber effect. To enable this measurement, we presented EchoGAE, a self-supervised user embedding model that captures ideological similarities among users and generates accurate embeddings. Our evaluation of ECS on real-life Twitter datasets demonstrated its effectiveness in ranking topics based on echo chamber scores and ordering communities by polarization levels. Compared to existing metrics, ECS showcased unique capabilities in capturing the dynamics of online discourse. Our research contributes to understanding and quantifying echo chambers and polarization, which could support the development of strategies to mitigate their negative impacts and promote a more informed and open-minded society. [Deriving the ECS* equation] Here we derive the ECS* equation. We start with the silhouette score <cit.>: ECS^*(ω) = 1/|ω|∑_u ∈ω [ (Δ_u - λ_u)/max(Δ_u, λ_u) ] We want to scale it to the range from 0 to 1 instead of -1 to +1. So we have: ECS^*(ω) = 1/|ω|∑_u ∈ω 1/2 [ (Δ_u - λ_u)/max(Δ_u, λ_u) + 1 ] Next, rewrite the terms within the square brackets over the common denominator max(Δ_u, λ_u): ECS^*(ω) = 1/|ω|∑_u ∈ω 1/2 [ (Δ_u - λ_u + max(Δ_u, λ_u))/max(Δ_u, λ_u) ] Finally, rearrange the terms in the numerator and absorb the factor 1/2 into the denominator to obtain the ECS* equation: ECS^*(ω) = 1/|ω|∑_u ∈ω (max(Δ_u, λ_u) + Δ_u - λ_u)/(2 * max(Δ_u, λ_u))
http://arxiv.org/abs/2307.03870v1
20230708005332
Opacity of Parametric Discrete Event Systems: Models, Decidability, and Algorithms
[ "Weilin Deng", "Daowen Qiu", "Jingkai Yang" ]
cs.FL
[ "cs.FL", "cs.SY", "eess.SY" ]
Opacity of Parametric Discrete Event Systems: Models, Decidability, and Algorithms Weilin Deng, Daowen Qiu^⋆, and Jingkai Yang Weilin Deng is with the School of Internet Finance and Information Engineering, Guangdong University of Finance, Guangzhou, 510521, China (e-mail: [email protected]). Daowen Qiu (Corresponding author) is with the Institute of Quantum Computing and Computer Theory, School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou, 510006, China (e-mail: [email protected]). Jingkai Yang is with the School of Mathematics and Statistics, Yulin Normal University, Yulin, 537000, China (e-mail: [email protected]). ==================================================================================================================================================================== The finite automata (FAs) model is a popular tool to characterize discrete event systems (DESs) due to its succinctness. However, for some complex systems, it is difficult to describe the necessary details by means of the FAs model. In this paper, we consider a kind of extended finite automata (EFAs) in which each transition carries a predicate over state and event parameters. We also consider a type of simplified EFAs, called Event-Parameters EFAs (EP-EFAs), where the state parameters are removed. Based upon these two parametric models, we investigate the problem of opacity analysis for parametric DESs. First of all, it is shown that the EFAs model is more expressive than the EP-EFAs model. Secondly, it is proved that the opacity properties for EFAs are undecidable in general. Moreover, the decidable opacity properties for EP-EFAs are investigated. We present verification algorithms for current-state opacity, initial-state opacity and infinite-step opacity, and then discuss their complexity. This paper establishes a preliminary theory for the opacity of parametric DESs, which lays a foundation for the opacity analysis of complex systems. Opacity, discrete-event systems, parametric finite automata, extended finite automata § INTRODUCTION Over the last ten years, the problem of opacity analysis for discrete event systems (DESs) has received considerable attention. Opacity is an important security property, which was initially introduced in computer science to analyze cryptographic protocols. Roughly speaking, a DES is said to be opaque if the intruder cannot determine the occurrence of the secret behavior from its observations of the system. The finite automata (FAs) model is a popular tool to describe DESs at the logical level due to its succinctness <cit.>. The notions of language-based opacity <cit.>, <cit.>, current-state opacity <cit.>, initial-state opacity <cit.>, infinite-step opacity <cit.> and pre-opacity <cit.> for the FAs model have been well investigated in recent years. In addition, opacity enforcement based on the techniques of supervisory control and output obfuscation has been proposed (e.g., see <cit.>-<cit.> and references therein). 
For some complex systems, it is difficult to describe the necessary details and analyze their opacity properties by means of the FAs model, and thus some extended models are necessary. Actually, the opacity properties of various extended models have been investigated recently, such as timed systems <cit.>, networked systems <cit.>, Petri nets <cit.>-<cit.>, cyber-physical systems <cit.>, probabilistic systems <cit.>, fuzzy systems <cit.>, and other systems <cit.>-<cit.>. In the field of system modeling, control flow refers to the possible sequences of the interactions between a system and its environment, and data flow refers to the constraints on data parameters in the interactions <cit.>. The FAs model describes control flow well, but fails to efficiently capture data flow and the mutual influence between control flow and data flow. A typical example is modeling network protocols, where the models must characterize how different parameter values in sequence numbers, user IDs, socket IDs, etc., affect the control flow. Another easy-to-understand example is modeling the process of web-site registration, which usually requires a user to provide the same password twice (see Examples <ref>-<ref> and Remark <ref> in Section II for details). Obviously, it is difficult and inefficient for the FAs model to do such things. To address this problem, in this paper, we also consider a kind of extended finite automata (EFAs), in which the states and events are both augmented with parameters, and each transition carries a predicate and an update function over these parameters. The EFAs model is a powerful but complicated tool. It is hard to analyze some properties of EFAs, and we prove that the opacity properties of EFAs are undecidable. Thus, we also consider a simplified EFAs model, called Event-Parameters EFAs (EP-EFAs), where the state parameters are removed. By means of the transitions carrying predicates over parameters, the models of EFAs and EP-EFAs improve on the FAs model in efficiently representing and handling some complex systems where control flow, data flow and the interactions between them need to be characterized. In general, EFAs and EP-EFAs can be viewed as special types of infinite and finite state models, respectively, with an infinite alphabet, which have been well investigated in computer science (e.g., see <cit.>-<cit.>). For general infinite state automata (ISA), only a few properties are decidable <cit.>, <cit.>. However, for some types of ISA, there exist quite a few decidable properties, e.g., the properties of reachability, simulation and eventuality of Well-Structured ISA are all decidable <cit.>. On the other hand, for finite state models with an infinite alphabet, there are many decidable properties, as well as undecidable properties <cit.>. For example, the emptiness and language inclusion of 1N-RAs are decidable; however, their universality and equivalence are undecidable <cit.>. In this paper, the aforementioned EFAs and EP-EFAs are collectively referred to as parametric DESs. We would like to establish a preliminary theory for the opacity of parametric DESs, which lays a foundation to analyze the opacity of some complex systems. To the best of our knowledge, this is the first study on the opacity analysis of parametric DESs. The main contributions of this paper are as follows. * Two parametric models, i.e., EFAs and EP-EFAs, are introduced for DESs, and then it is proved that the latter can be simulated by the former but the reverse does not hold. 
This means that the EFAs model is more expressive than the EP-EFAs model. We also illustrate that these two parametric models are both more expressive and more efficient than the FAs model. * We formulate the current-state opacity, initial-state opacity and infinite-step opacity for parametric DESs, and then prove that these opacity properties for EFAs are all undecidable in general. The basic idea of the proof is reducing the halting problem of Two-Counter Machines (2CMs) to the verification of the opacity properties. * We investigate the decidable opacity properties for EP-EFAs. Based on the symbolic observer, the verification algorithms for current-state opacity, initial-state opacity and infinite-step opacity are provided, and the complexity is analyzed. The rest of this paper is organized as follows. The system models for parametric DESs are introduced and investigated in Section II. The problem formulation and necessary assumptions are provided in Section III. In Section IV, the opacity properties of EFAs are proved to be undecidable, and in Section V, the decidable opacity properties of EP-EFAs are studied. Finally, Section VI concludes this paper. § PARAMETRIC MODELS In this section, we present some notations, and introduce two parametric models: extended finite automata (EFAs) and Event-Parameters EFAs (EP-EFAs), and then discuss their expressiveness and efficiency. Let ℕ be the set of natural numbers, and [m:n] be the set of integers {m,m+1,…,n}. Let Σ be an alphabet, Σ^* be the set of finite strings over Σ including the empty string ϵ, Σ^k be the set of strings of length k, and Σ^≤ k be the set of strings of length at most k over Σ. A language L over Σ is a subset of Σ^*. We denote by |Ω| the number of elements in the set Ω, and by |s| the length of the string s ∈ L with a slight abuse of notation. A discrete event system (DES) is usually modeled as a finite automaton H=(Q, Q_0, Σ, δ) <cit.>, where Q is the finite set of states, Q_0⊆ Q is the set of initial states, Σ is the finite set of events, and δ:Q ×Σ→ Q is the deterministic (partial) transition function. The transition function δ can be extended to the domains Q ×Σ^* and 2^Q × 2^Σ^* in the usual manner. The generated language by H is L(H) = {s | ∃ q_0∈ Q_0, q ∈ Q, s.t. q = δ(q_0,s) }. A Boolean algebra is a tuple 𝒜=(𝒰, Ψ, ⟦·⟧), where 𝒰 is the universe of discourse, and Ψ is the set of predicates closed under the Boolean connectives, substitution, equality and if-then-else terms <cit.>. The element φ∈Ψ is called a 𝒰-predicate in 𝒜, or just a predicate when 𝒰 and 𝒜 are clear from the context. The denotation function ⟦·⟧: Ψ→ 2^𝒰 maps a predicate to the valuations of variables that make the predicate true. Hence, for any φ, ψ∈Ψ, ⟦φ∧ψ⟧ = ⟦φ⟧∩⟦ψ⟧, ⟦φ∨ψ⟧ = ⟦φ⟧∪⟦ψ⟧, and ⟦¬φ⟧ = 𝒰∖⟦φ⟧ <cit.>. For the true predicate ⊤ and the false predicate ⊥, we have ⟦⊤⟧ = 𝒰 and ⟦⊥⟧ = ∅. For any φ∈Ψ, φ is said to be satisfiable, denoted by isSat(φ), if ⟦φ⟧≠∅. This paper solely focuses on Boolean algebras in which predicate satisfiability is decidable. Throughout this paper, we denote by X and Y the (infinite or finite) domains of event and state parameters, respectively, and denote by x and y (usually with superscripts and subscripts) the event and state parameters, respectively. In addition, we use a, b (usually with superscripts and subscripts) to denote specific values of event and state parameters, respectively. The model of extended finite automata (EFAs) is defined as follows. 
An extended finite automaton (EFA) is defined as E = (Q, Σ, X, Q_0, Q_m, Y, Y_0, R ), where Q is the finite set of state tags, Σ is the finite set of event tags, X is the domain of one event parameter, Q_0⊆ Q and Q_m⊆ Q are the sets of tags of initial and marked states, respectively, Y is the domain of the state parameter and Y_0⊆ Y is the domain of the parameter for initial states, and R is the set of symbolic transitions; each symbolic transition r ∈ R is of the form q --σ⟨ x_σ^1, x_σ^2, …, x_σ^k⟩: φ/ξ--> q̄, where * q ∈ Q and q̄∈ Q are the tags of the source and target states, respectively, which carry state parameters y_q∈ Y and y_q̄∈ Y, respectively; * k ≥ 0, the step-length of the transition, is the size of the tuple of event parameters in this transition; * σ∈Σ is the tag of the event, and if k ≥ 1, it carries a k-tuple of event parameters ⟨ x_σ^1, x_σ^2, …, x_σ^k⟩, x_σ^i∈ X, i ∈ [1:k], otherwise, it carries no event parameter; * φ is the guard of transition r, and it is a (Y × X^k)-predicate if k ≥ 1 and a Y-predicate otherwise; if event σ occurs at state q with parameter values that enable φ, then the transition r may be fired; * ξ, a Y × X^k→ Y function if k ≥ 1 and a Y → Y function otherwise, is responsible for updating the parameter of the target state according to the given parameters of the source state and the event when the transition r is fired. We denote by Ξ the special updating function that does nothing. If there are multiple transitions that can be fired at a state, then only one of them is fired nondeterministically. E is said to be deterministic if no more than one transition can be fired simultaneously at each state, i.e., for any two different transitions q --σ⟨ x_σ^1, …, x_σ^k⟩: φ_1/ξ_1--> q_1 and q --σ⟨ x_σ^1, …, x_σ^k⟩: φ_2/ξ_2--> q_2, φ_1(b, ⟨ a_1, a_2, …, a_k⟩) ∧φ_2(b, ⟨ a_1, a_2, …, a_k⟩) does not hold for any state parameter value b and event parameter values ⟨ a_1, a_2, …, a_k⟩. Moreover, the implicit ϵ-selfloop at a state q can be viewed as the special 0-step-length transition q --ϵ: ⊤/Ξ--> q. The step-length of E is defined as the maximum of the step-lengths of the symbolic transitions in E. Actually, the symbolic transition q --σ⟨ x_σ^1, …, x_σ^k⟩: φ/ξ--> q̄, k ≥ 1, defines the set of concrete transitions {(q,b) --σ⟨ a_1, a_2, …, a_k⟩--> (q̄, b̄) | (b, ⟨ a_1, a_2, …, a_k⟩) ∈⟦φ⟧ ∧ b̄ = ξ(b,⟨ a_1, a_2, …, a_k⟩) }, where (q,b) and (q̄, b̄) are the source and target states, respectively, and σ⟨ a_1, a_2, …, a_k⟩ is the parameterized event of the concrete transition. For example, suppose X=Y={0, 1, 2}; then a symbolic transition from q to q̄ may denote a set of concrete transitions such as {(q,0) --> (q̄,0), (q,1) --> (q̄,2)}, depending on its guard and update function. The symbolic transitions allow the EFAs model to efficiently characterize the control flow (i.e., the possible sequences of fired transitions), the data flow (i.e., the constraints on event parameters) and their interactions in a system. A parameterized string is a sequence of parameterized events, and a parameterized language is a set of parameterized strings. For a parameterized string u = v_1v_2… v_n, where v_i=σ_i⟨ a_i^1, a_i^2, …, a_i^k_i⟩, k_i≥ 0 [ if k_i= 0, then v_i=σ_i and 𝔇(u) = 𝔇(v_1… v_i-1v_i+1… v_n).], the data string of u, denoted by 𝔇(u), is obtained by stripping all the event tags σ_i, i.e., 𝔇(u) = ⟨ a_1^1, a_1^2, …, a_1^k_1⟩ ⟨ a_2^1, a_2^2, …, a_2^k_2⟩ … ⟨ a_n^1, a_n^2, …, a_n^k_n⟩ is a sequence of event parameter tuples. Intuitively, a data string is a sequence of data exchanges between the system and its environment that meet the data constraints. The flat data string is obtained by flattening the parameters of a data string in order, i.e., 𝔣𝔇(u) = a_1^1 a_1^2… a_1^k_1 a_2^1 a_2^2… a_2^k_2… a_n^1 a_n^2… a_n^k_n. Given an EFA E = (Q, Σ, X, Q_0, Q_m, Y, Y_0, R ). 
If there exists a series of concrete transitions (q_i,b_i) --v_i--> (q_i+1, b_i+1), where the v_i are parameterized events, i ∈ [1:n], n ≥ 1, we define the combined concrete transition as the path (q_1,b_1) --u--> (q_n+1, b_n+1), where u = v_1v_2… v_n. The language between a set of source states Q_1⊆ Q and a set of target states Q_2⊆ Q of the EFA E = (Q, Σ, X, Q_0, Q_m, Y, Y_0, R ) is defined as follows. L_Q_1^Q_2(E)= { u | ∃ q_1∈ Q_1, q_2∈ Q_2, b_1, b_2∈ Y, s.t. (q_1,b_1) --u--> (q_2, b_2) ∧ ( q_1∈ Q_0⇒ b_1∈ Y_0) }. The generated language and the marked language of the EFA E are, respectively, defined as L(E) = L_Q_0^Q(E) and L_m(E) = L_Q_0^Q_m(E). The data language and the marked data language of the EFA E are, respectively, defined as L_d(E) = ⋃_u ∈ L(E)𝔇(u) and L_md(E) = ⋃_u ∈ L_m(E)𝔇(u). The flat data language and the flat marked data language of the EFA E are, respectively, defined as L_fd(E) = ⋃_u ∈ L(E)𝔣𝔇(u) and L_fmd(E) = ⋃_u ∈ L_m(E)𝔣𝔇(u). The EFA E shown in Fig. <ref> simulates the process of user registration in a web-site, where the user is required to provide his password twice to confirm its correctness. Suppose the character set for both the nickname and the password is Ω; then the domains of the state and event parameters are Y=X=Ω^*. In the symbolic transition from q_0 to q_1, the user inputs his nickname, the guard ⊤ does not block any input, and the updating function Ξ does nothing. In the symbolic transition from q_1 to q_2, the user inputs his password for the first time (denoted by x_σ_2^1), and the updating function ξ is defined as y_q_2← x_σ_2^1, which means the password is stored in the target state q_2 as its parameter y_q_2. In the symbolic transitions from q_2 to q_3 and from q_2 to q_0, the user provides his password for the second time (denoted by x_σ_3^1). If the two passwords are identical (i.e., y_q_2=x_σ_3^1), then the former transition is fired, and the process goes to the final state q_3 and terminates successfully; otherwise (i.e., y_q_2≠ x_σ_3^1) the latter transition is fired, and the process fails and goes back to the initial state q_0. Note that the EFA E shown in Fig. <ref> is of 1-step-length. A more concise 3-step-length EFA with only two states and two symbolic transitions can also describe the same process. Before introducing this, we present a simplified EFAs model that has no state parameter. An Event-Parameters EFA (EP-EFA) is defined as S = (Q, Σ, X, Q_0, Q_m, T ), where Q is the finite set of states, Σ is the finite set of event tags, X is the domain of one event parameter, Q_0⊆ Q and Q_m⊆ Q are the sets of initial and marked states, respectively, and T is the set of symbolic transitions; each symbolic transition t ∈ T is of the form q --σ⟨ x_σ^1, x_σ^2, …, x_σ^k⟩: φ--> q̄, where * q ∈ Q and q̄∈ Q are the source and target states, respectively; * k ≥ 0, the step-length of the transition, is the size of the tuple of event parameters in this transition; * σ∈Σ is the tag of the event, and if k ≥ 1, it carries a k-tuple of event parameters ⟨ x_σ^1, x_σ^2, …, x_σ^k⟩, x_σ^i∈ X, i ∈ [1:k], otherwise it carries no event parameter; * φ is the guard of transition t, and it is an X^k-predicate if k ≥ 1 and the true predicate ⊤ otherwise; if event σ occurs at state q with parameter values that enable φ, then the transition t may be fired. If there are multiple transitions that can be fired at a state, then only one of them is fired nondeterministically. 
S is said to be deterministic if no more than one transition can be fired simultaneously at each state, i.e., for any two different transitions q --σ⟨ x_σ^1, …, x_σ^k⟩: φ_1--> q_1 and q --σ⟨ x_σ^1, …, x_σ^k⟩: φ_2--> q_2, φ_1(⟨ a_1, a_2, …, a_k⟩) ∧φ_2(⟨ a_1, a_2, …, a_k⟩) does not hold for any event parameter values ⟨ a_1, a_2, …, a_k⟩[If k = 0, then φ_1 = φ_2 = ⊤ by the definition of symbolic transition. In this case, determinism requires that there do not exist two such different transitions q --σ--> q_1 and q --σ--> q_2, which is actually the condition for deterministic FAs.]. Moreover, the implicit ϵ-selfloop at a state q can be viewed as the special 0-step-length transition q --ϵ: ⊤--> q. The step-length of S is defined as the maximum of the step-lengths of the symbolic transitions in S. The symbolic transition q --σ⟨ x_σ^1, …, x_σ^k⟩: φ--> q̄ represents the set of concrete transitions {q --σ⟨ a_1, a_2, …, a_k⟩--> q̄ | ⟨ a_1, a_2, …, a_k⟩∈⟦φ⟧}, where σ⟨ a_1, a_2, …, a_k⟩ is the parameterized event of the concrete transition. For example, suppose X={0, 1, 2}; then a symbolic transition from q to q̄ denotes the set of concrete transitions q --σ⟨ a_1, …, a_k⟩--> q̄ for the tuples of parameter values that satisfy its guard. According to Definitions <ref> and <ref>, the EP-EFAs model is just a special type of EFAs model without state parameters. This makes it impossible to keep information in the states, and thus inevitably limits the expressiveness of the EP-EFAs model. The definitions of the parameterized string, data string, and flat data string in the EP-EFAs model are the same as those in the EFAs model. Given an EP-EFA S = (Q, Σ, X, Q_0, Q_m, T ). If there exists a series of concrete transitions q_i --v_i--> q_i+1, where the v_i are parameterized events, i ∈ [1:n], n ≥ 1, we define the combined concrete transition as the path q_1 --u--> q_n+1, where u = v_1v_2… v_n. The language between the set of source states Q_1⊆ Q and the set of target states Q_2⊆ Q of the EP-EFA S is defined as follows. L_Q_1^Q_2(S)= { u | ∃ q_1∈ Q_1, q_2∈ Q_2, s.t. q_1 --u--> q_2}. The generated language and the marked language of the EP-EFA S are, respectively, defined as L(S) = L_Q_0^Q(S) and L_m(S) = L_Q_0^Q_m(S). The data language and the marked data language of the EP-EFA S are, respectively, defined as L_d(S) = ⋃_u ∈ L(S)𝔇(u) and L_md(S) = ⋃_u ∈ L_m(S)𝔇(u). The flat data language and the flat marked data language of the EP-EFA S are, respectively, defined as L_fd(S) = ⋃_u ∈ L(S)𝔣𝔇(u) and L_fmd(S) = ⋃_u ∈ L_m(S)𝔣𝔇(u). An EP-EFA S and an EFA E are said to be data-equivalent if L_fmd(S) =L_fmd(E). The EP-EFA S shown in Fig. <ref> also simulates the process of user registration in a web-site. In this EP-EFA S, the event σ carries a 3-tuple of event parameters ⟨ x_σ^1, x_σ^2, x_σ^3⟩, where the first element is for the user's nickname, and the second and third elements are both for the user's password. Hence, if x_σ^2 = x_σ^3, then the process goes to the final state q_1 and terminates successfully; otherwise it fails and stays in state q_0. It is easy to verify that the EP-EFA S is data-equivalent to the EFA E shown in Fig. <ref>, as the parameters consumed in the transitions q_0→ q_1→ q_2→ q_0 and q_0→ q_1→ q_2→ q_3 in E are exactly the same as those consumed in the transitions q_0→ q_0 and q_0→ q_1 in S, respectively. 
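To make the semantics of the EP-EFA model concrete, the registration EP-EFA of this example can be encoded as in the following Python sketch. This is our own illustrative encoding (the tuple layout, the state and event names, and the firing policy are assumptions); guards are given as predicates over the tuple of event parameters.

# A symbolic transition is a tuple (source, event_tag, k, guard, target),
# where guard is a predicate over the k-tuple of event parameters.
transitions = [
    ("q0", "sigma", 3, lambda p: p[1] == p[2], "q1"),  # the two passwords match
    ("q0", "sigma", 3, lambda p: p[1] != p[2], "q0"),  # the two passwords differ
]

def fire(state, event_tag, params):
    # Fire one enabled concrete transition (here simply the first enabled one).
    for (q, tag, k, guard, q_next) in transitions:
        if q == state and tag == event_tag and len(params) == k and guard(params):
            return q_next
    return None  # no transition is enabled

# The concrete transition q0 --sigma<alice, pw123, pw123>--> q1 is enabled:
print(fire("q0", "sigma", ("alice", "pw123", "pw123")))  # q1 (marked state)
print(fire("q0", "sigma", ("alice", "pw123", "pw124")))  # q0 (registration fails)

Since the two guards are mutually exclusive, at most one transition is enabled for any parameter tuple, so this EP-EFA is deterministic in the sense of the definition above.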
To simulate the process of user registering in this finite space, FAs model needs at least (|X|+3) states and |X|*(|X|+2) transitions, as shown in Fig. <ref>. This suggests that even in a finite space, FAs model may be quite inefficient for certain complex systems when compared with the parametric models. There exists an EFA E that cannot be data-equivalent with any EP-EFA S_E. First of all, we construct an EFA E with X=Y=ℕ, as shown in Fig. <ref>. Obviously, E accepts even number of increasing natural numbers, i.e., the marked data string of E has the form of a_1a_2… a_2*n, where n ≥ 1, a_i+1 > a_i, i ∈ [1:(2*n-1)]. Secondly, we prove there does not exist a data-equivalent EP-EFA S_E for the EFA E by contradiction. Suppose there exists a data-equivalent EP-EFA S_E, where the number of the states is m and the step-length is K. Take a flat marked data string of S_E u=a_1a_2… a_2*n where 2*n > (m-1)*K. Suppose that u visits the sequence of states q_0→ q_1→…→ q_l in S_E, where q_0∈ Q_0 and q_l∈ Q_m. Since the step-length of S_E is K, we have l*K ≥ 2*n, and thus l>m-1. This means that there exist two states q_i, q_j in the sequence of visited states of u such that q_i = q_j and 0 ≤ i < j ≤ l, as the EP-EFA S_E has m states. Suppose that the parameters consumed from state q_i to state q_j are a_ia_i+1… a_j, 1 ≤i < j≤ 2*n. Obviously, the flat data string u = a_1… a_i-1 a_ia_i+1… a_j a_ia_i+1… a_j a_j+1… a_2*n also can be marked by S_E. Since S_E and E are data-equivalent, u is marked by E. However, it is not true, as a_j > a_i and û is not a sequence of increasing numbers. Hence, the contradiction is generated, which implies there does not exist a data-equivalent EP-EFA S_E for the EFA E shown in Fig. <ref>. For any EP-EFA S, there always exists a data-equivalent EFA E_S. It is straightforward by Definitions <ref> and <ref>. Propositions <ref> and <ref> imply that EFAs model is more expressive than EP-EFAs model. The models of EFAs and EP-EFAs extend FAs to an infinite model by means of the symbolic transitions carrying predicates over the infinite parameter space. With the help of the satisfiability modulo theories (SMT) solvers (e.g., Z3, Open SMT, MathSAT5, etc., see <cit.> for details), the data types that can be efficiently processed by parametric models include real/integer, bit vectors, arrays, difference logic, inductive data, etc. Therefore, the models of EFAs and EP-EFAs are quite expressive tools for DESs. A longer step-length adds the expressiveness of EP-EFAs. As evidence, the k-step-length transition q_1 q_2 has no equivalent series of transitions with a lower step-length. However, for the EFAs model, a longer step-length does not add its expressiveness, as the state parameter can be used to store the necessary information during the transitions. The subsequent proposition presents a formal demonstration for this fact. For any m-step-length EFA E_m, m > 1, there always exists a data-equivalent 1-step-length EFA E_1. Given any m-step-length EFA E_m, we construct the data-equivalent 1-step-length EFA E_1 as follows. For each symbolic transition (q,y_q ) (q, y_q) of E_m, 1 < k ≤ m, we add (k-1) new states: q^i, i∈ [1:k-1], and k events σ^j, j ∈ [1:k], and then construct a chain of k 1-step-length transitions q^j-1 q^j where q^0 = q and q^k = q to replace the transition q q. 
Specifically, the update functions are defined as follows: ξ^1 def= [ y_q^1(1) ← x_σ^1^1 ], and for j ∈ [2:k-1], ξ^j def= [ y_q^j(1) ← y_q^j-1(1); …; y_q^j(j-1) ← y_q^j-1(j-1); y_q^j(j) ← x_σ^j^1 ], where y_q^j(i) denotes the i^th element of the state parameter of q^j, and ξ^k def= ξ(x_σ^1/y_q^k-1(1), …, x_σ^k-1/y_q^k-1(k-1), x_σ^k/x_σ^k^1), where “A/B” denotes substituting A by B in the function ξ. The predicates are as follows: φ^i = ⊤ for i∈ [1:k-1], and φ^k = φ(x_σ^1/y_q^k-1(1), …, x_σ^k-1/y_q^k-1(k-1), x_σ^k/x_σ^k^1), where “A/B” denotes substituting A by B in the predicate. Obviously, φ^k is a (Y × X)-predicate where Y = X^k-1. The intuitive meaning of these new transitions is as follows. Each new transition is responsible for transmitting the state parameters from the source state to the target state and for storing one event parameter in the target state parameter; the first (k-1) transitions are guarded with ⊤ and the last one is guarded with φ^k, which is equivalent to φ. In addition, ξ^k is also equivalent to ξ. This means that for any k event parameters, the transition (q,y_q) --σ⟨ x_σ^1, …, x_σ^k⟩: φ/ξ--> (q̄, y_q̄) is fired if and only if the chain of transitions is fired, and meanwhile the parameter of the final state q̄ is updated in the same way. Thus, by replacing each transition of E_m with such a chain of transitions, we can obtain the data-equivalent 1-step-length EFA E_1. § PROBLEM FORMULATION AND ASSUMPTIONS In this section, we present some assumptions and then formulate the problems discussed in this paper. In the rest of this paper, we focus on the problem of opacity analysis for a parametric DES modeled by an EFA E = (Q, Σ, X, Q_0, Q_m, Y, Y_0, R ) or an EP-EFA S = (Q, Σ, X, Q_0, Q_m, T ). In the following, the parametric DES is denoted by G, and the notation L_Q_1^Q_2(G) is the language calculated by Equation (<ref>) when G is an EFA, and by Equation (<ref>) when G is an EP-EFA. The basic assumptions in this paper are as follows. * Assumption 1: The secret and non-secret behavior of the parametric system can be coded into its state space. We consider the following two cases: 1) the secret and non-secret behavior are the sets of data strings arriving at the given secret states Q_s⊆ Q and non-secret states Q_ns⊆ Q, respectively, and Definitions <ref> and <ref> are of this case; 2) the secret and non-secret behavior are the sets of data strings originating from the given secret initial states Q_s⊆ Q_0 and non-secret initial states Q_ns⊆ Q_0, respectively, and Definition <ref> is of this case. * Assumption 2: The intruder knows the complete structure of the parametric DES G, and he can observe the data exchanges between the system and its environment during the interactions (i.e., the data language L_d(G)) through a static observation function θ. The observation function θ is defined as follows: for any data string d = ⟨ a_1^1a_1^2… a_1^k_1⟩ ⟨ a_2^1a_2^2… a_2^k_2⟩…⟨ a_j^1a_j^2… a_j^k_j⟩∈ L_d(G), θ(d) = ⟨θ(a_1^1)θ(a_1^2) …θ(a_1^k_1) ⟩⟨θ(a_2^1)θ(a_2^2) …θ(a_2^k_2) ⟩ …⟨θ(a_j^1)θ(a_j^2) …θ(a_j^k_j) ⟩, where θ(a_m^n) = a_m^n if ϑ(a_m^n) holds, and θ(a_m^n) = ϵ otherwise; here ϑ is the X-predicate describing the observability condition for data elements, and the empty observation “⟨ϵ⟩” in θ(d) can be removed directly. The set of observations for G is defined as Θ(G) = ⋃_u ∈ L(G)θ(𝔇(u)). An observable unit of the observation w, w ∈Θ(G), is a substring of the form “⟨ a_ia_i+1… a_i+k⟩”, k ≥ 0, in w. Let |w|_u denote the number of observable units in w. 
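As a small illustration of the observation function, the following is our own Python sketch (the representation of data strings as lists of parameter tuples and the passing of the observability predicate ϑ as a Python function are assumptions):

def theta(data_string, is_observable):
    # Apply the element-wise observation and drop empty units "<epsilon>".
    observation = []
    for unit in data_string:
        kept = tuple(a for a in unit if is_observable(a))
        if kept:
            observation.append(kept)
    return observation

# Example with X = natural numbers and vartheta(a) := "a is even".
w = theta([(1, 2), (3,), (4, 6)], lambda a: a % 2 == 0)
print(w)        # [(2,), (4, 6)]  -> the observation consists of two observable units
print(len(w))   # |w|_u, the number of observable units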
According to the definition, observations such as “⟨ a_1a_2⟩⟨ a_3⟩” and “⟨ a_1⟩⟨ a_2a_3⟩” are considered to be different. Two identical observations have the same number of observable units, and the corresponding units are equal to each other. Therefore, an observable unit is regarded as a minimal information structure acquired by the intruder, and this paper considers the data language rather than the flat data language in opacity analysis. The main reasons for this treatment are as follows. 1) Since each parameter tuple is transmitted between the system and its environment as a whole and the observable unit is the observable part of a parameter tuple, the intruder will obtain each observable unit as a whole. 2) Similar to the literature on opacity analysis <cit.>-<cit.>, this paper also assumes that intruders have sufficient memory and computation capabilities to keep the history of their observations and to instantaneously update the state estimate of the system based on their latest observations. Based on these assumptions, we present three opacity properties for parametric DESs in the following. (current-state opacity) Given the parametric DES G with the set of secret states Q_s⊆ Q, the set of non-secret states Q_ns⊆ Q, and the observation function θ. G is said to be current-state opaque w.r.t. Q_s, Q_ns and θ, if (∀ u ∈ L^Q_s_Q_0(G)) (∃ v ∈ L^Q_ns_Q_0(G)) θ(𝔇(u))=θ(𝔇(v)). (initial-state opacity) Given the parametric DES G with the set of secret initial states Q_s⊆ Q_0, the set of non-secret initial states Q_ns⊆ Q_0, and the observation function θ. G is said to be initial-state opaque w.r.t. Q_s, Q_ns and θ, if (∀ u ∈ L^Q_Q_s(G)) (∃ v ∈ L^Q_Q_ns(G)) θ(𝔇(u))=θ(𝔇(v)). (infinite-step opacity) Given the parametric DES G with the set of secret states Q_s⊆ Q, the set of non-secret states Q_ns⊆ Q, and the observation function θ. G is said to be infinite-step opaque w.r.t. Q_s, Q_ns and θ, if (∀ uū∈ L_Q_0^Q(G): u ∈ L^Q_s_Q_0(G)) (∃ vv̄∈ L_Q_0^Q(G) : v ∈ L^Q_ns_Q_0(G)) [θ(𝔇(u))=θ(𝔇(v)) ∧θ(𝔇(ū))=θ(𝔇(v̄))]. The opacity properties of parametric DESs presented in Definitions <ref>, <ref>, <ref> have the same intuitive meanings as their counterparts for classic DESs. We investigate the opacity properties for EFAs and EP-EFAs in Sections IV and V, respectively. § UNDECIDABILITY OF OPACITY IN EFAS In this section, we prove that the opacity properties presented in Definitions <ref>, <ref>, <ref> for EFAs are all undecidable in general. The main idea of the proof is reducing the halting problem of two-counter machines to the verification of the opacity properties. A counter machine is an abstract machine used to model computation in formal logic and theoretical computer science. A counter machine consists of several registers, each of which can only store an integer number, and a set of arithmetic operations and control instructions. Minsky introduced a type of counter machines with two registers r_j, j ∈{1, 2}, and three instructions: INC(r_j), DEC(r_j) and JZ(r_j, z), with the semantics of r_j← r_j + 1, r_j← r_j - 1, and goto(z) if r_j=0, respectively <cit.>. This kind of machine is usually called a Two-Counter Machine (2CM) in the literature. 2CMs are Turing equivalent <cit.>. 
Obviously, a configuration of a 2CM with program P can be described as a triple (r_1,r_2, c ) ∈ℕ^3, where r_1 and r_2 keep the values of the first and second registers, respectively, and c keeps the value of the program counter. Let x(j) denote the j^th entry of the configuration x∈ℕ^3, j∈ [1:3]. Let |P| denote the number of instructions in program P. Firstly, we formulate the (ℕ^3×ℕ^3)-predicate φ^step that characterizes the configuration evolution of the 2CM with program P after executing a single instruction, where the first and second elements refer to the current and subsequent configurations, respectively. Let φ_i be the (ℕ^3×ℕ^3)-predicate describing the relation between the configurations before and after the execution of the i^th instruction of program P. We formulate φ_i according to the type of the i^th instruction as follows. * If the i^th instruction is INC(r_j), j ∈{1,2}, then φ_i(y, x) def= [(x(j) = y(j) + 1) ∧ (x(3-j) = y(3-j)) ∧ (y(3) = i ) ∧ (x(3) = i + 1)], where the first clause means that the j^th register increases by 1, the second clause means that the other register remains unchanged, and the third and fourth clauses mean that the program is executing the i^th instruction and that the next instruction to be executed is the (i+1)^th one, respectively. * If the i^th instruction is DEC(r_j), j ∈{1,2}, then φ_i(y, x) def= [(x(j) = y(j) - 1) ∧ (x(3-j) = y(3-j)) ∧ (y(3) = i ) ∧ (x(3) = i + 1)]. The intuitive meaning of this equation is similar to that of the previous one. * If the i^th instruction is JZ(r_j,z), j ∈{1,2}, then φ_i(y, x) def= [(x(1) = y(1) ) ∧ (x(2) = y(2)) ∧ (y(3) = i ) ∧ (x(3) = ( y(j) = 0 ? z: i+1 ))], where the first and second clauses mean that both registers remain unchanged, the third clause means that the program is executing the i^th instruction, and the last clause adopts a Java-style expression to describe the if-then-else term, i.e., if the register r_j equals 0, then the next instruction to be executed is the z^th one, otherwise the (i+1)^th one. Hence, we obtain the special predicate φ^step for program P as follows. φ^step(y, x) def=⋁_i ∈ [1:|P|]φ_i(y, x). The (ℕ^3×ℕ^3)-predicate φ^eq describing whether two configurations are equal to each other or not is defined as follows. φ^eq(y, x) def=⋀_i∈{1,2,3}[(x(i) = y(i) )] Obviously, φ^step and φ^eq are both predicates in the Boolean algebra 𝒜=(ℕ^3×ℕ^3, Ψ, ∙). For the specific program P, we denote by the ℕ^3-predicates φ^ini and φ^fin its initial and final configurations, respectively. Based on the above discussions, we prove that the current-state opacity of EFAs is undecidable by constructing a special parametric DES E_P w.r.t. program P and reducing the halting problem of P to the verification of the current-state opacity of E_P. The current-state opacity of EFAs is undecidable in general. Firstly, we construct the EFA E_P = { Q={ q_0,q_1, q_2,q_3}, Σ = {σ_1, σ_2, σ_3, σ_4}, X = ℕ^3, Q_0={q_0}, Y = ℕ^3, Y_0=φ^ini , R } w.r.t. a 2CM with program P (shown in Fig. <ref>). The predicates φ^step and φ^eq are defined in Equations (<ref>) and (<ref>), respectively. The predicates φ^ini and φ^fin, as the logical characterizations of the initial and final configurations of program P, respectively, are X-predicates and can also be regarded as special (Y× X)-predicates in which the first variable (i.e., the state parameter) has no influence on the predicate.
In the symbolic transitions, the update function ξ^sto simply stores the event parameter into the parameter of the target state, e.g., ξ^sto in the transition from q_0 to q_1 is defined as: y_q_1← x_σ_1^1. Let the set of secret states be Q_s = { q_0,q_1,q_2} and the set of non-secret states be Q_ns = { q_3}. Consider the observation function θ: ∀ u ∈ (ℕ^3)^*, θ(u) = ϵ. According to Definition <ref>, the parametric DES E_P is current-state opaque if and only if the non-secret behavior is non-empty, i.e., the state q_3 is reachable from the initial state q_0. According to Fig. <ref>, the data strings (i.e., the sequences of configurations) that can reach the state q_3 from the initial state q_0 have the form v = a_1a_2… a_2*na_2*n+1, n ≥ 1, and q_3 is reachable if and only if v satisfies the following formulae: a_1∈φ^ini, a_2*n+1∈φ^fin, a_2*j+1∉φ^fin, j ∈ [1:n-1], and for i ∈ [1:n], (a_2*i-1,a_2*i) ∈φ^step and (a_2*i,a_2*i+1) ∈φ^eq. For such a sequence v satisfying the aforementioned formulae, there exists a one-to-one corresponding sequence w=a_1a_2a_4 … a_2*(n-1)a_2*n, n ≥ 1, where a_1 and a_2*n are, respectively, the initial and final configurations, and each pair of adjacent configurations satisfies the predicate φ^step. This means that w is exactly the evolution sequence of configurations during the execution of program P, i.e., the 2CM with program P halts if and only if there exists such a sequence w. By Lemma <ref>, the halting problem of 2CMs is undecidable, which implies the undecidability of the existence of such a w, and further implies the undecidability of the existence of such a v. Hence, the reachability of state q_3 in E_P is undecidable, and so is the current-state opacity of E_P. Therefore, the current-state opacity of EFAs is undecidable in general. The initial-state opacity of EFAs is undecidable in general. First of all, we construct the EFA E_P = { Q={ q_0, …,q_4}, Σ = {σ_1, … ,σ_5}, X=ℕ^3, Q_0={q_0,q_4}, Y=ℕ^3, Y_0=φ^ini , R } for a 2CM with program P. In the EFA E_P, the predicates φ^ini, φ^eq, φ^step and φ^fin, and the update function ξ^sto have the same definitions as their counterparts in E_P (shown in Fig. <ref>). Let the set of secret initial states be Q_s = { q_4} and the set of non-secret initial states be Q_ns = { q_0}. Consider the observation function θ: θ(u) = u, u ∈ (ℕ^3)^*. Under these settings, we have the following fact. ⋃_v ∈ L_Q_s^Q(E_P)θ(𝔇(v)) = ⋃_v ∈ L_{q_4}^{q_4}(E_P)θ(𝔇(v)) = (ℕ^3)^*. That is, the set of observations for the secret behavior is the universal set (ℕ^3)^*. According to Definition <ref>, E_P is initial-state opaque if and only if the set of observations for the non-secret behavior is also the universal set (ℕ^3)^*, i.e., ⋃_u ∈ L_Q_ns^Q(E_P)θ(𝔇(u)) = ⋃_u ∈ L_{q_0}^{q_0,q_1,q_2,q_3}(E_P)θ(𝔇(u)) = (ℕ^3)^*. In order to investigate the validity of Equation (<ref>), we construct a new EFA E_P from E_P by removing the state q_4 and its corresponding transitions, and adding a state q_5 and two corresponding transitions (i.e., the transitions denoted by dotted arrows in Fig. <ref>). In the new EFA E_P, the disjunction of the predicates of the transitions originating from the same state is equal to the true predicate ⊤, e.g., for state q_2, (φ^fin∧φ^eq) ∨ (¬φ^fin∧φ^eq) ∨ (¬φ^eq) = ⊤. Hence, we have the fact that ⋃_u ∈ L_{q_0}^{q_0,q_1,q_2,q_3,q_5}(E_P)θ(𝔇(u)) = (ℕ^3)^*. According to Equation (<ref>), it is obvious that Equation (<ref>) holds if and only if the state q_5 is not reachable in E_P.
Notice that the reachability of q_5 in E_P is identical to the reachability of q_3 in E_P (shown in Fig. <ref>), which has been proved to be undecidable in Theorem <ref>. Hence, the validity of Equation (<ref>) is undecidable, and so is the initial-state opacity of the EFA E_P. Therefore, the initial-state opacity of EFAs is undecidable in general. The infinite-step opacity of EFAs is undecidable in general. We consider the same EFA E_P with the same secret states, non-secret states and observation function as in Theorem <ref>. By Definition <ref>, E_P is infinite-step opaque if and only if the state q_3 is reachable from the initial state q_0, which has been proved to be undecidable in Theorem <ref>. Therefore, the infinite-step opacity of E_P is undecidable, and the infinite-step opacity of EFAs is undecidable in general. As mentioned before, the EFA model is a quite powerful tool for simulating the interactions between a system and its environment. However, the coexistence of event and state parameters in the predicates complicates this model and makes its opacity properties undecidable. Hence, it is necessary to consider the EP-EFA model, in which the state parameter is removed. § OPACITY OF EP-EFAS In this section, we investigate the current-state opacity, initial-state opacity and infinite-step opacity of EP-EFAs. We first present the verification algorithms for these opacity properties, and then analyze the complexity of these algorithms. §.§ Current-State Opacity of EP-EFAs In fact, Definition <ref> implies that current-state opacity holds if and only if, for any observation, the intruder cannot determine that the system is in a secret state. To make this precise, we present the following notion. Given the EP-EFA S= (Q, Σ, X, Q_0, T), the state estimation function Est^S: Θ(S) → 2^Q is defined as follows: for any observation w ∈Θ(S), Est^S(w) = { q ∈ Q | ∃ q_0∈ Q_0, u ∈ L(S), s.t. q_0 q ∧ w = θ(𝔇(u)) }. For classic DESs, the state estimations can be calculated by constructing a special automaton, the observer <cit.>. Inspired by this idea, we present an algorithm (Algorithm <ref>) to construct the symbolic observer Obs(S) = { Q^obs, q^obs_0, T^obs} for the EP-EFA S= (Q, Σ, X, Q_0, T). The symbolic observer Obs(S) is a special EP-EFA without event tags. In the following, we prove that the verification of current-state opacity for the EP-EFA S can be realized by means of its symbolic observer Obs(S). Firstly, we present three necessary lemmas. Given an EP-EFA S = (Q, Σ, X, Q_0, T) with the set of secret states Q_s, the set of non-secret states Q_ns, and the observation function θ. S is current-state opaque w.r.t. Q_s, Q_ns and θ, if and only if for any observation w ∈Θ(S), Est^S(w) ∩ Q_s≠∅ ⇒ Est^S(w) ∩ Q_ns≠∅. (⇐) Given any u ∈ L_Q_0^Q_s(S). Let w = θ(𝔇(u)). This implies Est^S(w) ∩ Q_s≠∅. Thus, we have Est^S(w) ∩ Q_ns≠∅, which means that there exist q_0∈ Q_0, a non-secret state q∈ Q_ns and a parameterized string v, such that q_0q, and w = θ(𝔇(v)). This further implies that v ∈ L_Q_0^Q_ns(S) and θ(𝔇(u)) = θ(𝔇(v)). According to Definition <ref>, S is current-state opaque. (⇒) Given any observation w ∈Θ(S) satisfying Est^S(w) ∩ Q_s≠∅. Est^S(w) ∩ Q_s≠∅ means that there exists a parameterized string u ∈ L_Q_0^Q_s(S) such that w = θ(𝔇(u)). Since S is current-state opaque, there exists v ∈ L_Q_0^Q_ns(S) such that θ(𝔇(v)) = θ(𝔇(u)) = w. This means that there exist q_0∈ Q_0 and q∈ Q_ns such that q_0q, which implies that q∈ Est^S(w). Thus Est^S(w) ∩ Q_ns≠∅.
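Algorithm <ref> is referenced above but displayed as a float; to fix ideas, the following Python sketch (using the z3 SMT bindings, with all names and the sample guards chosen by us) computes one successor step of the symbolic observer: given the guards of the k-parameter observable transitions leaving the current estimate, it enumerates the satisfiable combined predicates ψ_idx and returns the corresponding successor estimates, before the unobservable closure that the full algorithm also performs. The current-state opacity check established below then reduces to a simple scan of the observer states.

from itertools import combinations
from z3 import Int, And, Not, Solver, sat

def successor_estimates(transitions):
    """transitions: list of (guard, target) pairs, the guards being z3 formulas over
    shared variables x1..xk; returns the satisfiable combined predicates psi_idx
    together with the target sets they lead to (unobservable closure not included)."""
    out = []
    n = len(transitions)
    for size in range(1, n + 1):
        for idx in combinations(range(n), size):
            psi = And(*([transitions[i][0] for i in idx] +
                        [Not(transitions[i][0]) for i in range(n) if i not in idx]))
            s = Solver()
            s.add(psi)
            if s.check() == sat:
                out.append((psi, frozenset(transitions[i][1] for i in idx)))
    return out

def current_state_opaque(observer_states, Qs, Qns):
    # The check established below: every estimate meeting Qs must also meet Qns.
    return all(not (q & Qs) or (q & Qns) for q in observer_states)

# Guards consistent with the combined predicates reported for T^2 in the first
# illustrative example of this section (the exact original guards are not
# reproduced in the extracted text and are an assumption here):
x1, x2 = Int('x1'), Int('x2')
T2 = [(And(x1 >= 5, x2 >= 5), 'q3'),
      (And(x1 >= 5, x2 >= 5, x2 == x1 + 1), 'q2')]
for psi, targets in successor_estimates(T2):
    print(sorted(targets), psi)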
Lemma <ref> implies that the verification of current-state opacity can be realized by going through all the possible state estimations. The following two lemmas further prove that the states of the symbolic observer are exactly all the state estimations. Given an EP-EFA S = (Q, Σ, X, Q_0, T) and its symbolic observer Obs(S) = { Q^obs, q^obs_0, T^obs} constructed by Algorithm <ref>. For any observation w ∈Θ(S), Est^S(w) is the state reached from q^obs_0 by w in Obs(S). Firstly, we claim that Obs(S) constructed by Algorithm <ref> is deterministic, i.e., given an observation, there exists only one reachable state in Q^obs. This is because Equation (<ref>) implies that if idx1 ≠ idx2, then ψ_idx1∧ψ_idx2 is unsatisfiable, and thus no observable unit can simultaneously satisfy two different symbolic transitions originating from the same state q^obs of Obs(S). Secondly, we prove this lemma by induction on the number of observable units in w. Let |w|_u = n. The base case is n = 0, i.e., w = ϵ. It is sufficient to show Est^S(ϵ) = q^obs_0. If q ∈ Q_0, obviously we have q ∈ Est^S(ϵ) and q ∈ q^obs_0. It remains to show Est^S(ϵ) \ Q_0 = q^obs_0\ Q_0. According to Equations (<ref>-<ref>), q q∈T means that there exists a symbolic transition q q in S, such that φ holds for certain k unobservable event parameters or k=0. Therefore, by Equation (<ref>), a state q_n∈ q^obs_0\ Q_0 if and only if there exists a sequence of transitions q_0q_1…q_n in S, q_0∈ Q_0, where each predicate φ_i holds for k_i, i ∈ [1:n], unobservable event parameters or k_i = 0. This is equivalent to saying that there exists a parameterized string u, θ(𝔇(u)) = ϵ, with q_0 q_n by Definition <ref>, which also means that q_n∈ Est^S(ϵ) by Equation (<ref>). Thus the base case holds. The induction hypothesis is that for every observation w with |w|_u≤ n, Est^S(w) is reached by w in Obs(S). We need to show that for any observable unit w = ⟨ a_1… a_k⟩, k ≥ 1, such that ww∈Θ(S), Est^S(ww) is reached by ww from q^obs_0 in Obs(S). This is equivalent to showing that Est^S(ww) is reached by w from the state Est^S(w), due to the fact that the observer Obs(S) is deterministic. Since the observation function θ is static, we can reformulate Est^S(ww) as follows. Est^S(ww) = {q | q q∧ q ∈ Est^S(w) ∧w = θ( 𝔇(u)) }. Taking Est^S(w) as the q^obs in Equation (<ref>), T^k_Est^S(w) is the set of observable transitions that originate from one of the states in Est^S(w) and contain k observable parameters. Suppose idx⊆ [1:|T^k_Est^S(w)|] is the only nonempty index set such that the observable unit w = ⟨ a_1… a_k⟩ satisfies ψ_idx (the existence follows from the fact that ww∈Θ(S), and the uniqueness follows from Equation (<ref>)). According to Equations (<ref>,<ref>), we obtain Est^S(ww) = q^obs. By Algorithm <ref>, we have Est^S(w) q^obs∈ T^obs, and thus Est^S(w) Est^S(ww) ∈ T^obs, which implies that Est^S(ww) is reached from q^obs_0 by ww in Obs(S). This completes the proof of the induction step. Given an EP-EFA S = (Q, Σ,X, Q_0, T) and its symbolic observer Obs(S) = { Q^obs, q^obs_0, T^obs}. We have L(Obs(S)) = Θ(S). We prove this lemma by induction on the number of observable units in w ∈ L(Obs(S)). Let |w|_u = n. The base case is n =0, i.e., w = ϵ. Obviously ϵ∈ L(Obs(S)) and ϵ∈Θ(S). Thus the base case holds. The induction hypothesis is that w ∈ L(Obs(S)) ⇔ w ∈Θ(S) holds for any observation w, |w|_u≤ n. Then we need to show that for each observable unit w=⟨ a_1,…,a_k⟩, ww∈ L(Obs(S)) ⇔ ww∈Θ(S). Suppose q^obs is reached by w from q^obs_0 in Obs(S).
By Lemma <ref> and Equation (<ref>), for each q_i∈ q^obs, there exists an initial state q_0^i∈ Q_0 such that q_0^i q_i, θ(𝔇(u_i)) = w. By Equations (<ref>-<ref>), ww∈ L(Obs(S)) holds if and only if there exists a nonempty index set idx such that ψ_idx holds for w. This is equivalent to saying that there exists at least one observable transition (t_i=q_iq_i) ∈T_q^obs^k, q_i∈ q^obs, w∈φ_i, i ∈idx, which further means that there exists a parameterized event u_i such that q_iq_i and θ(𝔇(u_i)) = w by Equations (<ref>, <ref>). Therefore, ww∈ L(Obs(S)) holds if and only if there exists u_i such that q^i_0q_i, θ(𝔇(u_i)) = w, θ(𝔇(u_i)) = w, which means that ww = θ(𝔇(u_iu_i)) ∈Θ(S). This completes the proof of the induction step. Lemma <ref> implies that the state estimation for each observation is contained in the state space of the symbolic observer Obs(S). Lemma <ref> further implies that only the observations can reach the states of Obs(S). Hence, the state space of Obs(S) is exactly the set of all state estimations of S. Therefore, by Lemmas <ref>, <ref> and <ref>, we have the following theorem. Given an EP-EFA S = (Q, Σ, X, Q_0, T) with the set of secret states Q_s, the set of non-secret states Q_ns, and the observation function θ. Let Obs(S) = { Q^obs, q^obs_0, T^obs} be the symbolic observer constructed by Algorithm <ref>. S is current-state opaque w.r.t. Q_s, Q_ns and θ, if and only if for any q^obs∈ Q^obs, q^obs∩ Q_s≠∅⇒ q^obs∩ Q_ns≠∅. The verification of current-state opacity and the construction of the symbolic observer (Algorithm <ref>) have the same complexity, as checking the validity of Equation (<ref>) can be done during the construction of Obs(S). Suppose the EP-EFA S= (Q, Σ, X, Q_0, T) with step-length K has N states and M symbolic transitions. Assume that g(z) is the cost of checking the satisfiability of a predicate with z free variables in the Boolean algebra. In Step 1) of Algorithm <ref>, we have |T| ≤ M*(K+1), and for each symbolic transition with step-length l, l+1 predicates are checked for satisfiability. Thus, the complexity of Step 1) is at most M*(K+1)*g(K). In Step 2), for each T^k_q^obs, there are 2^|T^k_q^obs|-1 combined predicates that need to be checked for satisfiability. Hence, at most ∑_q^obs∈ Q^obs∑_k=1^K(2^|T^k_q^obs|-1) predicates are checked for satisfiability. For a given q^obs, we have ∑_k=1^K|T^k_q^obs| ≤ |T|, and by this equation, we can prove ∑_k=1^K (2^|T^k_q^obs|-1) < 2^|T|≤ 2^M*(K+1). Since |Q^obs| ≤ 2^N, the complexity of Step 2) of Algorithm <ref> is at most g(K) * 2^N*2^M*(K+1). Therefore, the complexity of the verification of current-state opacity is g(K) * 2^N+M*K. As mentioned above, the EP-EFA model can address many complex kinds of data and operations via the symbolic transitions. However, for simplicity in demonstrating the obtained results, the following illustrative examples only consider integer arithmetic. Consider an EP-EFA S with X = ℕ shown in Fig. <ref>, where the set of secret states is Q_s = {q_2} and the set of non-secret states is Q_ns = Q\ Q_s. Suppose that the observation function θ is determined by the X-predicate ϑ(x) def= [x ≥ 5 ]. Firstly, we construct the observable transitions as follows. T_t_1= { q_0 q_1}. T_t_2= { q_0 q_3; q_0 q_3}. T_t_3= { q_1 q_2 ; q_1 q_2}. T_t_4= { q_3 q_4 ; q_3 q_4}. T_t_5= { q_2 q_2 ; q_2 q_2}. T_t_6= { q_4 q_4 ; q_4 q_4}. Secondly, we have q^obs_0 = {q_0, q_1}, and obtain the corresponding sets as follows. T^2_q^obs_0 = { q_0 q_3 ; q_1 q_2}. T^1_q^obs_0 = { q_1 q_2; q_0 q_3}.
For T^2_q^obs_0, the set of satisfiable combined predicates is as follows. Ψ(T^2_q^obs_0) = {ψ_{1} = [x_1≥ 5 ∧ x_2≥ 5 ∧ x_2≠ x_1 +1]; ψ_{1,2} = [x_1≥ 5 ∧ x_2≥ 5 ∧ x_2 = x_1 +1] }. Through the transitions guarded with ψ_{1} and ψ_{1,2}, the states { q_3, q_4} and {q_2, q_3, q_4} are, respectively, generated and put into Q^obs. In addition, the corresponding symbolic transitions are put into T^obs. For T^1_q^obs_0, the set of satisfiable combined predicates is as follows. Ψ(T^1_q^obs_0) = {ψ_{1} = [x_1 = 5 ]; ψ_{2} = [x_1 > 5 ]}. Through the transitions guarded with ψ_{1} and ψ_{2}, the states { q_2}, { q_3, q_4} are, respectively, generated, and the former is put into Q^obs. Meanwhile, the corresponding symbolic transitions are put into T^obs. For the other unvisited states in Q^obs, we proceed in the same way as for the state q^obs_0. Finally, we obtain the symbolic observer, as shown in Fig. <ref>, where Q^obs = {{ q_0,q_1}, {q_2}, { q_3,q_4}, { q_2,q_3,q_4}, {q_4}, { q_2,q_4}}. For the state {q_2}∈ Q^obs, we have { q_2}∩ Q_s≠∅ and { q_2}∩ Q_ns = ∅. Hence, by Theorem <ref>, S is not current-state opaque w.r.t. {q_2}, {q_0,q_1,q_3,q_4} and θ. §.§ Initial-State Opacity of EP-EFAs The ways in which the secret behavior is coded in current-state opacity and in initial-state opacity are mutually reverse. Exploiting this property, we transform the verification of initial-state opacity into the verification of current-state opacity for EP-EFAs. Firstly, we define the reverse operations for parameterized strings, data strings and symbolic transitions. Given a parameterized string u = σ_1⟨ a_1^1, a_1^2, …, a_1^k_1⟩ σ_2⟨ a_2^1, a_2^2, …, a_2^k_2⟩ … σ_n⟨ a_n^1, a_n^2, …, a_n^k_n⟩, its reverse is u^rdef=σ_n⟨ a_n^k_n, …, a_n^2, a_n^1⟩ … σ_2⟨ a_2^k_2, …, a_2^2,a_2^1⟩ σ_1⟨ a_1^k_1, …, a_1^2, a_1^1⟩. For a data string d = 𝔇(u), the reverse of d is d^rdef=𝔇(u^r). For a symbolic transition t = q q, the reverse of t is defined as t^rdef=q q, where the predicate φ^r is obtained from φ by renaming the free variable x^i_σ to x^k+1 -i_σ, i ∈ [1:k], e.g., the reverse of the X^4-predicate φ = [x^1_σ > x^3_σ∧ x^2_σ≠ x^4_σ] is φ^rdef= [x^4_σ > x^2_σ∧ x^3_σ≠ x^1_σ]. By the aforementioned definitions, we have d ∈φ if and only if d^r∈φ^r. Given an EP-EFA S = (Q, Σ, X, Q_0, T). The reverse of S is defined as S^r = (Q, Σ, X, Q_0^r, T^r), where the set of initial states is Q_0^r = Q and the set of symbolic transitions is T^r = {t^r|t∈ T}. Definition <ref> generalizes the notion of reverse automata <cit.>, which has been widely used in many fields. In particular, by constructing the observer for reverse finite automata, Wu et al. <cit.> proposed an approach to verify the initial-state opacity for classic DESs. The following proposition follows from the definitions of the reverse operations, symbolic transitions, languages and observations. Given a transition t, a parameter tuple d, a parameterized string u, an observation w, and an EP-EFA S = (Q, Σ, X, Q_0, T) and its reverse S^r = (Q, Σ, X, Q_0^r, T^r). The following equations hold. 1) d ∈ prd(t) ⟺ d^r∈ prd(t^r), where prd(t) and prd(t^r) denote the predicates of t and t^r, respectively. 2) q q⟺q q. 3) u ∈ L_Q_1^Q_2(S) ⟺ u^r∈ L_Q_2^Q_1(S^r). 4) w = θ(𝔇(u)) ⟺ w^r = θ(𝔇(u^r)). Given an EP-EFA S = (Q, Σ, X, Q_0, T) with the set of secret initial states Q_s⊆ Q_0, the set of non-secret initial states Q_ns⊆ Q_0, and observation function θ. The reverse of S is S^r = (Q, Σ, X, Q_0^r, T^r) where Q_0^r = Q. S is initial-state opaque w.r.t.
Q_s, Q_ns and θ, if and only if S^r is current-state opaque w.r.t. Q_s, Q_ns and θ. By Definition <ref>, S is initial-state opaque w.r.t. Q_s, Q_ns and θ, if and only if (∀ u ∈ L^Q_Q_s(S)) (∃ v ∈ L^Q_Q_ns(S)) θ(𝔇(u))=θ(𝔇(v)). This is equivalent to (∀ u^r∈ L^Q_s_Q(S^r)) (∃ v^r∈ L^Q_ns_Q(S^r)) θ(𝔇(u^r)) = θ(𝔇(v^r)) by Proposition <ref>. This means that S^r is current-state opaque w.r.t. Q_s, Q_ns and θ according to Definition <ref>. Theorem <ref> implies that the verification of initial-state opacity can be efficiently reduced to the verification of current-state opacity. Since the reverse EP-EFA S^r has the same scale as S, the complexity of the verification of initial-state opacity is also g(K) * 2^N+M*K. Consider the EP-EFA S shown in Fig. <ref> with the same observation function θ as in Example <ref>. Suppose the set of initial states is Q_0 = {q_0, q_1, q_2}, and the secret initial states and non-secret initial states are Q_s = { q_2} and Q_ns = { q_0, q_1}, respectively. For the reverse EP-EFA S^r = (Q, Σ, X, Q, T^r), we construct the symbolic observer Obs(S^r) = { Q^obs_r, q^obs_0, T^obs_r} according to Algorithm <ref>. For the initial state of the observer q^obs_0 = Q, we obtain the subsets of observable transitions as follows. T^2_q^obs_0 = { q_3 q_0 ; q_2 q_1}. T^1_q^obs_0 = { q_2 q_1; q_3 q_0; q_4 q_3; q_2 q_2; q_4 q_4}. For T^2_q^obs_0, the set of satisfiable combined predicates is as follows. Ψ(T^2_q^obs_0) = {ψ_{1} = [x_1≥ 5 ∧ x_2≥ 5 ∧ x_1≠ x_2 +1]; ψ_{1,2} = [x_1≥ 5 ∧ x_2≥ 5 ∧ x_1 = x_2 +1] }. Through the transitions guarded with the above predicates, the states { q_0} and {q_0, q_1} are generated and put into Q^obs_r. For T^1_q^obs_0, the set of satisfiable combined predicates is as follows. Ψ(T^1_q^obs_0) = {ψ_{1,3,4,5} = [x_1 = 5 ]; ψ_{2,3,4,5} = [x_1 = 6 ]; ψ_{2,4,5} = [x_1≥ 7 ] }. Through the transitions guarded with the above predicates, the states { q_0, q_1, q_2, q_3,q_4} and { q_0, q_2, q_3, q_4} are generated and put into Q^obs_r. Similarly, we handle the other unvisited states in Q^obs_r, and obtain the symbolic observer Obs(S^r), shown in Fig. <ref>, where Q^obs_r = {{q_0,q_1,q_2,q_3,q_4}, {q_0,q_2,q_3,q_4}, {q_0,q_1}, {q_0}}. Notice that for each state q^obs_r∈ Q^obs_r, q^obs_r∩ Q_s≠∅ always implies q^obs_r∩ Q_ns≠∅; thus S^r is current-state opaque w.r.t. {q_2}, {q_0,q_1} and θ. By Theorem <ref>, S is initial-state opaque w.r.t. {q_2}, {q_0,q_1} and θ. §.§ Infinite-Step Opacity of EP-EFAs Yin et al. <cit.> presented an ingenious method to verify the infinite-step opacity of FAs by combining the observers of the obverse and reverse automata (called two-way observers in <cit.>). Following this idea, we have the following theorem. Given an EP-EFA S = (Q, Σ, X, Q_0, T) with the set of secret states Q_s, the set of non-secret states Q_ns, and the observation function θ. The reverse of S is S^r = (Q, Σ, X, Q_0^r, T^r) where Q_0^r = Q. S is infinite-step opaque w.r.t. Q_s, Q_ns and θ, if and only if (∀ w ∈Θ(S))(∀w^r∈Θ(S^r)) [Est^S(w) ∩ Est^S^r(w^r) ∩ Q_s ≠∅⇒ Est^S(w) ∩ Est^S^r(w^r) ∩ Q_ns≠∅]. By Equation (<ref>), Est^S(w) and Est^S^r(w^r) are {q∈ Q | q_0q∧ q_0∈ Q_0∧ w = θ(𝔇(u))} and {q∈ Q | q q∧ q ∈ Q ∧w^r = θ(𝔇(u^r))}, respectively, and the latter further implies {q∈ Q | q q ∧ q ∈ Q ∧w = θ(𝔇(u))} by Proposition <ref>. Therefore, Est^S(w) ∩ Est^S^r(w^r) ∩ Q_s and Est^S(w) ∩ Est^S^r(w^r) ∩ Q_ns, respectively, are equivalent to A={ q∈ Q_s | q_0q∧q q ∧ q_0∈ Q_0∧ w = θ(𝔇(u)) ∧w = θ(𝔇(u)) }, and B={ q∈ Q_ns | q_0q∧q q ∧ q_0∈ Q_0∧ w = θ(𝔇(v)) ∧w = θ(𝔇(v)) }.
To complete the proof, it is sufficient to show the equivalence between Equations (<ref>) and (<ref>). Firstly, we prove that Equation (<ref>) implies Equation (<ref>). For any uu∈ L(S) satisfying u ∈ L_Q_0^Q_s(S), let w_1 = θ(𝔇(u)), w_1 = θ(𝔇(u)); then we have w_1∈Θ(S) and w_1^r∈Θ(S^r). Taking the w_1 and w_1 here as the w and w in Equation (<ref>), we have A ≠∅. By Equation (<ref>), we have B ≠∅, which implies that there exists vv∈ L(S) satisfying v∈ L_Q_0^Q_ns(S), such that θ(𝔇(u)) = θ(𝔇(v)) and θ(𝔇(u)) = θ(𝔇(v)). This means that Equation (<ref>) holds. Secondly, we prove that Equation (<ref>) implies Equation (<ref>). For any w ∈Θ(S) and w^r∈Θ(S^r) satisfying A ≠∅, we have uu∈ L(S), such that u ∈ L^Q_s_Q_0(S), w = θ(𝔇(u)) and w = θ(𝔇(u)). By Equation (<ref>), there exists vv∈ L(S) such that v ∈ L^Q_ns_Q_0(S), θ(𝔇(u))=θ(𝔇(v)) and θ(𝔇(u))=θ(𝔇(v)), which implies B ≠∅. Therefore, Equation (<ref>) holds. According to Lemmas <ref>, <ref>, the state space of the observer of an EP-EFA is exactly the set of state estimations. By Theorem <ref>, the verification of infinite-step opacity can be realized by going through the state spaces of Obs(S) and Obs(S^r). Hence, we have the following algorithm (Algorithm <ref>) to verify the infinite-step opacity of EP-EFAs. As discussed before, the complexity of Steps 1) and 2) of Algorithm <ref> is g(K) * 2^N+M*K. Since |Q^obs| ≤ 2^N and |Q^obs_r| ≤ 2^N, the complexity of Step 3) of Algorithm <ref> is 4^N. Therefore, the complexity of the verification of infinite-step opacity is g(K) * 2^N+M*K + 4^N. Consider the EP-EFA shown in Fig. <ref>, where the set of secret states is Q_s ={ q_3} and the set of non-secret states is Q_ns = { q_4}. Obs(S) and Obs(S^r) have been calculated in Examples <ref> and <ref>, as shown in Fig. <ref> and Fig. <ref>, respectively. Notice that q^obs∩ q^obs_r∩{q_3}≠∅ implies q^obs∩ q^obs_r∩{q_4}≠∅ for all pairs of states (q^obs,q^obs_r) ∈ Q^obs× Q^obs_r. Therefore, S is infinite-step opaque w.r.t. {q_3}, {q_4} and θ. § CONCLUSION In this paper, we have investigated two parametric DES models, i.e., EFAs and EP-EFAs, and established a preliminary opacity theory for parametric DESs, which lays a foundation for analyzing the opacity of complex systems. Parametric DESs extend classic DESs by means of symbolic transitions carrying predicates over an infinite parameter space, and they can efficiently represent and process many kinds of real-world data with the help of SMT solvers. It has been illustrated that the coexistence of state and event parameters in the predicates not only enhances the parametric model but also complicates it. Specifically, we have proved that the EFA model is more expressive than the EP-EFA model, and also proved that the opacity properties of EFAs are undecidable in general. In addition, the EP-EFA model reduces the complexity of EFAs by removing the state parameter, which makes its opacity properties decidable. We have provided verification algorithms for the current-state opacity, initial-state opacity and infinite-step opacity of the EP-EFA model, and discussed the complexity of these algorithms. One direction for future work is to investigate opacity enforcement for parametric DESs. Another topic worthy of further investigation is to explore a more powerful parametric model whose opacity properties are still decidable. § ACKNOWLEDGMENTS This work is supported by the National Natural Science Foundation of China (Grant No.
61876195), the Natural Science Foundation of Guangdong Province of China (Grant No. 2022A1515011136), the Special Projects in Key Fields Foundation of the Department of Education of Guangdong Province of China (Grant No. 2021ZDZX1043), Guangxi Science and Technology Project (No. Guike AD23026227) and the Project Improving the Basic Scientific Research Ability of Young and Middle-aged Teachers in Guangxi Universities of China (Grant No. 2021KY0591). 1 IEEEtran desbook C. G. Cassandras and S. Lafortune, Introduction to Discrete Event Systems, 2nd Ed., New York, NY, USA: Springer, 2008. opacity-review R. Jacob, J. J. Lesage, and J. M. Faure, “Overview of discrete event systems opacity: models, validation, and quantification," Annual Reviews in Control, vol. 41, pp. 135-146, 2016. l-opacity F. Lin, “Opacity of discrete event systems and its applications," Automatica, vol. 47, no. 3, pp. 496-503, 2011. cso A. Saboori and C. N. Hadjicostis, “Notions of security and opacity in discrete event systems," in Proceeding of the 46th IEEE Conference on Decision and Control, New Orleans, LA, USA, 2007, pp. 5056-5061. iso A. Saboori and C. N. Hadjicostis, “Verification of initial-state opacity in security applications of DES," in Proceedings of the 9th International Workshop on Discrete Event Systems, Göteborg, Sweden, 2008, pp. 328-333. ifo Y. Wu and S. Lafortune, “Comparative analysis of related notions of opacity in centralized and coordinated architectures," Discrete Event Dynamic Systems, vol. 23, no. 3, pp. 307-339, 2013. infinite X. Yin and S. Lafortune, “A new approach for the verification of infinite-step and K-step opacity using two-way observers," Automatica, vol. 80, pp. 162-171, 2017. pre-opacity S. Yang and X. Yin, “Secure your intention: on notions of pre-opacity in discrete-event systems," IEEE Transactions on Automatic Control, DOI:10.1109/TAC.2022.3210148, 2022. supervisory-enforcement1 J. Dubreil, P. Darondeau, and H. Marchand, “Supervisory control for opacity," IEEE Transactions on Automatic Control, vol. 55, no. 5, pp. 1089-1100, 2010. supervisory-enforcement2 Y. Xie, X. Yin, and S. Li, “Opacity enforcing supervisory control using non-deterministic supervisors," IEEE Transactions on Automatic Control, DOI: 10.1109/TAC.2021.3131125, 2021. output-enforcement1 X. Yin, S. Li , “Synthesis of dynamic masks for infinite-step opacity," IEEE Transactions on Automatic Control, vol. 65, no. 4, pp. 1429-1441, 2020. output-enforcement2 C. Keroglou and S. Lafortune, “Embedded insertion functions for opacity enforcement ," IEEE Transactions on Automatic Control, vol. 66, no. 9, pp. 4184-4191, 2021. output-enforcement3 X. Li , C. N. Hadjicostis and Z. Li, “Extended insertion functions for opacity enforcement in discrete-event systems," IEEE Transactions on Automatic Control, vol. 67, no. 10, pp. 5289-5303, 2022. timed-opacity F. Cassez, “The dark side of timed opacity," in Advances in Information Security and Assurance (Lecture Notes in Computer Science), Berlin, Germany: Springer, vol. 5576, 2009, pp. 21-30. timed-opacity2 L. Wang, N. Zhan, and J. An, “The opacity of real-time automata," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 37, no. 11, pp. 2845-2856, 2018. network-opacity1 J. Yang, W. Deng, D. Qiu, and C. Jiang, “Opacity of networked discrete event systems," Information Sciences, vol. 543 pp. 328-344, 2021. network-opacity2 Z. Zhang, S. Shu, C. Xia, Networked opacity for finite state machine with bounded communication delays, Information Sciences, vol. 
572, pp. 57-66, 2021. petri-opacity1 Y. Tong, Z. Li, C. Seatzu, and A. Giua, “Verification of state-based opacity using Petri nets," IEEE Transactions on Automatic Control, vol. 62, no. 6, pp. 2823-2837, 2017. petri-opacity2 X. Cong, M. Fanti, A. Mangini, and Z. Li, “On-line verification of current-state opacity by Petri nets and integer linear programming," Automatica, vol. 94, pp. 205-213, 2018. petri-opacity3 Y. Dong, Z. Li, and N. Wu, “Symbolic verification of current-state opacity of discrete event systems using Petri nets," IEEE Transactions on Systems, Man, and Cybernetics: Systems, DOI:10.1109/TSMC. 2022.3151695, 2022. opacity-cps1 X. Yin, M. Zamani, and S. Liu, “On approximate opacity of cyber-physical systems," IEEE Transactions on Automatic Control, vol. 66, no. 4, pp. 1630-1645, 2021. opacity-cps2 S. Liu, A. Trivedi, X. Yin, and M. Zamani, “Secure-by-construction synthesis of cyber-physical systems," Annual Reviews in Control, vol. 53, pp. 30-50, 2022. probilistic-opacity1 A. Saboori and C. N. Hadjicostis, “Current-state opacity formulations in probabilistic finite automata," IEEE Transactions on Automatic Control, vol. 59, no. 1, pp. 120-133, 2014. probilistic-opacity2 X. Yin, Z. Li, W. Wang, and S. Li, “Infinite-step opacity and K-step opacity of stochastic discrete-event systems," Automatica, vol. 99, pp. 266-274, 2019. fuzzy-opacity1 W. Deng, D. Qiu, and J. Yang, “Opacity measures of fuzzy discrete event systems," IEEE Transactions on Fuzzy Systems, vol. 29, no. 9, pp. 2612-2622, 2021. fuzzy-opacity2 W. Deng, D. Qiu, and J. Yang, “Fuzzy infinite-step opacity measure of discrete event systems and its applications," IEEE Transactions on Fuzzy Systems, vol. 30, no. 3, pp. 885-892, 2022. efa1 Y. Chen and F. Lin, “Modeling of discrete event systems using finite state machines with parameters." in Proceedings of 2000 IEEE International Conference on Control Applications (CCA), Anchorage, Alaska, USA, 2000, pp. 941-946. efa2 L. Ouedraogo, R. Kumar, R. Malik, and K. Åkesson, “Nonblocking and safe control of discrete-event systems modeled as extended finite automata," IEEE Transactions on Automatation Science and Engineering, vol. 8, no. 3, pp. 560-569, 2011. efa3 M. A. Goorden, M. Fabian, J. M. Mortel-Fronczak et al., “Compositional coordinator synthesis of extended finite automata," Discrete Event Dynamic Systems, vol. 31, no. 3, pp. 317-348, 2021. learning S. Cassel, F. Howar, B. Jonsson, and B. Steffen, “Learning extended finite state machines," Formal Aspects of Computing, vol. 28, no. 2, pp. 233-263, 2016. esfa L. D'Antoni and M. Veanes, “Extended symbolic finite automata and transducers," Formal Methods in System Design, vol. 47, no. 1, pp. 93-119, 2015. sft M. Veanes, P. Hooimeijer, B. Livshits et al., “Symbolic finite state transducers: algorithms and applications," in Proceedings of the 39th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, Philadelphia PA, USA, 2012, pp. 137-150. infinite-alphabet1 Abdulla, Parosh Aziz, et al. “General decidability theorems for infinite-state systems," in Proceedings 11th Annual IEEE Symposium on Logic in Computer Science. New Brunswick, NJ, USA, pp. 313-321, 1996. infinite-alphabet2 Segoufin L, Segoufin L, “Automata and logics for words and trees over an infinite alphabet," in Computer Science Logic: 20th International Workshop, Szeged, Hungary, Springer Berlin Heidelberg, pp. 41-57, 2006. infinite-alphabet3 F. Neven, T. Schwentick, and V. 
Vianu, “Finite state machines for strings over infinite alphabets," ACM Transactions on Computational Logic, vol. 5, no. 3, pp. 403-435, 2004. solver2 C. Barrett and C. Tinelli, “Satisfiability modulo theories," in Handbook of Model Checking, Cham, Switzerland: Springer, 2018, pp. 305-343. book-computation S. Michael, Introduction to the Theory of Computation, 3rd Ed., Boston, MA, USA: Cengage Learning, 2012. CM M. Minsky, Computation: Finite and Infinite Machines, 1st Ed., Englewood Cliffs, N. J., USA: Prentice-Hall, 1967. [ < g r a p h i c s > ] Weilin Deng received the B.S. and M.S. degrees in computer science from South China University of Technology, Guangzhou, China, in 2003 and 2008, respectively, and the Ph.D. degree in computer software and theory from Sun Yat-Sen University, Guangzhou, China, in 2016. From 2016 to 2019, he was an associate research fellow with Sun Yat-Sen University. He is currently an associate professor with Guangdong University of Finance. His current research interests include discrete-event systems, fuzzy/probabilistic systems and computations, and theoretical computer science. He is the author or co-author of more than 20 peer-review papers published in various academic journals and conferences, including IEEE TAC, IEEE TFS, IEEE CDC, INT J CONTROL and Information Sciences. [ < g r a p h i c s > ] Daowen Qiu received the M.S. degree in mathematics from Jiangxi Normal University, Nanchang, China, in 1993 and the Ph.D. degree in mathematics from Sun Yat-Sen University, Guangzhou, China, in 2000. During 2000 and 2001, he was a Postdoctoral Researcher in computer science with Tsinghua University, Beijing, China. Since August 2002, he has been associated with Sun Yat-Sen University, and then a Full Professor of computer science in May 2004. His current research interests include quantum computing, discrete-event systems, fuzzy and probabilistic computation, and he has focused on models of quantum and probabilistic computation, quantum information. He is the author or co-author of more than 160 peer-review papers published in various academic journals and conferences, including Information and Computation, Artificial Intelligence, Journal of Computer and System Sciences, Theoretical Computer Science, IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS PART B, IEEE TRANSACTIONS ON AUTOMATIC CONTROL, IEEE TRANSACTIONS ON FUZZY SYSTEMS, Physical Review A, Quantum Information and Computation, Journal of Physics A, and Science in China. He is an editor of Theoretical Computer Science. [ < g r a p h i c s > ] Jingkai Yang received the B.S. and M.S. degrees in mathematics from Guangxi Normal University, Guilin, China, in 2006 and 2009, respectively, and the Ph.D. degree in computer science and technology from Sun Yat-Sen University, Guangzhou, China, in 2022. He is currently an associate professor with Yulin Normal University. His main research interests include opacity analysis, supervisory control and failure diagnosis of discrete-event systems.
http://arxiv.org/abs/2307.04064v1
20230709000056
Local null controllability of a class of non-Newtonian incompressible viscous fluids
[ "Pitágoras de Carvalho", "Juan Límaco", "Denilson Menezes", "Yuri Thamsten" ]
math.AP
[ "math.AP", "math.OC", "35K55, 76D55, 93B05, 93C10" ]
UESPI]P. Carvalho [email protected] IMEUFF]J. Límaco [email protected] IMEUFF]D. Menezes [email protected] IMEUFF]Y. Thamsten [email protected] [UESPI]Departamento de Matemática, Universidade Estadual do Piauí, Teresina, PI, Brasil [IMEUFF]Instituto de Matemática e Estatística, Universidade Federal Fluminense, Niterói, RJ, Brasil We investigate the null controllability property of systems that mathematically describe the dynamics of some non-Newtonian incompressible viscous flows. The principal model we study was proposed by O. A. Ladyzhenskaya, although the techniques we develop here apply to other fluids having a shear-dependent viscosity. Taking advantage of the Pontryagin Minimum Principle, we utilize a bootstrapping argument to prove that sufficiently smooth controls to the forced linearized Stokes problem exist, as long as the initial data in turn has enough regularity. From there, we extend the result to the nonlinear problem. As a byproduct, we devise a quasi-Newton algorithm to compute the states and a control, which we prove to converge in an appropriate sense. We finish the work with some numerical experiments. Null controllability, shear dependent viscosity, nonlinear partial differential equations, non-Newtonian fluids. [2010] 35K55, 76D55, 93B05, 93C10. § INTRODUCTION Let us fix an integer N ∈{ 2,3 }, and let us take a non-empty, open, connected, and bounded subset Ω of ℝ^N with a smooth boundary ∂Ω, and a real number T>0. Henceforth, we write Q:= ]0,T[×Ω, and Σ := [0,T]×∂Ω. In general, we understand all of the derivatives figuring in this work in the distributional sense. We interpret the set Ω as a region occupied by the particles of a fluid with a velocity field y. We represent its pressure by p, whereas v stands for a distributed control which acts as a forcing term through a given open set ω⋐Ω. We assume ω≠∅. The model comprising the subject of the current investigation is the following: [ D y/Dt - ∇·𝒯(y,p) = χ_ω v, in Q, ∇· y = 0, in Q, y = 0, on Σ, y(0) = y_0, in Ω. ] Above, the function χ_ω denotes the indicator function of ω, we define the material derivative as Dy/Dt := y_t + ( y·∇) y, the stress tensor, 𝒯, is given by 𝒯(y,p) := -p I + ν(∇ y) ∇ y, ν(∇ y) := ν_0 + ν_1 |∇ y|^r , in such a way that the constitutive law for the deviatoric stress tensor reads as ν(∇ y)∇ y := ( ν_0 + ν_1 |∇ y|^r) ∇ y, where |∇ y| := [ ∑_i,j=1^N ( ∂_j y_i)^2 ]^1/2. We remark that the three constants ν_0, ν_1, and r appearing above are strictly positive, typically with ν_0 ≫ν_1, although this assumption is not necessary in this work. Therefore, we are focusing on the class of power-law shear-dependent fluids. Pioneers in the study of the system (<ref>)-(<ref>) were O. A. Ladyzhenskaya and J.-L. Lions, see <cit.>. Particularly, let us introduce the usual spaces we use in the mathematical analysis of fluid dynamics, i.e., H := { y ∈ L^2(Ω)^N : ∇· y = 0 in Ω, y· n = 0 on ∂Ω} and V := {y ∈ H^1_0(Ω)^N : ∇· y = 0 in Ω}, where n denotes the outward unit normal on ∂Ω. Then, the results <cit.> (cf. <cit.>) imply the following: Let us suppose that r > N/2 - 1. as well as y_0 ∈ H and χ_ω v ∈ L^q^'(0,T; V^'), where 1/q + 1/q^' = 1, for q := r+2. Then, the problem (<ref>)-(<ref>) admits a unique solution (y,p) such that y ∈ L^r+2(0,T;V) ∩ L^∞(0,T;H) and p ∈ L^2(Q). For r=1 and N=3, the system (<ref>)-(<ref>) is the simple turbulence model of Smagorinsky, see <cit.>. 
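As a quick numerical illustration of the constitutive law above (not taken from the paper, whose numerical experiments appear only in its final section), the following Python sketch evaluates the effective viscosity ν(∇ y) = ν_0 + ν_1 |∇ y|^r and the deviatoric stress for a sample velocity gradient; the parameter values are ours and serve only to display the shear-thickening behaviour, namely that the viscosity grows with |∇ y|.

import numpy as np

def effective_viscosity(grad_y, nu0, nu1, r):
    # |grad y| is the Frobenius norm of the velocity gradient, as in the text.
    return nu0 + nu1 * np.linalg.norm(grad_y, 'fro') ** r

def deviatoric_stress(grad_y, nu0, nu1, r):
    # The shear-dependent part nu(grad y) * grad y of the stress tensor T(y, p).
    return effective_viscosity(grad_y, nu0, nu1, r) * grad_y

# Illustrative values (ours): nu0 = 1.0, nu1 = 0.01, r = 2, with N = 2.
grad = np.array([[0.0, 3.0], [-1.0, 0.0]])
for scale in (1.0, 10.0):
    print(scale, effective_viscosity(scale * grad, 1.0, 0.01, 2))
# The effective viscosity increases with the shear rate: the fluid is shear-thickening.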
Since then, gradient-dependent (or shear-dependent) viscosity models of incompressible viscous fluids have attracted considerable attention from the mathematical, physical, and engineering communities. Some other works investigating the well-posedness for the model (<ref>)-(<ref>) under consideration are <cit.>. The paper <cit.> studies the energy dissipation for the Smagorinsky model. For the investigation of some regularity properties of solutions of (<ref>)-(<ref>), see <cit.> and the references therein. On the one hand, the Navier-Stokes (NS) system of equations (corresponding to formally replacing ν_1 = 0 in (<ref>)) is deeply relevant, not only in mathematics, but for physics, engineering, and biology, see <cit.>. For standard well-posedness results, which are now classic, see <cit.>. However, even with a great effort of researchers, among the main longstanding open problems are the questions about global existence or finite-time blow-up of smooth solutions in dimension three of the incompressible Navier-Stokes (or else the Euler) equations. The system (<ref>)-(<ref>) is a generalization of the Navier-Stokes equations. From a practical perspective, as <cit.> points out, every fluid which solutions of NS decently models is at least as accurately described by those of (<ref>)-(<ref>). On the other hand, for real-world problems, the advantage of considering the more general fluids of power-law type is not slight. In effect, as <cit.> describes, practitioners employed them to investigate problems in chemical engineering of colloids, suspensions, and polymeric fluids, see <cit.>, in ice mechanics and glaciology, see <cit.>, in blood-rheology, see <cit.>, and also in geology, see <cit.>, to name a few instances. We briefly describe the physical meanings of the constants ν_0, ν_1, and r. Firstly, ν_0 stands for the kinematic viscosity of the fluid. If the physical variables are nondimensionalized, then ν_0^-1 is the Reynolds number of the fluid. Secondly, we can conceive the constants ν_1 and r in light of the kinetic theory of gases and the definition of a Stokesian fluid, see <cit.>. For instance, from the point of view of turbulence modeling, we have ν_1 = C_0ℓ^2, where C_0 is a model parameter and ℓ≪ 1 is a mixing length, see <cit.>. In the latter perspective, a possible derivation of the model stands on the Boussinesq assumption for the Reynolds stress, further stipulating that the eddy viscosity ν_t takes the particular form ν_t = ν_1 |∇ y|^r, see <cit.>. The term ν_t given by (<ref>) leads to a stabilizing effect by increasing the viscosity for a corresponding increase in the velocity field gradient, see the discussion in <cit.>; hence, we call these fluids shear-thickening. From the viewpoint of control theory, <cit.> establishes the local null controllability for the Navier-Stokes equations under no-slip boundary conditions; later developments worth mentioning are, e.g, <cit.>. For the study of the Ladyzhenskaya-Smagorinsky model, see <cit.>. The paper <cit.> deals with a similar one-dimensional problem. Regarding local exact controllability properties for scalar equations having a locally nonlinear diffusion, some advances are <cit.>. However, although the diffusion coefficients can be functions of the state (in the case of <cit.> in a simplified form), the methods used in these works seem not enough to tackle the situation in which these coefficients depend on the gradient of the controlled solution. 
Furthermore, the assumptions they make rule out more general diffusions with power-law type nonlinearities. In the present work, we can circumvent all of these difficulties. The notion of controllability we consider in this paper is defined as follows. We say that (<ref>)-(<ref>) is locally null-controllable at time T>0 if there exists η>0 such that, for each y_0 ∈[H^5(Ω)∩ V]^N satisfying the compatibility conditions Ay_0,A^2y_0 ∈[H^1_0(Ω)]^N, as well as y_0_H^5(Ω)^N < η, we can find v ∈ L^2(]0,T[×ω)^N for which the corresponding velocity field y of (<ref>)-(<ref>) satisfies y(T,x) = 0 for almost every x ∈Ω. We now state the main theoretical result we establish in this paper. Let us suppose r ∈{1,2} or r ⩾ 3. For each T>0, the system (<ref>)-(<ref>) is locally null-controllable at time T. Although we stated Theorem <ref> in terms of weak solutions, our methodology yields smooth controls and transient trajectories for the nonlinear system (<ref>)-(<ref>). Namely, we will be able to prove that there is a control parameter v such that ρ_4 v, (ζ v)_t, ζΔ v, ( ζ v_t )_t, ζΔ v_t, ζ D^4 v ∈ L^2(Q)^N, with a corresponding trajectory y satisfying ρ_6∇ y, ρ_7 y_t, ρ_7 Δ y, ρ_8∇ y_t, ρ_9y_tt, ρ_9Δ y_t, ρ_10∇ y_tt, ρ_10D^3 y_t, ρ_9 D^4 y, ρ_11y_ttt, ρ_11Δ y_tt∈ L^2(Q)^N, ρ_6 y, ρ_7 ∇ y, ρ_8y_t, ρ_9Δ y, ρ_9 ∇ y_t, ρ_9 D^3 y, ρ_10y_tt, ρ_10Δ y_t, ρ_11∇ y_tt∈ L^∞(0,T; L^2(Ω)^N), for appropriate time-dependent positive weights ρ_4, ρ_6, ρ_7, ρ_8, ρ_9, ρ_10, ρ_11, ζ, ζ which blow up exponentially as t↑ T. For more details and the proofs, we refer to Sections <ref> and <ref>. Of course, there is a trade-off between such regularity and our requirements on the initial datum. We will comment upon questions that are related to this relation on Section <ref>. We will prove Theorem <ref> with the aid of a local inversion-to-the-right theorem. Namely, we will introduce Banach spaces Y and Z (we provide the details in the second subsection of Section <ref>) as well as a mapping H: Y → Z, such that a solution (y,p,v) of the equation H(y,p,v) = (0,y_0), for a given initial data y_0 meeting the assumptions of Theorem <ref>, is a solution of the control problem, i.e., a tuple subject to (<ref>)-(<ref>) and (<ref>). We will use the inversion theorem to guarantee the existence of a local right inverse of H. For proving that H is well-defined, as well as that it enjoys suitable regularity properties, the key steps are novel high-order weighted energy estimates for a control and the solution of the linearization of the system (<ref>)-(<ref>) around the zero trajectory. Taking advantage of the invertibility properties of DH(0,0,0), we construct the following algorithm allowing the computation of a tuple (y,p,v) solving (<ref>)-(<ref>) and (<ref>). The following local convergence result for Algorithm 1 holds. There exist a small enough constant η > 0, as well as appropriate Banach spaces Y and Z,[We provide, in the second subsection of Section <ref>, the explicit definitions of both Y and Z.] such that, if y_0_H^5(Ω)^N < η, with y_0 satisfying the compatibility conditions of Definition <ref>, then it is possible to find κ∈]0,1[ with the following property: the relations (y^0,p^0,v^0) ∈ Y and (y^0,p^0,v^0)-(y,p,v)_Y < κ, imply the existence of θ∈]0,1[ for which (y^n+1,p^n+1,v^n+1) - (y,p,v)_Y ⩽θ(y^n,p^n,v^n)-(y,p,v)_Y, for all n⩾ 0. In particular, (y^n,p^n,v^n) → (y,p,v) in Y. Here, we fix some notations that we will use throughout the whole paper. 
Firstly, C denotes a generic positive constant that may change from line to line within a sequence of estimates. In general, C depends on Ω, ω, T, ν_0, ν_1, and r. In case C begins to depend on some additional quantity a (or we want to emphasize some dependence), we write C=C(a). We will also write, for every integer k⩾ 0, |D^k y| := [∑_i=1^N ∑_|α|=k(∂^α y_i)^2 ]^1/2, where we used the standard multi-index notation above. We denote the standard norm of L^2(Ω)^N by ·. Finally, we set D^k y := | D^k y|. We finish this introductory section outlining the structure of the remainder of the work. * In Section 2, we study the linearization of (<ref>)-(<ref>) around the zero trajectory — it is a forced Stokes system. With the aid of a global Carleman estimate, we can to show that this system is null controllable. Assuming sufficiently regular initial data, we employ a bootstrapping argument to deduce higher regularity for the control, taking advantage of its characterization via Pontryagin's minimum principle. The higher control regularity naturally leads to higher regularity of the velocity field.. * In Section <ref>, we use a local inversion-to-the-right theorem for mappings between Banach spaces to show that the model (<ref>)-(<ref>) is locally null controllable. * It is in Section 4 that we prove Theorem <ref>. Then, we conduct some numerical experiments to illustrate our theoretical findings. * Finally, we conclude the work in Section <ref> with some comments and perspectives. § STUDY OF THE LINEARIZED PROBLEM §.§ Some previous results Our aim in the present Section is to establish the null controllability of the linear system: [ Ly + ∇ p = χ_ω v + f, in Q, ∇· y = 0, in Q, y = 0, on Σ, y(0) = y_0, in Ω, ] In (<ref>), we have written Ly := y_t - ν_0 Δ y. We achieve this result via a suitable Carleman inequality for the adjoint system of (<ref>); upon writing L^*φ := -φ_t - ν_0 Δφ, it reads [ L^*φ + ∇π = g, in Q, ∇·φ = 0, in Q, φ = 0, on Σ, φ(T) = φ^T, in Ω. ] In the present subsection, we fix notations that we will employ henceforth. Let us consider ω_1 ⋐ω, with ω_1 ≠∅. For the proof of the following lemma, see <cit.>. There is a function η^0 ∈ C^2(Ω) satisfying η^0 >0 in Ω, η^0 = 0 on ∂Ω, |∇η^0| > 0 on Ω\ω_1. We take l ∈ C^∞([0,T]) with l(t) ⩾ T^2/4 on [0,T/2], l(t) = t(T-t), on [T/2,T]. We define γ(x) := e^λ(η^0(x) +mη^0_∞), α(x) := e^5/4λ mη^0_∞ - e^λ(η^0(x) + mη^0_∞), γ_1 := min_Ωγ, γ_2 := max_Ωγ, α_1 := min_Ωα, α_2 := max_Ωα, and γ := γ/l^4, α := α/l^4. Given C>1, m>4, there exists λ_0=λ_0(m,C)>0 such that α_2 ⩽ Cα_1, for all λ⩾λ_0. For s,λ>0, we write I(s,λ,φ) := s^3λ^4 ∫_Q e^-2sαγ^3|φ|^2d(t,x) + sλ^2∫_Q e^-2sαγ |∇φ|^2 d(t,x) + s^-1∫_Q e^-2sαγ^-1(|φ_t|^2 + |Δφ|^2 ) d(t,x). We are ready to recall the Carleman inequality that is the key to study the null controllability of the linear system (<ref>). There exist positive constants s, λ and C depending solely on Ω and ω for which the relations g ∈ L^2(Q)^N, φ^T ∈ H, λ⩾λ and s ⩾s(T^4 + T^8) imply [ I(s,λ,φ) ⩽ C(1+T^2)(s^15/2λ^20∫_Q e^-4sα_1 +2sα_2(γ_2/l^4)^15/2|g|^2d(t,x); + s^16λ^40∫_0^T ∫_ω_1 e^-8sα_1 + 6sα_2(γ_2/l^4)^16|φ|^2dx dt ), ] where φ is the solution of (<ref>) corresponding to g and φ^T. As a consequence, we get the following Observability Inequality. With the notations of Proposition <ref> (possibly enlarging s, λ and C, the latter now depending on T), we have φ(0)^2 ⩽ C(s^15/2λ^20∫_Q e^-4sα_1 +2sα_2γ_2^15/2|g|^2d(t,x) + s^16λ^40∫_0^T ∫_ω_1 e^-8sα_1 + 6sα_2γ_2^16|φ|^2dx dt ). From now on, we fix λ = λ and s=s. 
Moreover, in view of Remark <ref>, given γ > 0, we can take λ = λ(γ) large enough in such a way that α_2 < (1+γ)α_1. Whenever we need (<ref>) in subsequent estimates, for a suitable positive real number γ, we will assume it holds in all that follows. For p,q,r ∈ℝ, we introduce the weights μ_p,q,r(t):= exp{psα_1 l^-4(t) }exp{qsα_2 l^-4(t) } l^r(t). Regarding these weights, it is valuable to note: Let p,p_1,p_2,q,q_1,q_2,r,r_1,r_2 be nine real numbers. (a) One has the equality μ_p_1,q_1,r_1μ_p_2,q_2,r_2 = μ_p_1+p_2,q_1+q_2,r_1+r_2. In particular, for integral k, μ_p,q,r^k = μ_kp,kq,kr. (b) There exists a constant C>0 such that |d/dtμ_p,q,r|⩽ Cμ_p,q,r-5, |d/dt(μ_p,q,r^2) |⩽ C μ_p,q,r-5/2^2. (c) There exists a constant C>0 such that μ_p_1,q_1,r_1⩽ Cμ_p_2,q_2,r_2 if, and only if, p_1α_1 + q_1α_2 = p_2α_1 + q_2α_2 and r_1 ⩾ r_2, or p_1α_1 + q_1α_2 < p_2α_1 + q_2α_2. We define the weights ρ_0 := μ_0,1,6, ρ_1 := μ_0,1,2, ρ_2 := μ_0,1,-2, ρ_3 := μ_2,-1,15, and ρ_4 := μ_4,-3,32. With these notations, we can gather Proposition <ref> and Corollary <ref> together, resulting in the following statement. There is a constant C=C(Ω,ω,s,λ,m,T)>0 such that the solution φ of (<ref>) corresponding to g ∈ L^2(Q)^N and φ^T ∈ H satisfies [ φ(0)^2 + ∫_Q[ ρ_0^-2|φ|^2 + ρ_1^-2|∇φ|^2 + ρ_2^-2(|φ_t|^2 + |Δφ|^2 )]d(t,x); ⩽ C(∫_Qρ_3^-2|g|^2d(t,x) + ∫_0^T ∫_ω_1ρ_4^-2|φ|^2 dx dt ). ] §.§ Null controllability of the linear system We suppose y_0 ∈ H, ρ_0 f ∈ L^2(Q)^N. Then there exist controls v ∈ L^2(]0,T[×ω)^N such that the state y of (<ref>) corresponding to v, f and y_0 satisfies ∫_Q ρ_3^2|y|^2d(t,x) + ∫_0^T∫_ωρ_4^2|v|^2dx dt ⩽ Cκ_0(y_0,f), where κ_0(y_0,f) := y_0_H^2 + ∫_Q ρ_0^2|f|^2d(t,x). In particular, y(T) = 0 almost everywhere in Ω. We define P_0 := { (w,σ) ∈ C^2(Q)^N+1 : ∇· w ≡ 0, w|_Σ≡ 0, ∫_Ωσ dx = 0 }, we take χ∈ C^∞_c(ω), with 0 ⩽χ⩽ 1, χ|_ω_1≡ 1, and we consider on P_0 the continuous bilinear form b((w,σ),(w,σ)) := ∫_Q {ρ_3^-2(L^*w +∇σ)·(L^* w + ∇σ) + χρ_4^-2w·w} d(t,x). By Corollary <ref>, b is an inner product on P_0. Let us denote P:= P_0^b(·,·), i.e., P is the completion of P_0 under the norm induced by b(·,·). We also deduce, from the corollary we just mentioned, that that the linear form Λ : (w,σ) ∈ P ⟼∫_Ω y_0· w(0) dx + ∫_Q f· w d(t,x) ∈ℝ is continuous, with |Λ(w,σ)| ⩽ Cκ_0(y_0,f)^1/2b((w,σ),(w,σ))^1/2. The Riesz representation theorem guarantees the existence of a unique (φ,π) ∈ P for which Λ(w,σ) = b((w,σ),(φ,π)) (for all (w,σ) ∈ P). Upon taking (w,σ) = (φ,π) above, we get b((φ,π),(φ,π)) = Λ(φ,π) ⩽ Cκ_0(y_0,f)^1/2b((φ,π),(φ,π))^1/2, whence b((φ,π),(φ,π)) ⩽ Cκ_0(y_0,f). Let us set y:= ρ_3^-2(L^*φ + ∇π), z:= ρ_4^-2φ, v:= -χ z. We observe that (v) ⊆ω, that (y,v) is a solution of (<ref>) corresponding to the datum y_0 and f, and applying Corollary <ref> once more, ∫_Q ρ_3^2|y|^2d(t,x) + ∫_0^T ∫_ωρ_4^2|v|^2 dx dt ⩽ Cb((φ,π),(φ,π)) ⩽ Cκ_0(y_0,f). This proves the theorem. §.§ Weighted energy estimates Along this subsection, we let y_0 ∈ H, ρ_0 f ∈ L^2(Q)^N, and let us denote by (v,y) the control-state pair constructed in the proof of Theorem <ref>. Let us define ρ_6 := μ_1,-1/2,35/2 and ρ_7 := μ_1,-1/2,20. We have sup_[0,T]( ∫_Ωρ_6^2 |y|^2 dx) + ∫_Q ρ_6^2 |∇ y|^2 d(t,x) ⩽ Cκ_0(y_0,f), and, if y_0 ∈ H^1_0(Ω)^N, then ∫_Q ρ_7^2(|y_t|^2+|Δ y|^2 )d(t,x) + sup_[0,T](∫_Ωρ_7^2|∇ y|^2 dx ) ⩽ Cκ_1(y_0,f), where κ_1(y_0,f) := y_0_H^1_0(Ω)^N^2 + ∫_Qρ_0^2 |f|^2d(t,x). 
For each n ⩾ 1, let v_n(t, ·), f_n(t, ·) and y_0,n(·) be the projections of of v(t, ·), f(t, ·) and y_0(·) in the first n eigenfunctions for the Stokes operator A: D(A) → H, respectively. Let us denote by y_n the corresponding solution for the finite dimensional approximate forced Stokes system. For simplicity, unless we state otherwise, we omit the subscript n throughout the current proof. Moreover, we emphasize that we can take all of the constants C appearing below to be independent of n. Using ρ_6^2y as a test function in system (<ref>), and doing some integrations by parts, we derive the identity 1/2d/dt(∫_Ωρ_6^2 |y|^2dx ) + ν_0 ∫_Ωρ_6^2 |∇ y|^2 dx = ∫_ωρ_6^2 v· y dx + ∫_Ωρ_6^2 f· y dx + 1/2∫_Ωd/dt(ρ_6^2 ) |y|^2dx. From (<ref>) and Remark <ref>, item (c), we have ρ_6 ⩽ Cρ_4 ⩽ Cρ_3 ⩽ Cρ_0, whence ∫_ωρ_6^2 v· y dx ⩽ C(∫_ωρ_4^2 |v|^2 dx + ∫_Ωρ_3^2 |y|^2 dx ), and ∫_Ωρ_6^2 f· y dx ⩽ C(∫_Ωρ_0^2 |f|^2 dx + ∫_Ωρ_3^2 |y|^2 dx ). From Remark <ref>, item (b), we have |d/dt(ρ_6^2)| ⩽ Cρ_3^2, from where it follows that ∫_Ωd/dt( ρ_6^2 )|y|^2dx ⩽ C∫_Ωρ_3^2|y|^2 dx. Using (<ref>), (<ref>) and (<ref>) in (<ref>), and applying Gronwall's inequality together with (<ref>), we infer (<ref>). Henceforth, we will tacitly apply (<ref>) and Remark <ref>. Now, we use ρ_7^2(y_t-ν_0 A y) as a test function in (<ref>), from where we easily derive that [ ∫_Ωρ_7^2(|y_t|^2 + ν_0^2|Δ y|^2 )dx + 1/2d/dt(∫_Ωρ_7^2|∇ y|^2 dx ) = ∫_ωρ_7^2 v·(y_t-ν_0A y)dx; + ∫_Ωρ_7^2 f·(y_t-ν_0A y)dx + 1/2∫_Ωd/dt( ρ_7^2 )|∇ y|^2 dx. ] We observe that, for any ϵ>0, ∫_ωρ_7^2 v·(y_t-ν_0A y)dx ⩽C/ϵ∫_ωρ_4^2|v|^2dx + Cϵ[∫_Ωρ_7^2(|y_t|^2 + |Δ y|^2)dx ], ∫_Ωρ_7^2 f·(y_t-ν_0A y)dx ⩽C/ϵ∫_Ωρ_0^2|f|^2dx + Cϵ[∫_Ωρ_7^2(|y_t|^2 + |Δ y|^2)dx ], ∫_Ωd/dt( ρ_7^2)|∇ y|^2 dx ⩽ C∫_Ωρ_6^2 |∇ y|^2 dx. We take ϵ sufficiently small, in such a way that that the terms involving y in (<ref>) and (<ref>) are absorbed by the left-hand side of (<ref>). Also, from (<ref>) and (<ref>), the time integral of the third term in the right-hand side of (<ref>) is bounded by Cκ_0(y_0,f). Thus, it suffices to apply Gronwall's Lemma to conclude (<ref>) for the Galerkin approximates y_n instead of the actual solution y. Employing standard limiting arguments, as n →∞, we conclude that (<ref>) does hold for the actual solution y. (a) If ζ := μ_-1,1,0, then ζ v ∈ L^2(0,T;H^2(ω)∩ H^1_0(ω))∩ C([0,T];V), (ζ v)_t ∈ L^2(]0,T[×ω)^N, with the estimate ∫_0^T ∫_ω[|(ζ v)_t|^2 + |ζΔ v|^2 ]dx dt + sup_[0,T]ζ v_V^2 ⩽ Cκ_0(y_0,f). (b) Let us also assume that y_0 ∈ H^1_0(Ω)^N. For ζ := μ_-1,1,5, we have the memberships (ζ v_t)_t ∈ L^2(]0,T[×ω)^N, ζ v_t ∈ L^2(0,T;[H^2(ω)∩ H^1_0(ω)]^N), ζ v ∈ L^2(0,T;[H^4(ω)∩ H^1_0(ω)]^N), and the following inequality holds ∫_0^T ∫_ω[|(ζ v_t)_t|^2 + |ζΔ v_t|^2 + |ζΔ^2v|^2 ] dx dt ⩽ Cκ_1(y_0,f). (a) For p,q,r ∈ℝ, we notice that L^*(μ_p,q,rz) = -d/dt(μ_p-8,q+6,r-64)φ + μ_p-4,q+4,r-34y - μ_p-8,q+6,r-64∇π. Choosing p=-1, q=1, and r=0, it follows that | d/dt(μ_p-8,q+6,r-64)| ⩽ Cρ_0^-1, μ_p-4,q+4,r-34⩽ Cρ_3, μ_p-8,q+6,r-64⩽ C. Thus, u:= ζ z and π := μ_-9,7,-64π solve the Stokes equation Lu + ∇π = h, in Q, ∇· u = 0, in Q, u = 0, on Σ, u(T) = 0, in Ω, where h := -d/dt(μ_-9,7,-64)φ + μ_-5,5,-34y ∈ L^2(Q)^N. By standard regularity results for solutions of the Stokes system, we can infer the stated regularity for ζ v = -χ u. (b) As in the previous item, for p,q,r ∈ℝ, we derive [ L^*(μ_p,q,rz_t) = -φd/dt[μ_p,q,rd/dt(μ_-8,6,-64) ] + yμ_p+4,q-2,r+64d/dt(μ_-8,6,-64); - d/dt(μ_p-8,q+6.r-64)φ_t + μ_p-4,q+4,ry_t + μ_p-8,q+6,r-64d/dt(μ_4,-2,64) y; - μ_p-8,q+6,r-64∇π_t. 
] For the choice p=-1, q=1, r=5, it is straightforward to check the inequalities |d/dt[μ_p,q,rd/dt(μ_-8,6,-64) ] | ⩽ Cμ_p-8,q+7,r-58ρ_0^-1 = Cμ_-9,8,-53ρ_0^-1⩽ Cρ_0^-1, |μ_p+4,q-2,r+64d/dt(μ_-8,6,-64) | ⩽ Cμ_p-14,q+6,r-152ρ_3 = Cμ_-15,7,-147ρ_3 ⩽ Cρ_3, |d/dt(μ_p-8,q+7,r-64) | ⩽ Cμ_p-8,q+7,r-71ρ_2 = Cμ_-9,8,-66ρ_2 ⩽ Cρ_2, μ_p-4,q+4,r = μ_p-6,q+5,r-20ρ_7 = μ_-7,6,-15ρ_7 ⩽ Cρ_7, |μ_p-8,q+6,r-64d/dt(μ_4,-2,64) | ⩽ Cμ_p-6,q+5,r-37ρ_3 = Cμ_-7,6,-32ρ_3 ⩽ Cρ_3, and μ_p-8,q+6,r-64 = μ_-9,7,-59⩽ C. We can conclude by arguing similarly as in the first two memberships and corresponding estimates. The third ones are obtained doing the same analysis for the term L^*(ζΔ z). Let us set ρ_8 := ζ = μ_-1,1,0 and ρ_9 := μ_-1,1,5/2. Supposing y_0 ∈ H^2(Ω)^N∩ V, Ay_0 ∈[H^1_0(Ω)]^N, ρ_8f_t ∈ L^2(Q)^N, we have the following estimates: sup_[0,T](∫_Ωρ_8^2|y_t|^2dx ) + ∫_Q ρ_8^2|∇ y_t|^2 d(t,x) ⩽ Cκ_2(y_0,f). If furthermore y_0 ∈ H^3(Ω)^N, f(0) ∈ H^1_0(Ω)^N, ∫_Q ρ_9^2(|y_tt|^2 + |Δ y_t|^2 )d(t,x) + sup_[0,T][∫_Q ρ_9^2(|∇ y_t|^2 + |Δ y|^2 )dx ] ⩽ Cκ_3(y_0,f), where κ_2(y_0,f):= y_0_H^2(Ω)^N^2 + ∫_Qρ_0^2 |f|^2d(t,x) + ∫_Q ρ_8^2|f_t|^2 d(t,x) and κ_3(y_0,f) := y_0_H^3(Ω)^N^2 + ∫_Qρ_0^2 |f|^2d(t,x) + ∫_Q ρ_8^2|f_t|^2 d(t,x) + f(0)_H^1_0(Ω)^N^2. We establish the current estimates by following the same approach as in the proof of Lemma <ref>. Here, we begin by differentiating the system (<ref>) with respect to time, and we use ρ_8^2 y_t as a test function: 1/2d/dt(∫_Ωρ_8^2|y_t|^2 dx ) + ν_0 ∫_Ωρ_8^2 |∇ y_t|^2 dx = ∫_ωρ_8^2 v_t· y_t dx + ∫_Ωρ_8^2 f_t· y_t dx + 1/2∫_Ωd/dt(ρ_8^2)|y_t|^2 dx. We note that | d/dt(ρ_8^2) | ⩽ Cρ_7^2, ρ_8 ⩽ Cρ_7 ⩽ Cρ_4; hence, ∫_ωρ_8^2 v_t· y_t dx ⩽ C[∫_ω(|ρ_4 v|^2 + |(ζ v)_t|^2 )dx + ∫_Ωρ_7^2 |y_t|^2 dx ], ∫_Ωρ_8^2 f_t · y_t dx ⩽ C(∫_Ωρ_8^2 |f_t|^2 dx + ∫_Ωρ_7^2|y_t|^2 dx ). By Lemmas <ref> and <ref>, ∫_Qρ_7^2 |y_t|^2 d(t,x) + ∫_0^T∫_ω(|ρ_4 v|^2 + |(ζ v)_t|^2 )dx dt ⩽κ_1(y_0,f), so that by using (<ref>) and (<ref>) in (<ref>), then integrating in time and applying of Gronwall's lemma, it follows that sup_[0,T](∫_Ωρ_8^2|y_t|^2 dx) + ∫_Q ρ_8^2|∇ y_t|^2 d(t,x) ⩽ C (y_t(0)_L^2(Q)^N^2 +∫_Q ρ_8^2|f_t|^2d(t,x) . + κ_1(y_0,f) ). It is simple to infer the subsequent estimate: y_t(0)_L^2(Q)^N^2 ⩽y_0_H^2(Ω)^N^2 + f(0)_L^2(Q)^N^2 + v(0)_L^2(]0,T[×ω)^N^2 ⩽κ_2(y_0,f). Relations (<ref>) and (<ref>) imply (<ref>). Next, we use ρ_9^2(y_tt-ν_0A y_t) as a test function in the system (<ref>) differentiated with respect to time to deduce [ ∫_Ωρ_9^2 (|y_tt|^2 + ν_0^2|Δ y_t|^2 )dx + ν_0d/dt( ∫_Ωρ_9^2 |∇ y_t|^2 dx ) = ∫_ωρ_9^2 v_t· (y_tt-ν_0A y_t) dx; + ∫_Ωρ_9^2 f_t· (y_tt-ν_0A y_t)dx + ν_0∫_Ωd/dt(ρ_9^2 )|∇ y_t|^2 dx ] We observe that ∫_Ωd/dt(ρ_9^2) |∇ y_t|^2dx ⩽ C∫_Ωρ_8^2|∇ y_t|^2 dx, and for each ϵ > 0, ∫_ωρ_9^2 v_t· (y_tt-ν_0A y_t)dx ⩽ C[ 1/ϵ∫_ωζ^2 |v_t|^2 dx + ϵ∫_Ωρ_9^2( |y_tt|^2 + |Δ y_t|^2 ) dx ], as well as ∫_Ωρ_9^2f_t · (y_tt-ν_0A y_t)dx ⩽ C[1/ϵ∫_Ωρ_8^2|f_t|^2 dx + ϵ∫_Ωρ_9^2(|y_tt|^2 + |Δ y_t|^2)dx ]. We fix a sufficiently small ϵ, whence the second terms within the brackets in the right-hand sides of (<ref>) and (<ref>) are absorbed by the left-hand side of (<ref>). Then, using (<ref>) in (<ref>), we infer [ ∫_Ωρ_9^2 (|y_tt|^2 + |Δ y_t|^2 )dx + d/dt( ∫_Ωρ_9^2 |∇ y_t|^2 dx ) ⩽ C(∫_ωζ^2 |v_t|^2 dx + ∫_Ωρ_8^2 |f_t|^2dx; + ∫_Ωρ_8^2 |∇ y_t|^2 dx ). ] Employing Gronwall's lemma in (<ref>), we obtain ∫_Q ρ_9^2 (|y_tt|^2 + |Δ y_t|^2)d(t,x) + sup_[0,T](∫_Ωρ_9^2|∇ y_t|^2 dx ) ⩽ C(∇ y_t(0)_L^2(Q)^N^2 + κ_2(y_0,f) ). 
We easily establish, with the aid of item (a) of Lemma <ref>, that ∇ y_t(0)_L^2(Q)^N^2 ⩽ C(y(0)_H^3(Q)^N^2 + ∇ v(0)_L^2(Q)^N^2 + ∇ f(0)_L^2(Q)^N^2 ) ⩽ Cκ_3(y_0,f), whence ∫_Q ρ_9^2 (|y_tt|^2 + |Δ y_t|^2)d(t,x) + sup_[0,T](∫_Ωρ_9^2|∇ y_t|^2 dx ) ⩽ Cκ_3(y_0,f). Finally, we use ρ_9^2Δ y_t in the undifferentiated partial differential equation of system (<ref>) as a test function to get [ ∫_Ωρ_9^2 |∇ y_t|^2 dx + ν_0/2d/dt(∫_Ωρ_9^2 |Δ y|^2 dx ); = ∫_ωρ_9^2 v·Δ y_t dx + ∫_Ωρ_9^2 f·Δ y_t dx + ν_0/2∫_Ωd/dt(ρ_9^2)|Δ y|^2 dx; ⩽ C(∫_ωρ_4^2 |v|^2 dx + ∫_Ωρ_0^2 |f|^2 dx + ∫_Ωρ_9^2 |Δ y_t|^2 dx + ∫_Ωρ_7^2|Δ y|^2 dx). ] We use Gronwall's lemma in (<ref>), in such a way that sup_[0,T](∫_Ωρ_9^2 |Δ y|^2 dx ) ⩽ Cκ_3(y_0,f). From the estimates (<ref>) and (<ref>), together with the compatibility condition Ay_0 ∈[H^1_0(Ω)]^N, we derive (<ref>). We write ρ_10:= ζ = μ_-1,1,5 and ρ_11 := μ_-1,1,15/2. Let us assume that y_0 ∈ H^4(Ω)^N∩ V, Ay_0, A^2y_0 ∈[H^1_0(Ω)]^N, ρ_9 Δ f ∈ L^2(Q)^N, ρ_10 f_tt∈ L^2(Q)^N, ρ_10f_t ∈ L^2(0,T; H^1_0(Ω)^N), f(0) ∈[H^2(Ω)∩ H^1_0(Ω)]^N and f_t(0)∈ L^2(Ω). Then, the following estimate holds [ sup_[0,T][∫_Ωρ_10^2 (|y_tt|^2 + |Δ y_t|^2 )dx + ρ_9 y_H^3(Ω)^N^2 ]; + ∫_Q (ρ_10^2 |∇ y_tt|^2 +ρ_10^2|D^3 y_t|^2 + ρ_9^2|D^4 y|^2 ) d(t,x) ⩽ Cκ_4(y_0,f). ] If, furthermore, y_0 ∈ H^5(Ω)^N, f(0) ∈ H^3(Ω)^N, Af(0) ∈ V, and f_t(0) ∈ H^1_0(Ω)^N, then sup_[0,T](∫_Ωρ_11^2 |∇ y_tt|^2 dx ) + ∫_Qρ_11^2(|y_ttt|^2 + |Δ y_tt|^2 )d(t,x) ⩽ C κ_5(y_0,f). where we have written κ_4(y_0,f) := ∫_Q(ρ_9^2 |Δ f|^2 +ρ_10^2|∇ f_t|^2 + ρ_10^2|f_tt|^2) d(t,x) + y_0_H^4(Ω)^N^2 + f(0)_H^2(Ω)^N^2 + f_t(0)_L^2(Ω)^N^2 + κ_3(y_0,f), κ_5(y_0,f) := y_0_H^5(Ω)^N^2 + f(0)_H^3(Ω)^N^2 +f_t(0)_H^1_0(Ω)^N^2 + κ_4(y_0,f). Again, we proceed in the same framework as in the proof of Lemma <ref>. We begin by applying the Stokes operator A on the equation of system (<ref>), and then use -ρ_9^2A^2 y as a test function: [ ν_0∫_Ωρ_9^2|Δ^2 y|^2 dx = -∫_ωρ_9^2 Δ v · A^2 y dx - ∫_ΩΔ f · A^2 y dx + ∫_Ωρ_9^2 Δ y_t · A^2 y dx; ⩽ C(∫_ωζ^2|Δ v|^2 dx + ∫_Ωρ_9^2 |Δ f|^2 dx + ∫_Ωρ_9^2 |Δ y_t|^2 dx ) + 1/2∫_Ωρ_9^2 |Δ^2 y|^2 dx, ] We integrate (<ref>) with respect to time, whence ∫_Q ρ_9^2 |Δ^2 y|^2 d(t,x) ⩽ Cκ_4(y_0,f). We can now easily argue that, under suitable limiting arguments (having in view the compatibility conditions we required in the statement of the present lemma), Eq. (<ref>) yields the corresponding estimate for the solution of (<ref>). We observe that the relations ρ_9A y_t ∈ L^2(Q)^N and ρ_9A^2 y ∈ L^2(Q)^N imply ρ_9 y ∈ L^∞(0,T; H^3(Ω)^N), with sup_[0,T]ρ_9 y_H^3(Ω)^N^2 ⩽ C∫_Q ρ_9^2 ( |Δ y_t|^2 + |Δ^2 y|^2 )d(t,x) ⩽ Cκ_4(y_0,f). In the differential equation of system (<ref>) differentiated once with respect to time, we use the test function ρ_10^2 A^2 y_t: 1/2d/dt(∫_Ωρ_10^2 |Δ y_t|^2 dx ) + ν_0∫_Ωρ_10^2 |∇Δ y_t|^2 dx = ∫_ωρ_10^2 ∇ v_t : ∇Δ y_t dx + ∫_Ωρ_10^2 ∇ f_t : ∇Δ y_tdx + 1/2∫_Ωd/dt( ρ_10^2 ) |Δ y_t|^2 dx. For ϵ > 0, [ ∫_ωρ_10^2 ∇ v_t : ∇Δ y_tdx ⩽ C_ϵ∫_ωζ^2 |∇ v_t|^2 dx + ϵ∫_Ωρ_10^2|∇Δ y_t|^2 dx; ⩽ C_ϵ∫_ω( |(ζv_t)_t|^2 + |(ζ v)_t|^2 + ρ_4^2|v|^2)dx + ϵ∫_Ωρ_10^2 |∇Δ y_t|^2 dx, ] [ ∫_Ωρ_10^2 ∇ f_t: ∇Δ y_t dx ⩽ C_ϵ∫_Ωρ_10^2 |f_tt|^2 dx + ϵ∫_Ωρ_10^2 |∇Δ y_t|^2 dx , ] ∫_Ωd/dt( ρ_10^2)|Δ y_t|^2 dx ⩽ C ∫_Ωρ_9^2 |Δ y_t|^2, and [ Δ y_t(0)^2 ⩽ C(y_0_H^4(Ω)^N^2 + Δ v(0)_L^2(]0,T[×ω)^N^2 + Δ f(0)^2 ); ⩽ Cκ_4(y_0,f). ] Therefore, by taking ϵ sufficiently small, and using (<ref>)-(<ref>) in (<ref>), we deduce sup_[0,T](∫_Ωρ_10^2 |Δ y_t|^2dx ) + ∫_Q ρ_10^2 |∇Δ y_t|^2 d(t,x) ⩽ Cκ_4(y_0,f). 
Next, we differentiate the equation of system (<ref>) twice with respect to time and we use the test function ρ_10^2 y_tt: 1/2d/dt(∫_Ωρ_10^2 |y_tt|^2 dx ) + ν_0∫_Ωρ_10^2 |∇ y_tt|^2 dx = ∫_ωρ_10^2 v_tt· y_tt dx + ∫_Ωρ_10^2 f_tt· y_ttdx + 1/2∫_Ωd/dt( ρ_10^2 ) |y_tt|^2 dx. We have [ ∫_ωρ_10^2 v_tt· y_ttdx ⩽ C(∫_ωζ^2 |v_tt|^2 dx + ∫_Ωρ_9^2|y_tt|^2 dx ); ⩽ C[ ∫_ω( |(ζv_t)_t|^2 + |(ζ v)_t|^2 + ρ_4^2|v|^2)dx + ∫_Ωρ_9^2 |y_tt|^2 dx], ] [ ∫_Ωρ_10^2 f_tt· y_tt dx ⩽ C(∫_Ωρ_10^2 |f_tt|^2 dx + ∫_Ωρ_9^2 |y_tt|^2 dx ), ] ∫_Ωd/dt( ρ_10^2)|y_tt|^2 dx ⩽ C ∫_Ωρ_9^2 |y_tt|^2, and [ y_tt(0)^2 ⩽ C(y_0_H^4(Ω)^N^2 + Δ v(0)_L^2(]0,T[×ω)^N^2 + Δ f(0)^2 .; . + v_t(0)_L^2(]0,T[×ω)^N^2 + f_t(0)^2 ); ⩽ Cκ_4(y_0,f). ] Using (<ref>), (<ref>) and (<ref>) in (<ref>), then integrating in time and using (<ref>), we infer sup_[0,T](∫_Ωρ_10^2|y_tt|^2 dx ) + ∫_Q ρ_10^2 |∇ y_tt|^2 d(t,x) ⩽ Cκ_4(y_0,f). Estimates (<ref>), (<ref>), (<ref>) and (<ref>) are enough to conclude (<ref>). Now, we use ρ_11^2(y_ttt - ν_0 A y_tt) as a test function in the equation of system (<ref>) twice differentiated in time, reaching [ ∫_Ωρ_11^2|y_ttt|^2 dx + ν_0^2 ∫_Ωρ_11^2|Δ y_tt|^2 dx + ν_0 d/dt(∫_Ωρ_11^2 |∇ y_tt|^2 dx ); = ∫_ωρ_11^2 v_tt· (y_ttt-ν_0A y_tt) dx + ∫_Ωρ_11^2 f_tt· (y_ttt-ν_0 A y_tt)dx; + ν_0 ∫_Ωd/dt(ρ_11^2)|∇ y_tt|^2 dx. ] For ϵ > 0, [ ∫_ωρ_11^2 v_tt· (y_ttt-ν_0A y_tt) dx ⩽ C[1/ϵ∫_ω(|(ζv_t)_t|^2 + |(ζ v)_t|^2 + ρ_4^2|v|^2)dx; + ϵ∫_Ωρ_11^2( |y_ttt|^2 + |Δ y_tt|^2 )dx ], ] ∫_Ωρ_11^2 f_tt· (y_ttt-ν_0 A y_tt)dx ⩽ C[1/ϵ∫_Ωρ_10^2 |f_tt|^2 dx + ϵ∫_Ωρ_11^2( |y_ttt|^2 + |Δ y_tt|^2 )dx], and we also notice that ∫_Ωd/dt(ρ_11^2)|∇ y_tt|^2 dx ⩽ C ∫_Ωρ_10^2 |∇ y_tt|^2 dx. We easily check that ∇ y_tt(0)^2 ⩽ Cκ_5(y_0,f). As in the proof of (<ref>), inequalities (<ref>)-(<ref>), we can infer from (<ref>), through an adequate choice of a small positive ϵ, and the aid of the previous estimates, the subsequent inequality ∫_Q ρ_11^2 ( |y_ttt|^2 +|Δ y_tt|^2) d(t,x) + sup_[0,T](∫_Ωρ_11^2 |∇ y_tt|^2 dx ) ⩽ Cκ_5(y_0,f). Estimate (<ref>) is precisely (<ref>); hence, we have finished the proof of the present result. § NULL CONTROLLABILITY OF THE MODEL (<REF>) §.§ Local right inversion theorem It is possible to find a proof of the subsequent result in <cit.>. This is the inversion theorem that we will use to obtain our local null controllability result. Let Y and Z be two Banach spaces, and H : Y → Z be a continuous function, with H(0) = 0. We assume that there are three constants δ,η^',M > 0 and a continuous linear mapping Λ from Y onto Z with the following properties: (i) For all e ∈ Y, we have e_Y ⩽ MΛ(e)_Z; (ii) The constants δ and M satisfy δ < M^-1; (iii) Whenever e_1,e_2 ∈ B_Y(0;η^'), the inequality H(e_1) - H(e_2) - Λ(e_1-e_2)_Z ⩽δe_1-e_2_Y holds. Then, whenever k ∈ B_Z(0;η), the equation H(e) = k has a solution e ∈ B_Y(0;η^'), where η:= (M^-1 - δ)η^'. A typical way of verifying condition (iii) is through the remark presented below. Let Y and Z be two Banach spaces, and let us consider a continuous mapping H:Y→ Z, with H(0)=0, of class C^1(Y,Z). Then it has property (iii), for any positive δ subject to (ii), as long as we take η^' as the continuity constant of DH ∈ℒ(Y,ℒ(Y,Z)) at the origin of Y. §.§ The setup Let us set X_1 := L^2(Q;ρ_3^2)^N, X_2 := L^2(]0,T[×ω; ρ_4^2)^N. 
We define [ Y := { (y,p,v) ∈ X_1 × L^2(Q) × X_2 : y_t ∈ L^2(Q)^N, ∇ y ∈ L^2(Q)^N× N,; (ζ v)_t, ζΔ v, (ζ v_t)_t, ζΔ v_t, ζD^4 v ∈ L^2(]0,T[×ω)^N,; for f:= Ly + ∇ p - χ_ω, ρ_0 f, ρ_8 f_t, ρ_9 Δ f, ρ_10f_tt∈ L^2(Q)^N,; ρ_10f_t ∈ L^2(0,T; H^1_0(Ω)^N), f(0) ∈[H^3(Ω)∩ H^1_0(Ω)]^N,; Af(0) ∈ H^1_0(Ω)^N, f_t(0) ∈ H^1_0(Ω)^N, y|_Σ≡ 0, ∇· y ≡ 0,; y(0) ∈[H^5(Ω)∩ V]^N, Ay(0), A^2y(0) ∈ H^1_0(Ω)^N, ∫_Ω p dx = 0 }. ] We consider on Y the norm [ (y,p,v)_Y^2 := ∫_Q(ρ_3^2|y|^2 + ρ_0^2 |f|^2+ ρ_8^2 |f_t|^2 + ρ_9^2 |Δ f|^2 + ρ_10^2|∇ f_t|^2 + ρ_10^2|f_tt|^2 )d(t,x); + ∫_0^T ∫_ω(ρ_4^2|v|^2 + |(ζ v)_t|^2 + |ζΔ v|^2 + |(ζ v_t)_t|^2 + |ζΔ v_t|^2 + |D^4( ζ v )|^2 ) dx dt; + f(0)_[H^3(Ω)∩ H^1_0(Ω)]^N^2 + f_t(0)_H^1_0(Ω)^N^2 + y(0)_H^5(Ω)^N^2, ] where in (<ref>) we have written f:= Ly + ∇ p - χ_ω v. Then, endowing the space Y with ·_Y renders it a Banach space. Now, we put [ F := { f ∈ L^2(Q)^N : ρ_0 f, ρ_8 f_t, ρ_9 Δ f, ρ_10f_tt∈ L^2(Q)^N, ρ_10f_t ∈ L^2(0,T; H^1_0(Ω)^N),; f(0) ∈[H^3(Ω)∩ H^1_0(Ω)]^N, f_t(0) ∈ H^1_0(Ω)^N }, ] f_F^2 := ∫_Q(ρ_0^2 |f|^2+ ρ_8^2 |f_t|^2 + ρ_9^2 |Δ f|^2 + ρ_10^2|∇ f_t|^2 + ρ_10^2|f_tt|^2 )d(t,x) + f(0)_[H^3(Ω)∩ H^1_0(Ω)]^N^2 + f_t(0)_H^1_0(Ω)^N^2, and also consider the space of initial conditions G := { y_0 ∈ H^5(Ω)^N ∩ V : Ay_0, A^2y_0 ∈ H^1_0(Ω)^N }, with the same topology as H^5(Ω)^N∩ V. Then, we define Z := F × G, The space Z with the natural product topology is also a Banach space. Finally, we define the mapping H : Y → Z by H(y,p,v) := (Dy/Dt - ∇·𝒯(y,p) - χ_ω v , y(0)). §.§ Three lemmas and the conclusion The mapping H : Y → Z is well-defined, and it is continuous. We write H(y,p,v) = (H_1(y,p,v),H_2(y,p,v)), where H_1(y,p,v) := Dy/Dt - ∇·𝒯(y,p) - χ_ω v; H_2(y,p,v) := y(0). There is nothing to prove about H_2, since it is cleary linear and continuous. We will consider only the mapping H_1 in what follows. We decompose H_1(y,v) = h_1(y,p,v) + h_2(y,p,v) + h_3(y,p,v), where h_1(y,p,v):= y_t -ν(0)Δ y + ∇ p - χ_ω v, h_2(y,p,v):= - ∇·[(ν(∇ y)-ν(0))∇ y ], h_3(y,p,v)=(y·∇) y. By the definition of the norm of F, it follows promptly that h_1(y,p,v)_F < ∞. Next, we will prove that the quantity h_2(y,p,v)_F is finite. CLAIM 1: Δ h_2(y,p,v)_9 < ∞. We notice that [ |Δ h_2(y,p,v)| ⩽ C[ (r+1)r|r-1||∇ y|^r-2|D^2 y|^3 + (r+1)r|∇ y|^r-1|D^2 y||D^3 y|+(r+1)|∇ y|^r|D^4 y|]; = C(D_1,1 + D_1,2 + D_1,3). ] In the case r=1, the term D_1,1 vanishes and thus |Δ h_2| is bounded by C(D_1,2 + D_1,3). Otherwise, assuming r ⩾ 2, we have [ ∫_Q ρ_9^2 D_1,1^2 d(t,x) ⩽ C(r)∫_0^T ρ_9^2 ∫_Ω |∇ y|^2(r-2) |D^2 y|^6 dx dt; ⩽ C(r)∫_0^T ρ_9^2D^3 y^2(r-2)D^2 y_L^6(Ω)^6 dt; ⩽ C(r) ∫_0^T ρ_9^2D^3 y^2(r-2)y_H^3(Ω)^6 dt; ⩽ C(r) ∫_0^T ρ_9^2D^3 y^2(r+2) dt; ⩽ C(r)( sup_[0,T]ρ_9D^3 y)^2(r+2)∫_Q ρ_9^-2(r+1) d(t,x) < ∞. ] In the above equations, we used the continuous immersions: H^2(Ω) ↪ L^∞(Ω); H^1(Ω) ↪ L^6(Ω). These are valid for N ⩽ 3, see <cit.>, and we will use them tacitly henceforth. Now, we obtain the estimate for D_1,2: [ ∫_Q ρ_9^2 D_1,2^2 d(t,x) ⩽ C(r) ∫_0^T ρ_9^2 ∫_Ω |∇ y|^2(r-1)|D^2 y|^2|D^3 y|^2 dx dt; ⩽ C(r) ∫_0^T ρ_9^2D^3 y^2rD^4 y^2 dt; ⩽ C(r) (sup_[0,T]ρ_9D^3 y)^2r∫_Q ρ_9^2|D^4 y|^2 d(t,x) < ∞. ] Likewise, we show D_1,3 to be finite, since [ ∫_Q ρ_9^2 D_1,3^2 d(t,x) ⩽ C(r) ∫_0^T ρ_9^2 ∫_Ω |∇ y|^2r |D^4 y|^2dx dt; ⩽ C(r) ∫_0^T ρ_9^-2r(ρ_9D^3 y)^2r(ρ_9D^4 y)^2 dt; ⩽ C(r) sup_[0,T]ρ_9D^3 y^2r∫_Q ρ_9^2 |D^4 y|^2 d(t,x) < ∞. ] CLAIM 2: ∂_t^2 h_2(y,p,v)_10 < ∞. 
We begin with the pointwise estimate, [ |∂_t^2 h_2(y,p,v)| ⩽ C[ (r+1)r|r-1||∇ y|^r-2|∇ y_t|^2|Δ y| + (r+1)r|∇ y|^r-1|∇ y_tt||Δ y|; + (r+1)r|∇ y|^r-1|∇ y_t||Δ y_t| +(r+1)|∇ y|^r|Δ y_tt|]; = C(D_2,1+D_2,2+D_2,3+D_2,4). ] As in the previous claim, if r=1, then D_2,1≡ 0. For r ⩾ 2, the next estimate is valid: [ ∫_Q ρ_10^2 D_2,1^2 d(t,x) ⩽ C(r) ∫_0^T ρ_10^2 ∫_Ω |∇ y|^2(r-2)|∇ y_t|^4|Δ y|^2 dx dt; ⩽ C(r) ∫_0^T ρ_9^-2rρ_9 D^3 y^2(r-2)ρ_9 D^4 y^2 ρ_10Δ y_t^4 dt; ⩽ C(r) (sup_[0,T]ρ_9D^3 y)^2(r-2)(sup_[0,T]ρ_10Δ y_t)^4 ∫_Q ρ_9^2|D^4 y|^2 d(t,x); < ∞. ] Proceeding similarly, we prove the remaining inequalities: [ ∫_Q ρ_10^2 D_2,2^2 d(t,x) ⩽ C(r) ∫_0^T ρ_10^2 ∫_Ω |∇ y|^2(r-1)|∇ y_tt|^2|Δ y|^2 dx dt; ⩽ C(r) ∫_0^T ρ_10^2ρ_9^-2rρ_11^-2ρ_9 D^3 y^2(r-1)ρ_9 D^4 y^2 ρ_11∇ y_tt^2 dt; ⩽ C(r) (sup_[0,T]ρ_9D^3 y)^2(r-1)(sup_[0,T]ρ_11∇ y_tt)^2 ∫_Q ρ_9^2|D^4 y|^2 d(t,x); < ∞; ] [ ∫_Q ρ_10^2 D_2,3^2 d(t,x) ⩽ C(r) ∫_0^T ρ_10^2 ∫_Ω |∇ y|^2(r-1) |∇ y_t|^2|Δ y_t|^2 dx dt; ⩽ C(r) ∫_0^T ρ_9^-2(r-1)ρ_10^-2ρ_9 D^3 y^2(r-1)ρ_10 D^3 y_t^2 ρ_10Δ y_t^2 dt; ⩽ C(r) (sup_[0,T]ρ_9D^3 y)^2(r-1)(sup_[0,T]ρ_10Δ y_t)^2 ∫_Q ρ_10^2|D^3 y_t|^2 d(t,x); < ∞; ] [ ∫_Q ρ_10^2 D_2,4^2 d(t,x) ⩽ C(r) ∫_0^T ρ_10^2 ∫_Ω |∇ y|^2r |Δ y_tt|^2 dx dt; ⩽ C(r) ∫_0^T ρ_10^2ρ_9^-2rρ_11^-2ρ_9 D^3 y^2rρ_11Δ y_tt^2 dt; ⩽ C(r) (sup_[0,T]ρ_9D^3 y)^2r∫_Q ρ_11^2|Δ y_tt|^2 d(t,x); < ∞. ] This finishes the proof of the second claim. CLAIM 3: |∂_t ∇ h_2(y,p,v)| _10 < ∞. As before, we begin by considering the pointwise estimate: [ |∂_t ∇ h_2(y,p,v)| ⩽ C[(r+1)r|r-1||∇ y|^r-2|∇ y_t||Δ y| + (r+1)r|∇ y|^r-1|Δ y_t| .; .+ (r+1)|∇ y|^r|D^3 y_t| ]; = C(D_3,1 + D_3,2 + D_3,3). ] Again, if r=1, then we need not consider D_3,1, since it vanishes. For r⩾ 2, [ ∫_Q ρ_10^2 D_3,1^2 d(t,x) ⩽ C(r)∫_0^T ρ_10^2 ∫_Ω |∇ y|^2(r-2)|∇ y_t|^2 |Δ y|^2 dx dt; ⩽ C(r) ∫_0^T ρ_10^2 ρ_9^-2rρ_9D^3 y^2(r-2)ρ_9D^4 y^2ρ_9∇ y_t^2 dt; ⩽ C(r) (sup_[0,T]ρ_9D^3 y)^2(r-2)(sup_[0,T]ρ_9D^4 y)^2∫_Q ρ_9^2 |∇ y_t|^2 d(t,x); < ∞, ] [ ∫_Q ρ_10^2 D_3,2^2 d(t,x) ⩽ C(r)∫_0^T ρ_10^2∫_Ω |∇ y|^2(r-1)|Δ y_t|^2 dx dt; ⩽ C(r) ∫_0^T ρ_9^-2(r-1)ρ_9D^3 y^2(r-1)ρ_10Δ y_t^2 dt; ⩽ C(r)(sup_[0,T]ρ_9D^3 y^2 )^2(r-1)(sup_[0,T]ρ_10Δ y_t)^2 < ∞, ] and [ ∫_Q ρ_10^2 D_3,3^2 d(t,x) ⩽ C(r)∫_0^Tρ_10^2 ∫_Ω |∇ y|^2r|D^3 y_t|^2dx dt; ⩽ C(r)∫_0^T ρ_9^-2rρ_9D^3 y^2rρ_10D^3 y_t^2dt; ⩽ C(r)(sup_[0,T]ρ_9D^3 y)^2r∫_Q ρ_10^2|D^3 y_t|^2 d(t,x) <∞. ] These inequalities confirm the third claim. The remaining terms composing the F-norm of h_2(y,p,v), h_2(y,p,v)_F, are norms of lower order derivatives of it, compared to the ones considered above, in adequate weighted L^2 spaces. Therefore, these terms are even easier to handle. A similar remark is also true for h_3(y,p,v)_F. In addition, we can show the continuity of H via estimates which are very similar to the ones that we carried out in the claims above; hence, we omit these computations. This ends the proof of the Lemma. The mapping H is strictly differentiable at the origin of Y, with derivative DH(0,0,0) = Λ∈ℒ(Y,Z) given by Λ· (y,p,v) = ( y_t - ν_0 Δ y + ∇ p - χ_ω v, y(0)) = (Λ_1· (y,p,v),Λ_2· (y,p,v)). In fact, H is of class C^1(Y,Z) and, for each (y,p,v) ∈ Y, its derivative DH(y,p,v) ∈ℒ(Y,Z) is given by DH(y,p,v)· (y,p,v) = (Λ_1(y,p,v)· (y,p,v) , Λ_2 · (y,p,v) ), where we have written Λ_1(y,p,v)· (y,p,v) := Λ_1· (y,p,v) - rν_1∇·[ χ_y |∇y|^r-2∇y : ∇ y ∇y + |∇y|^r∇ y ] + (y ·∇) y + (y·∇) y , ∇y : ∇ y := ( ∇y^⊺∇ y ), χ_y is the indicator function of the set {∇y≠ 0}. We will only prove the first claim, i.e., that H is strictly differentiable at the origin (0,0,0) ∈ Y, with DH(0,0,0) being onto Z. 
There is no additional difficulty to prove the lemma in its full force. We write H = (H_1,H_2) as in (<ref>) of Lemma <ref>. Again, it is only necessary to investigate H_1, since H_2 is linear and continuous, and therefore C^∞. Given (y,p,v),(y,p,v) ∈ Y, we note that H_1(y,p,v) - H_2(y,p,v) - Λ_1 · (y-y, p - p,v-v) = -ν_1 D_1 + D_2, where D_1 := ∇·(|∇y|^r ∇y - |∇ y|^r∇ y ), D_2 := (y·∇)y - (y ·∇) y. Let us take two positive real numbers, ϵ and δ, and we suppose (y,p,v)_Y ⩽δ, (y,p,v)_Y ⩽δ. We must show that we can take δ = δ(ϵ) such that H_1(y,p,v) - H_2(y,p,v) - Λ_1 · (y-y, p - p,v-v)_F ⩽ϵ(y-y, p - p, v-v)_Y. We assume, without loss of generality, that δ < 1. It is enough to show that ν_1D_1_F + D_2_F ⩽ϵ(y-y, p-p, v- v)_Y, for a suitable δ = δ(ϵ). To begin with, we observe that [ |Δ D_1| ⩽ C (r+1)r[|r-1|||∇y|^r-2 - |∇ y|^r-2||D^2y|^3; + |r-1||∇ y|^r-2(|D^2y|^2 + |D^2 y|^2 )|∇y -∇ y|; + |r-1|(|∇y|^r-2 + |∇ y|^r-2)|∇y -∇ y||D^2y||D^3y|; + |∇ y|^r-1|D^2y-D^2 y||D^3y| +|∇ y|^r-1|D^2 y||D^3(y-y)|; + |∇y|^r-1|∇(y-y)||D^4 y| + |∇ y|^r|D^4(y-y)| ]; = C(r+1)r(D_1,1 + ⋯ + D_1,7). ] If r=1, then D_1,1≡ D_1,2≡ D_1,3≡ 0, whereas for r=2 we also have D_1,1≡ 0. If r ⩾ 3, we follow estimates similar to the ones we developed in Lemma <ref>, and make use of the immersions we described there, in such a way that [ ∫_Q ρ_9^2 D_1,1^2 d(t,x); ⩽ C(r)∫_0^T ρ_9^2D^3 (y-y)^2(D^3 y^2(r-3) + D^3 y^2(r-3))D^2 y_L^6(Ω)^6 dt; ⩽ C(r) ∫_0^T ρ_9^2D^3 (y-y)^2(D^3 y^2(r-3) + D^3 y^2(r-3))D^3y^6 dt; = C(r) ∫_0^T ρ_9^-2rρ_9D^3 (y-y)^2(ρ_9D^3 y^2(r-3) + ρ_9D^3 y^2(r-3))ρ_9D^3y^6 dt; ⩽ C(r)δ^2r(y-y,p - p, v-v)_Y^2. ] Next, for r⩾ 2, [ ∫_Q ρ_9^2 D_1,2^2 d(t,x); ⩽ C(r)∫_0^T ρ_9^2D^3 y^2(r-2)D^3(y-y)^2(D^2y_L^4(Ω)^4 + D^2 y_L^4(Ω)^4 )dt; ⩽ C(r)∫_0^Tρ_9^2D^3 y^2(r-2)D^3(y-y)^2(D^3 y^4 + D^3 y^4 )dt; ⩽ C(r)δ^2r(y-y, p-p, v-v)_Y^2, ] [ ∫_Q ρ_9^2D_1,3^2d(t,x); ⩽ C(r)∫_0^Tρ_9^2(D^3 y^2(r-2) + D^3 y^2(r-2))D^3(y-y)^2D^4 y^2D^3 y^2 dt; ⩽ C(r)δ^2r(y-y,p-p,v-v)_Y^2. ] Now, for every r ⩾ 1, [ ∫_Q ρ_9^2D_1,4^2d(t,x) ⩽ C(r)∫_0^T ρ_9^2D^3 y^2(r-1)D^4(y-y)^2D^3y^2 dt; ⩽ C(r)δ^2r(y-y,p-p,v-v)_Y^2, ] [ ∫_Q ρ_9^2D_1,5^2d(t,x) ⩽ C(r)∫_0^T ρ_9^2 D^3 y^2(r-1)D^4 y^2D^3 (y-y)^2 dt; ⩽ C(r) δ^2r(y-y,p-p, v-v)_Y^2, ] [ ∫_Q ρ_9^2D_1,6^2d(t,x) ⩽ C(r)∫_0^T ρ_9^2D^3 y^2(r-1)D^3(y-y)^2D^4y^2 dt; ⩽ C(r)δ^2r(y-y,p-p, v-v)_Y^2, ] [ ∫_Q ρ_9^2D_1,7^2d(t,x) ⩽ C(r)∫_0^T ρ_9^2D^3 y^2rD^4(y-y)^2 dt; ⩽ C(r)δ^2r(y-y,p-p, v-v)_Y^2. ] Summing up, the computations we carried out above yield Δ D_1_9 ⩽ C(r)δ^r(y-y,p-p,v-v)_Y. We can treat the remaining terms composing the F-norm of D_1 likewise, as we argued in Lemma <ref>. Dealing with D_2 is even simpler, since it involves lower order derivatives of y. In this way, we deduce that ν_1D_1_F + D_2_F ⩽ C(r)δ(y-y,p-p,v-v)_Y. Thus, it suffices to take any positive δ < min(1,ϵ/C(r)) in order to finish the proof. The linear operator DH(0,0,0) : Y → Z is continuous and onto. Furthermore, there exists a constant M>0 such that (y,p,v)_Y ⩽ MDH(0,0,0)· (y,p,v)_Z The continuity of DH(0,0,0) follows promptly from the definition of the norms of Y and Z. As for the surjectiveness of this mapping, let us consider (f,y_0) ∈ Z. We take (y,p,v) as the state-pressure-control tuple given by Theorem <ref>. By the estimates we proved in subsection <ref>, namely (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>), together with Lemma <ref>, the membership (y,p,v) ∈ Y is valid. Moreover, DH(0,0,0)· (y,p,v) = (y_t - ν_0Δ y +∇ p - χ_ω v, y(0)) = (f,y_0), where the last equality holds by the choice of (y,p,v); hence, DH(0,0,0) is onto Z. 
By the aforementioned estimates, (<ref>) follows easily. This establishes the lemma. §.§ Proof of Theorem <ref> According to Lemmas <ref>, <ref> and <ref>, it is licit to apply Theorem <ref>. This result allows us to deduce the existence of η > 0 such that, for each (f,y_0) ∈ Z subject to (f,y_0)_Z < η, the equation H(y,p,v) = (f,y_0) has a solution (y,p,v) ∈ Y which satisfies (y,p,v)_Y < B η, for a suitable constant B > 0 which is independent of η. Explicitly, we can take B := (M^-1 - δ)^-1, where M>0 is given by Lemma <ref> (cf. (<ref>)), and where we select the positive constant δ < M^-1 such that H satisfies condition (iii) of Theorem <ref>. Such a constant δ does in fact exist by Lemma <ref>. In particular, taking f≡ 0, inequality (<ref>) reads y_0_H^5(Ω)^N < η. Since (y,p,v) ∈ Y, we have (<ref>), and alonside (<ref>), we see that (y,p,v) does solve (<ref>). § NUMERICAL ANALYSIS §.§ Proof of the convergence of the algorithm The proof of this result is straightforward once we have established Lemmas <ref> and <ref>. We present it here for completeness. Firstly, we observe that Lemma <ref> ensures that (y^n+1,p^n+1,v^n+1) is well-defined in terms of (y^n,p^n,v^n), since in this lemma we showed that DH(0,0,0) is bijective. Furthermore, we have DH(0,0,0)^-1_ℒ(Z,Y)⩽ M, according to the notations of this lemma. Next, we take y_0 ∈ G, with y_0_H^5(Ω)^N < η, and we let (y,p,v) ∈ Y be the solution of H(y,p,v) = (0,y_0). We also consider 0<ϵ < (2M)^-1. By Lemma <ref>, there exists δ >0 such that the relations (y,p,v)∈ Y and (y,p,v) ∈ Y, (y-y, p-p, v-v)_Y ⩽δ imply DH(y,p,v) - DH(y,p,v)_ℒ(Y,Z)⩽ϵ. Shrinking η, if necessary, we can assume η⩽δ. Employing Lemma <ref> once more, we find κ = κ(y,p,v) ∈]0,1[ such that (y,p,v) ∈ Y and (y-y,p-p,v-v)_Y ⩽κ together imply H(y,p,v) - H(y,p,v) - DH(y,p,v)· (y-y,p-p,v-v)_Z ⩽ϵ(y-y,p-p,v-v)_Y. We write e^n := (y^n, p^n, v^n) - (y,p,v), and let us assume e^0_Y ⩽κ. By the algorithm, e^n+1 = - DH(0,0,0)^-1[H(y^n,p^n,v^n)-H(y,p,v) - DH(y,p,v) · e^n ] - DH(0,0,0)^-1[DH(y,p,v) - DH(0,0,0) ]· e^n, whence e^n+1_Y ⩽ M {H(y^n,p^n,v^n)-H(y,p,v) - DH(y,p,v)e^n. .+ [DH(y,p,v) - DH(0,0,0) ]· e^n}. Assuming inductively that e^n_Y ⩽κ, which holds true for n=0, it follows that e^n+1_Y ⩽ 2Mϵe^n_Y. Thus, we also have e^n+1_Y ⩽κ. By induction, it follows that e^n_Y⩽κ, for every n; hence, it is always possible to pass from (<ref>) to (<ref>). Let us take θ := 2Mϵ. Applying inequality (<ref>) iteratively in n, we conclude that e^n_Y ⩽θ^ne_0_Y. This proves Theorem <ref>. §.§ Implementation of the algorithm To implement the fixed-point numerical algorithm, we proceed in two steps. Firstly, it is necessary to implement a solver for the control problem of the forced Stokes system. We begin with the variational problem (<ref>) and adequately reformulate it to achieve a mixed formulation, as in <cit.>. Below, we recall the main ideas for N=2. After treating the linear problem, we iterate it by updating the source term according to our algorithm. Under the notations of the proof of Theorem <ref> (see (<ref>)), we define u := ρ_3^-1(L^*φ + ∇π), m := ρ_4^-1φ, and k := ρ_4^-1π. Let us introduce the spaces Z := { (m^',k^') : m^'∈ L^2(0,T; H^1_0(Ω)^2, m^'_t ∈ L^2(Q)^2, k^'∈ L^2(0,T; H^1(Ω)), ∫_Ω k^' dx = 0 a.e. 
}, and W:= L^2(Q)^2 × Z, M := L^2(0,T;H^1_0(Ω)^2)× L^2(Q), as well as the bilinear forms b_1 : W × W →ℝ, B,B_1 : W × M →ℝ by b_1((u,m,k),(u^', m^', k^')) := ∫_Q {u · u^' + χ m · m^'}d(t,x), B((u,m,k),(λ,μ)) := ∫_Q {λ·[u+ρ_3^-1(ρ_4 m)_t + ∇(ρ_4 k ) ] - ∇( ρ_3^-1λ): ∇(ρ_4 m ) } d(t,x) and B_1((u,m,k),(λ,μ)) = B((u,m,k),(λ,μ)) - ∫_Q ρ_3^-1μ∇·(ρ_4 m ) d(t,x). The last element we introduce is the linear form Λ : W →ℝ, which is given by ⟨Λ, (u,m,k) ⟩ := ∫_Q ρ_4 f m d(t,x) + ∫_Ω (ρ_4 m)(0)y_0 dx. We reformulate problem (<ref>) as: find (u,m,k) ∈ W and multipliers (λ,μ) ∈ M such that b_1((u,m,k),(u^',m^',k^')) + B_1((u^',m^',k^'),(λ,μ)) = ⟨Λ, (u^',m^',k^') ⟩, for all (u^',m^',k^') ∈ W, B_1((u,m,k),(λ^',μ^')) = 0, for all (λ^',μ^') ∈ M. After we solve it, we recover the control and corresponding velocity field of the linear control problem (<ref>) via v = - χρ_4^-1 m and y = ρ_3^-1 u. If we assume that Ω is polygonal, it is simple to find finite dimensional approximations W_h and M_h of the spaces W and M. §.§ A numerical experiment In the sequel, we will employ the FreeFem++ library of C++; see <http://www.freefem.org/ff++> for more informations. In Table <ref>, we describe the datum we used to apply the quasi-Newton method for (<ref>). We illustrate in Figure <ref> the 2D mesh of Ω, and the 3D mesh of the cylinder Q. In Figure <ref>, we show both components of the initial state y(0) = y_0. Our stopping criterion is y^n+1-y^n_L^2(Q)/y^n_L^2(Q)⩽ϵ, with ϵ = 10^-8. We took as the initial guess (y^0,p^0,v^0) = (0,0,0). We attained convergence after six iterates, with a rate of 4.68. We begin to illustrate the overall behavior of the control and of the state we computed through the plots of some cross-sections in space of them. On the one hand, for the control, we plot the x_1 = 0.9 and x_1 = 2.1 cuts in Figures <ref> and <ref>, respectively. On the other hand, we provide the surfaces comprising the values of the state components, relative to these cuts, in Figures <ref> and <ref>. The time evolution of the norms of the control and of the corresponding state is what we illustrate in Figure <ref>. It corroborates our theoretical findings as these norms decay exponentially. To further illustrate the control, we provide a surface of its values at initial time in Figure <ref>. Then, we give some insight into the dynamics of the problem by showcasing some heat maps of the control and of its corresponding state. Namely, in Figure <ref>, we illustrate the control at time t=0.15 — it is already considerably small, as we would expect from Figure <ref>. For several times, viz., for each t∈{0.15, 0.25, 0.35, 0.45}, we give a heat map of the first (respectively, second) component of the velocity field in Figure <ref> (respectively, Figure <ref>). § COMMENTS AND PERSPECTIVES §.§ On the constitutive law for the shear stress Upon scrutinizing the proof of Lemmas <ref> and <ref>, we conclude that they still hold for any function ν : ℝ^N× N→ℝ in (<ref>) having the following properties: * ν⩾ν_0, for some constant ν_0>0; * ν is of class C^3( ℝ^N× N\{ 0 }); * There exists r>0 such that |D^k ν(A)|⩽ C(1 + |A|^(r-k)^+), for k=0,1,2,3, and for every A ∈ℝ^N× N\{ 0}. With Lemmas <ref> and <ref> at hand, we can follow the remaining steps towards the main result, i.e., Theorem <ref>, in the same manner as we proceeded in Section <ref>. This more general class of constitutive laws includes the one determining the reference model of this paper, namely, ν(A) := ν_0 + ν_1|A|^r, when r∈{ 1, 2 } or r⩾ 3. 
An example of another class of functions ν for which the properties we stated above hold are ν(A) := ν_0 (1 + ν_1 |A|^2 )^r/2, r ∈{1,2}∪[3,∞[. §.§ On the use of the gradient instead of the deformation tensor We can replace the gradient of the velocity field in (<ref>) with the deformation tensor, Dy = ( ∇ y + ∇ y^T)/2, without losing any of the results we established. From a practical viewpoint, this form of the model is more realistic. Analyzing the estimates we carried out throughout the present work, it is easy to see that the techniques we employed work just as well under this substitution. In particular, we notice the new framework shares the linearization around the zero trajectory with the one we studied in Section <ref>. Using the estimates developed there, alongside Korn-type inequalities, we can prove all of the corresponding results in Sections <ref> and <ref> for this alternate version of the model (<ref>)-(<ref>). §.§ On extensions of Theorem <ref> and some related open questions Boundary controllability. We remark that a corresponding boundary local null controllability result follows from Theorem <ref>. In effect, let us assume that the initial data y_0 belongs to H^5_0(Ω)∩ V, being sufficiently small in the (strong) topology of this space, and that we act with a control on a smooth portion γ of the boundary ∂Ω (with γ≠∂Ω and γ≠∅). We can employ standard geometrical arguments to extend Ω to an open region Ω, with a smooth boundary ∂Ω, and in a way that ∂Ω\γ⊂∂Ω. Acting distributively over ω:= Ω\Ω, with y_0 extended to zero outside of Ω, we obtain a control v∈ L^2(]0,T[ ×ω) driving the corresponding state y to zero at time T. A boundary control for the original problem is y|_[0,T]×γ. Local controllability to trajectories. Regarding the local exact controllability to trajectories, there are two key aspects to investigate. Firstly, we must prove a global Carleman inequality, analogous to Proposition <ref>, but for the adjoint system of the linearization around the given trajectory, cf. Lemma <ref>. Secondly, we have to extend the estimates of Section <ref> for this linearized problem. These endeavors are not straightforward, whence we leave this question open for future investigations. On the restrictions on the exponent r. We notice that the estimates of Section <ref> are not immediately extensible for the values of r > 0 outside of {1,2}∪[3,∞[. However, we conjecture that our main result (viz., Theorem <ref>) is still in force for these values of r. A possible way to establish this is to parametrically regularize the function ν around zero, and attentively keep track of the regularization parameters throughout the estimates. We leave this question open here. Requirements on the initial datum. Through another regularization argument, we possibly could require a less restrictive topology for the initial datum in the main theorem. Namely, if we assume y_0 ∈ H only, we ought to carry out estimates for the uncontrolled problem (corresponding to (<ref>) with v≡ 0) to show that there exists t_0 ∈]0,T[ for which y(t_0,·)_H^5(Ω)^N∩ V⩽η, as long as y_0_H is sufficiently small. We choose not to delve in the technicalities of these estimates here (see <cit.> for the application of such an argument in the case of the Navier-Stokes equations with the Navier boundary condition). However, we emphasize that this is a non-trivial task. Thus, assuming this is valid, Theorem <ref> asserts that there exists a control v ∈ L^2(]t_0,T[×ω) driving y(t_0,·) to zero at time T. 
From the exponential decay of solutions, see <cit.>, this argument immediately provides a large-time global null controllability result. Remarks on other boundary conditions. We observe that, if instead of no-slip boundary conditions, we assume Navier boundary conditions, the method of <cit.>, used for the Navier-Stokes equations, may apply to the current model. If we manage to deal with the additional terms figuring in the expansions we must make after an appropriate time rescaling, especially the boundary layers, we should obtain a small-time global exact controllability to trajectories result (under Navier boundary conditions). Alternatively, if we consider the model (<ref>)-(<ref>) with Ω = 𝕋 (the N-dimensional torus) and periodic boundary conditions, then we can easily conduct the regularizing argument for the initial datum we outlined above, whence we can prove large-time global null controllability for this model — we omit the details here. Stabilization results. It might be that, for ν_1 > 0, an appropriate use of the stabilizing effect of the power-law model makes it easier to establish stabilization results for this class of non-Newtonian fluids. In this way, we propose that our current contributions could bridge such results with global null controllability ones. We remark that, even for the Navier-Stokes equations (corresponding to ν_1 = 0) under no-slip boundary conditions, whether global null controllability holds is an open problem. We suggest that such results for (<ref>)-(<ref>) (with ν_1 > 0) could provide insight on this important open question. apalike
http://arxiv.org/abs/2307.04655v1
20230710155459
Dark Matter in Fractional Gravity II: Tests in Galaxy Clusters
[ "Francesco Benetti", "Andrea Lapi", "Giovanni Gandolfi", "Balakrishna S. Haridasu", "Luigi Danese" ]
astro-ph.CO
[ "astro-ph.CO" ]
§ INTRODUCTION Galaxy clusters constitute the largest bound structures in the Universe, with dark matter (DM) masses M∼ 10^14–15 M_⊙ and sizes extending out to R∼ a few Mpcs. Most of the baryons are in the form of a hot diffuse gas, referred to as the intracluster medium (ICM), with a mass ratio over the DM very close to the cosmic fraction Ω_b/Ω_M≈ 0.16 <cit.>. The density n(r) and temperature T(r) distributions of the ICM throughout the cluster can be probed thanks to the copious X-ray powers emitted by the ICM via thermal Bremsstrahlung and high-excitation lines <cit.>. The inferred high average temperatures k_B T∼ several keVs and low average number densities make the ICM the best plasma in the Universe ever, with thermal to electrostatic energy ratios k_B T/e^2 n^1/3∼ 10^12. In addition, the pressure distribution p(r) can be probed thanks to the Sunyaev–Zel'dovich (SZ; <cit.>) effect, arising when the hot ICM electrons Compton upscatter the CMB photons crossing the cluster, tilting the latter's black-body spectrum toward high energies. In the microwave band, such a tilt mimics a diminution of the CMB temperature proportional to the Comptonization parameter y ∝∫ dℓ p(r), which encompasses the line-of-sight integral of the pressure profile. Combining X-ray and SZ data allows one to reconstruct the ICM thermodynamic profiles throughout most of the cluster volume, from the center to a few times R_500 or even beyond the virial boundaryHereafter, R_Δ indicates the radius where the average DM density is Δ times the critical density ρ_ c(z) at the redshift z of the cluster.. [-25]In massive and sufficiently relaxed clusters, the ICM is expected to settle in hydrostatic equilibrium within the overall gravitational potential well mainly provided by the DM component. Under this assumption, the gas density profile reconstructed from X-rays and the gas pressure profile from SZ data can be combined to probe the shape of the DM gravitational potential and check whether this is consistent with the DM density run extracted from N-body simulations in the ΛCDM cosmology. This is the rationale of many investigations aimed at exploiting galaxy clusters to probe modified gravity scenarios <cit.>, which have been developed to solve cosmological problems such as the origin of dark energy <cit.>, and/or to alleviate small-scales issues of the standard cold DM paradigm <cit.>. In the latter vein, a prototypical example of such theories is the modified Newtonian dynamics (MOND) framework, which was originally designed to explain galactic dynamics through a modification of Newtonian gravity (or, more generally, Newton’s second law) that comes into action at accelerations well below a definite universal threshold; in its original formulation, DM was not included, and baryons were the only source of the gravitational field. Although MOND can properly fit galactic RCs <cit.>, its performances at the scales of galaxy clusters are somewhat debated <cit.>. More connected with the present work, in the last few years, various authors have put forward the idea that fractional calculus (i.e., the field of mathematics dealing with differentiation and integration of noninteger order) could be exploited to formulate modified gravity theories <cit.>. A relevant example is the theory of Newtonian fractional–dimensional gravity by <cit.>, which introduces a generalized law of Newtonian gravity in a spatial dimension smaller than three, representing the local effective Hausdorff dimension of the matter distribution. 
Another approach by <cit.> relies on multifractional spacetimes with variable Hausdorff and spectral dimensions directly inspired from quantum gravity theories. The framework by <cit.> directly modifies the Laplacian operator in the Poisson equation to alter the dynamics followed by a test particle in a given gravitational well; a similar route is followed by <cit.>, using fractional Fourier derivatives. All these theories adopt a MONDian viewpoint where DM is not present, and the galaxy kinematics is interpreted as a pure geometrical effect. Recently, in <cit.>, we suggested that the DM component itself may originate fractional gravity. In such a framework, the DM component exists, but the gravitational potential associated to its density distribution is determined by a modified Poisson equation including fractional derivatives (i.e., derivatives of noninteger type), which are meant to describe nonlocal effects; as such, this scenario is substantially different from the above theories where baryonic matter emulates DM-like effects via modifications of gravity. In <cit.>, we showed that DM in fractional gravity worked very well for reproducing the kinematics of disk-dominated galaxies, especially dwarfs. In addition, we found preliminary evidence that the strength of fractional effects tends to weaken toward more massive systems; however, the latter finding is still subject to large uncertainties since the rotation curves of massive spirals were not probed out to radii large enough for the DM contribution to clearly emerge. In the present work, we aim to extend our previous investigation to much larger scales and test fractional gravity in galaxy clusters. Our aim is twofold: (i) perform an independent sanity check that it can accurately describe the distributions of the ICM in clusters; (ii) derive a clear-cut trend for the strength of its effects over an extended DM mass range, from dwarf galaxies to galaxy clusters. To this purpose, we forward model the density and pressure distributions of the ICM, working out the hydrostatic equilibrium equation in fractional gravity. Such theoretical framework is then compared with data from the XMM-Newton Cluster Outskirts Project (X-COPSee <https://dominiqueeckert.wixsite.com/xcop/about-x-cop>.; <cit.>), which consists of 12 clusters with well-observed X-ray and SZ data, providing density and pressure profiles over an extended radial range of ∼0.2-2 Mpc. We then perform a Bayesian analysis of the thermodynamic profiles of the X-COP sample and infer constraints on the fractional gravity parameters, for individual clusters and also for clusters stacked together. The structure of the paper is straightforward: in Section <ref>, we describe our methods and analysis; in Section <ref>, we present and discuss our results; in Section <ref>, we summarize our findings and highlight future perspectives. Throughout the work, we adopt the standard, flat ΛCDM cosmology <cit.> with rounded parameter values: a matter density Ω_M ≈ 0.3, a baryon density Ω_b ≈ 0.05, the Hubble constant H_0 = 100 h km s^-1 Mpc^-1, with h≈ 0.7. § THEORETICAL BACKGROUND AND DATA ANALYSIS In this section, we recall the basics of the fractional gravity framework, illustrate how this can be exploited to derive the pressure profile of the ICM in hydrostatic equilibrium, and describe our Bayesian analysis to constrain the fractional gravity parameters. 
§.§ DM in Fractional Gravity The density distribution of virialized halos for collisionless DM as extracted from N-body simulations in the standard ΛCDM model is routinely described via the Navarro–Frenk–White profile <cit.>: ρ(r) = ρ_s r_s^3/r (r+r_s)^2 , where r_s is a scale radius and ρ_s a characteristic density. The associated cumulative mass is given by M(<r)=4π ∫_0^r dr' r'^2 ρ(r')=M_s [ln(1+r/r_s)-r/r_s/1+r/r_s] , with M_s≡ 4π ρ_s r_s^3. In the standard (Newtonian) case, the potential Φ_ N(r) associated to a given density distribution ρ(r) is computed from the Poisson equation supplemented with appropriate boundary conditions (usually taken as a vanishing potential at infinity): ΔΦ_ N(𝐫)=4π G ρ(𝐫) where Δ is the Laplacian operator; this is an inherently local equation, in that the potential at a point depends only on the value of the density there. For the spherically symmetric NFW profile, one easily finds that Φ_ N(r) = -G M_s/r log(1+r/r_s) ; from the above expressions of the mass and potential, it is straightforward to verify that | dΦ_ N/ dr|=G M(<r)/r^2, as a direct consequence of Birkhoff's theorem. In fractional gravity, the potential Φ_ F(r) is instead derived from the modified Poisson equation <cit.> (-Δ)^s Φ_ F (𝐫) = -4π G ℓ^2-2s ρ(𝐫) where (-Δ)^s is the fractional Laplacian operator (see <cit.> for details), s∈ [1,3/2] is the fractional index (this range of values for s is required to avoid divergences; see Appendix A in <cit.>), and ℓ is a fractional length scale that must be introduced for dimensional reasons. At variance with the standard case, the fractional Laplacian is inherently nonlocal; the index s measures the strength of this nonlocality, while the length scale ℓ can be interpreted as the typical size below which gravitational effects are somewhat reduced and above which they are instead amplified by nonlocality (around r≈ℓ, the dynamics is almost unaffected and indistinguishable from the standard case). In <cit.>, we solved the fractional Poisson equation sourced by the NFW density distribution. For s∈ [1,3/2), the solution reads Φ_ F(r) = -G M_s/r_s 1/2^2s √(π) (ℓ/r_s)^2-2s Γ(3/2-s)/Γ(s+1) r_s/r {2π s/sin(2π s) [(1+r/r_s)^2s-2.. -.. (1-r/r_s)^2s-2]+(r/r_s)^2s/1-(r/r_s)^2 [(1+r/r_s) _2F_1(1,1,2s+1,r/r_s) .. + ..(1-r/r_s) _2 F_1(1,1,2s+1,-r/r_s)-4s/2s-1] }    ,    s∈[1,3/2) with Γ(s) = ∫_0^∞ dx x^s-1 e^-x being the Euler Gamma function and _2F_1(a,b,c;x) =∑_k=0^∞ (a)_k (b)_k x^k/(c)_k k! being the ordinary hypergeometric function in terms of the Pochammer symbols (q)_k defined as (q)_0=1 and (q)_k=q (q+1) … (q+k-1); plainly, Φ_ F(r) for s=1 coincides with the usual expression Φ_ N(r) of Equation (<ref>). For the limiting case s=3/2, the computation requires some principal-value regularization and the solution reads Φ_ F(r) = -G M_s/ℓ 1/π r_s/r {2 r/r_s [log(r/r_s)-1]-(1+r/r_s) log(r/r_s) log(1+r/r_s) . +. (r/r_s-1) Li_2(1-r/r_s)-(1+r/r_s) Li_2(-r/r_s) + π^2/6}    ,     s=3/2 with Li_2(x)=∑_k=1^∞ x^k/k^2 being the dilogarithm function. Being a nonlocal framework, in fractional gravity, the Birkhoff theorem does not hold, but one can insist in writing | dΦ_ F/ dr| = G M_F(<r)/r^2 in terms of an effective mass M_F(<r), which plainly will be a function of the fractional gravity parameters s and ℓ. We illustrate the effective mass profile in Figure <ref>, suitably normalized so as to remove the dependence of dimensional quantities (including ℓ), for different values of the fractional index s. 
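For concreteness, normalized profiles like those shown in Figure <ref> can be generated directly from the closed-form potentials above. The following minimal Python sketch (assuming the mpmath library for the dilogarithm) evaluates Φ_F in the limiting case s=3/2 and recovers the effective mass through M_F(<r)=r^2 | dΦ_F/ dr|/G by a centered finite difference; all dimensional prefactors are scaled out, so the output is M_F in units of M_s r_s/ℓ as a function of r/r_s.

import numpy as np
from mpmath import polylog

def Li2(z):
    # real dilogarithm Li_2(z), evaluated here only for z <= 1
    return float(polylog(2, float(z)))

def phi_32(x):
    # dimensionless potential Phi_F * ell / (G M_s) at r = x * r_s, for s = 3/2
    return -(1.0 / (np.pi * x)) * (
        2.0 * x * (np.log(x) - 1.0)
        - (1.0 + x) * np.log(x) * np.log(1.0 + x)
        + (x - 1.0) * Li2(1.0 - x)
        - (1.0 + x) * Li2(-x)
        + np.pi**2 / 6.0
    )

x = np.logspace(-2, 2, 200)                     # r / r_s
h = 1.0e-5
dphi = np.array([(phi_32(xi + h) - phi_32(xi - h)) / (2.0 * h) for xi in x])
m_eff = x**2 * np.abs(dphi)                     # M_F(<r) in units of M_s r_s / ell

for xi, mi in zip(x[::66], m_eff[::66]):
    print(f"r/r_s = {xi:8.3f}   M_F / (M_s r_s/ell) = {mi:.3e}")

The curves for 1 ⩽ s<3/2 follow in the same way from Equation (<ref>), at the cost of keeping track of the extra (ℓ/r_s)^2-2s prefactor and of the hypergeometric terms beyond r=r_s.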
With s increasing from unity, the effective mass profile steepens: in the inner region, a uniform sphere behavior (corresponding to a cored density profile) tends to be enforced, while in the outskirts the effective profile resembles that of an isothermal sphere. Note that all the normalized mass profiles intersect at very close values of r/r_s; more in detail, the profile with a given s crosses the one with s=1 at r/r_s≈ 1.58 for s=1.1, at r/r_s≈ 1.49 for s=1.3, and at r/r_s≈ 1.36 for s=1.5; plainly, in log scale, all these points appear clustered around log r/r_s≈ 0.15 and are barely discernible by eye. To have a quantitative grasp on the overall effect of fractional gravity, consider the s=3/2 case where the effective mass can be computed in terms of a relatively simple analytical expression; it reads M_ F(<r) = M_s r_s/π ℓ {2 r/r_s [log(r/r_s)-1]-log(r/r_s) log(1+r/r_s). - .Li_2(1-r/r_s)-Li_2(-r/r_s) +π^2/6}    ,     s=3/2 , and it is easily found to behave as M_ F(<r)∝ [1-3 log (r/r_s)] r^3 for r≪ r_s and as M_ F(<r)∝ r ln(r/r_s) for r≫ r_s; besides minor logarithmic corrections, the overall behavior is very similar to that of a cored isothermal sphere. §.§ Forward Modeling of the ICM Thermodynamics Assuming hydrostatic equilibrium and spherical symmetry, the distribution of the ICM in the overall gravitational potential well is ruled by the equation 1/ρ_ gas dP_ gas/ dr=-| dΦ dr| , where Φ=Φ_ DM+Φ_ gas is the total gravitational potential with main contributions from DM and gas, ρ_ gas is the gas mass density, and P_ gas is the gas pressure. One can conveniently write ρ_ gas = μ m_p n_ gas in terms of the mean molecular weight μ≈ 0.6 and of the gas number density n_ gas, which is in turn easily related to the electron density by the expression n_ gas≈ 1.8 n_e, applying for a fully ionized plasma at high temperatures and a subsolar chemical composition typical of the ICM. The observed electron density profile n_e(r) of individual clusters inferred from X-ray observations is often empirically rendered by the (simplified version of the) Vikhlinin profile <cit.> n_e(r) = n_0 (r/r_c)^-α/2 [1+(r/r_t)^-ϵ/6]/[1+(r/r_c)^2]^3 β/2-α/4 ; where n_0 is the central density, r_c and r_t are a core and a transition radius (r_c<r_t), α, β, and ϵ<5 are three slopes characterizing the inner, intermediate, and outer radial behavior. The gas mass can then be computed as M_ gas(<r)=4π ∫_0^r dr' r'^2 ρ_ gas(r') and the gas contribution to the hydrostatic balance is fully specified by |dΦ_ gas/ dr| = G M_ gas(<r)/r^2. As to the DM contribution, we can exploit the results of the previous section and write | dΦ_ DM/ dr| = G M_ F(<r)/r^2 in terms of the fractional gravity's effective mass M_ F(<r) illustrated in Figure <ref>, which depends on the parameters s and ℓ; in the standard case (corresponding to s=1), this is just the DM mass profile of Equation (<ref>). The mass profile is also a function of the NFW scale radius r_s and mass M_s; for the present analysis, it is convenient to trade off these parameters for the mass M_500 and the concentration c_500≡ R_500/r_s at the reference radius R_500. The conversion between these variables can be performed easily using the relations M_500=4π 500 ρ_ c(z) R_500^3/3 and M_500=M_s/[ln(1+c_500)-c_500/(1+c_500)] stemming from the definition of R_500 and from the adopted NFW mass distribution. 
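As a concrete illustration of this conversion, the minimal Python sketch below computes R_500, r_s and M_s from (M_500, c_500, z) with the rounded cosmological parameters adopted in this work; the input numbers are placeholders of the order of those typical for the X-COP clusters, not fits to any specific object.

import numpy as np

G = 4.301e-9                         # Mpc (km/s)^2 / M_sun
H0, Om, Ol = 70.0, 0.3, 0.7          # rounded flat LCDM values used in the text

def rho_crit(z):
    # critical density rho_c(z) = 3 H^2(z) / (8 pi G), in M_sun / Mpc^3
    Hz2 = H0**2 * (Om * (1.0 + z)**3 + Ol)
    return 3.0 * Hz2 / (8.0 * np.pi * G)

def nfw_from_m500(M500, c500, z):
    # invert M_500 = (4 pi / 3) * 500 * rho_c(z) * R_500^3, then use the NFW relations
    R500 = (3.0 * M500 / (4.0 * np.pi * 500.0 * rho_crit(z)))**(1.0 / 3.0)
    rs = R500 / c500
    Ms = M500 / (np.log(1.0 + c500) - c500 / (1.0 + c500))
    return R500, rs, Ms

R500, rs, Ms = nfw_from_m500(M500=6.0e14, c500=3.0, z=0.07)
print(f"R_500 = {R500:.2f} Mpc, r_s = {rs:.2f} Mpc, M_s = {Ms:.2e} M_sun")

For these placeholder values the sketch returns R_500 ≈ 1.25 Mpc, r_s ≈ 0.42 Mpc and M_s ≈ 9.4× 10^14 M_⊙.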
Then, the solution to the hydrostatic equilibrium equation is given by P_ gas(r) = - 1.8 μ m_p ∫_r^∞ dr' n_e(r') G [M_ F(<r')+M_ gas(<r')]/r'^2 , where the zero pressure at infinity has been taken as a boundary condition.Note that in computing the overall gravitational potential Φ, we have neglected the stellar contribution Φ_⋆, mainly originated by the brightest central galaxy; this would add a term dΦ_⋆/ dr = G M_⋆(<r)/r^2 to the integrand on the right-hand side of Equation (<ref>). For the X-COP cluster sample exploited in this work (stellar profiles were available for 5 out of 12 clusters), the related contribution has been shown by <cit.> to become relevant only for r≲ 0.02 R_500∼ 20 kpc and as such can barely influence the innermost available data point of the pressure profile; as a consequence, our results were negligibly affected, as we also checked numerically. Observationally, X-ray surface brightness and spectroscopic data can probe the electron density n_e(r) and the temperature T_ gas(r) profiles, whence the pressure profile P_ gas(r)∝ n_ gas(r) T_ gas can be derived, although sensitivity and background issues make such a determination robust only in the region out to R_500. In the outskirts, SZ observations can complement X-ray data in probing the pressure profile, though with some caveats about conversion from line-of-sight-integrated to spherically averaged quantities. The rationale of the above forward modeling of the hydrostatic equilibrium is to test the fractional gravity parameters entering in the effective mass profile M_ F(<r) by simultaneously fitting the observed electron density profile via Equation (<ref>) and the observed pressure profile via Equation (<ref>). §.§ Bayesian Data Analysis We tested the fractional gravity framework by exploiting the X-COP sample <cit.> of 12 massive galaxy clusters. The clusters are in the redshift range 0.04≲ z ≲ 0.1 and feature typical sizes R_500∼ 1–1.5 Mpc and masses M_500∼ 10^14–10^15 M_⊙. The X-COP clusters were selected to allow a robust reconstruction of the electron density and gas pressure profiles out to R_200 via a joint analysis of high-quality X-ray data from XMM-Newton and of high signal-to-noise SZ observations from Planck. Another important property of the sample is that the hydrostatic equilibrium holds to a high accuracy, with at most mild levels on nonthermal pressure support in the outermost regions, as demonstrated by the analysis of <cit.>; this is particularly important, since nonthermal effects can appreciably affect the mass estimation in the outer regions <cit.> and potentially induce spurious effects in constraining modified gravity parameters <cit.>. All in all, X-COP is currently the largest cluster sample available so far for robust mass-modeling studies over an extended radial range, and as such it has been extensively exploited to probe modified-gravity scenarios <cit.>. To estimate the parameters θ_ F≡ (s,ℓ,c_500,M_500) describing the effective mass profile M_ F(<r), alongside with those θ_n_e≡(n_0,α,β,ϵ,r_c,r_t) describing the electron density profile n_e(r), we adopted a Bayesian framework and built the joint log-likelihood logℒ(θ) = logℒ_P_ gas(θ_ F,θ_n_e) + logℒ_n_e(θ_n_e) . 
Each term in the log-likelihood reads logℒ(θ)=-χ^2(θ)/2, where the chi-square χ^2(θ)=∑_i [ℳ(θ,r_i)-𝒟(r_i)]^2/σ_𝒟^2(r_i) was obtained by comparing our empirical model expectations ℳ(θ,r_i) to the data values 𝒟(r_i) with their uncertainties σ_𝒟(r_i), summing over the different radial coordinates r_i of the data (approximately 65 points for n_e and 20 points per P_ gas, with small variations around these numbers from cluster to cluster); note that for the pressure data from SZ observations, we took into account the full covariance matrix. We adopted flat priors π(θ) on all the parameters; specifically, for those entering the effective mass profile in fractional gravity we took s∈ [1,3/2], logℓ (Mpc) ∈ [-3,3], log c_500∈ [-2,2], log M_500 (M_⊙)∈ [13,16]. We then sampled the parameter posterior distributions 𝒫(θ) ∝ℒ(θ) π(θ) via the MCMC Python package  <cit.>, running it with 10^4 steps and 200 walkers for every individual cluster; each walker was initialized with a random position uniformly sampled from the (flat) priors. After checking the auto-correlation time, we removed the first 20% of the flattened chain to ensure burn-in; the typical acceptance fractions of the various runs were in the range 30–40%. § RESULTS In Figures <ref> and <ref>, we illustrate the outcome of the fitting procedure on the 12 individual pressure and density profiles of the X-COP sample. In each panel, the best fit (solid lines) and the 2σ credible intervals sampled from the posterior (shaded areas) are shown. The reduced χ_r^2 value of the joint fit to the pressure and density profiles is also reported in Figure <ref>. Overall, the fits in the fractional gravity framework are very good. In a few cases (such as A3266 and A2319), the reduced χ_r^2 is somewhat large, but this should not raise any alarm, since the outcome is caused by some peculiar feature in the density profile reconstructed from X-ray data (oscillation in the data points at intermediate radii) or because of some outlier data in the pressure profile reconstructed from SZ (especially in the innermost or outermost radii); note that we retained all data points in our analysis, including them in the reduced χ_r^2 computation. In Figure <ref>, we illustrate the MCMC posterior distributions for two representative clusters in the sample, namely A2255 and ZW1215; for clarity, we restricted the plot to the subspace of parameters entering the effective mass profile. Magenta/contour lines display the results in our fiducial setup, where no mass prior was imposed; the white cross marks the best-fit value of the parameters. The corner plots illustrate a clear degeneracy between the fractional length-scale parameter ℓ and the DM mass M_500. This is somewhat expected since the effective mass profile entering the hydrostatic equilibrium equation scales like M_500 ℓ^2-2s. Therefore, it is possible to obtain the same normalization of the pressure profile, at a given density profile, by changing M_500 and ℓ in the same direction. Since s does not deviate much from unity, the ℓ dependence is weak, implying that to compensate a rather small change in mass requires a substantial variation in ℓ; on the other hand, this is also at the origin of the rather loose constraints that can be derived on the parameter ℓ with the present cluster sample. The situation is expected to improve if a mass prior from other probes such as weak lensing (WL) is introduced in the analysis. 
However, one must be careful and use WL mass estimates that are independent from assumptions on the shape of the lensing potential; this is because in fractional gravity, the lensing potential corresponding to a given mass distribution would be different from the standard case, thus causing an inconsistency. For five X-COP clusters (A85, A1795, A2029, A2142, and ZW1215), such nonparametric WL mass determinations are available in the literature <cit.>.Actually, in principle, fractional gravity can also alter somewhat the total depth of the gravitational potential, thus biasing the overall WL mass estimates; however, given that the fractional gravity masses estimated without WL prior and the Newtonian ones are consistent with each other within 2σ (see fifth and last column in Table <ref>), we ignored such a small bias and used the Newtonian WL masses as prior, with their uncertainties, in the fractional gravity analysis. The outcome of exploiting the WL mass prior on the marginalized distributions of the parameters is illustrated by the cyan contours/lines in Figure <ref>. The DM mass posterior estimate of ZW1215 is made considerably more precise, and as a consequence of the above degeneracy, the estimate of the fractional length scale ℓ is also appreciably tightened. In any case, the posterior distributions on all the parameters for the analysis without and with the WL mass prior are consistent within 1σ. We also tested the performance of fractional gravity by stacking the X-COP data of all the clusters in the sample. Specifically, we built stacked electron density and pressure profiles by normalizing the individual profiles of the 12 clusters at a reference radius R_500, by co-adding them in radial bins of normalized radii r/R_500, and by computing the corresponding mean and standard deviation. The outcome of this procedure is illustrated in Figure <ref>: the crosses mark the stacked profiles, and for reference, the gray lines show the individual ones. All in all, the fractional gravity frameworks fit the stacked profiles to a remarkable accuracy. Figure <ref> summarizes the posterior distributions of the fractional index s, fractional length scale ℓ, concentration c_500, and DM mass M_500. Table <ref> reports the marginalized posterior estimates (mean and 1σ credible intervals) of these parameters for all the individual X-COP clusters (including the WL mass prior when available), and for the stacked sample. On average, it is seen that the deviations of the fractional index s from unity are modest in clusters, and this originates rather loose constraints on the length scale ℓ. The inferred values of the DM mass M_500 and concentration c_500 are reasonable and consistent with that estimated by a variety of other methods in standard gravity <cit.>; we also checked that the same agreement applied for the gas fraction, as expected given the very good fits to the gas density profiles. In Figure <ref>, we checked the concentration vs. the DM mass relation for the X-COP sample in fractional gravity. To fairly compare with the relation expected from N-body simulations in the ΛCDM framework, we converted our fitting variables c_500 and M_500 at a reference radius R_500 to the corresponding values c_200 and M_200 at R_200; this is a trivial rescaling given the adopted NFW density profile. In Figure <ref>, we show as filled magenta circles the outcome for individual X-COP clusters and with a magenta cross that for the stacked sample. 
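For completeness, we note that the rescaling only requires imposing that the NFW characteristic density ρ_s = (Δ/3) ρ_c(z) c_Δ^3/[ln(1+c_Δ)-c_Δ/(1+c_Δ)] takes the same value at Δ=500 and Δ=200; a minimal Python sketch of this step, with placeholder input values, is:

import numpy as np
from scipy.optimize import brentq

def m(c):
    # NFW mass factor m(c) = ln(1+c) - c/(1+c)
    return np.log(1.0 + c) - c / (1.0 + c)

def c200_m200(c500, M500):
    # same rho_s at both overdensities: 500 c500^3 / m(c500) = 200 c200^3 / m(c200)
    target = 500.0 * c500**3 / m(c500)
    c200 = brentq(lambda c: 200.0 * c**3 / m(c) - target, c500, 10.0 * c500)
    M200 = M500 * m(c200) / m(c500)   # both masses equal M_s * m(c_Delta)
    return c200, M200

c200, M200 = c200_m200(c500=3.0, M500=6.0e14)   # placeholder inputs
print(f"c_200 = {c200:.2f}, M_200 = {M200:.2e} M_sun")

With these inputs one gets c_200 ≈ 4.6 and M_200 ≈ 8.4× 10^14 M_⊙; as it must be, c_200 > c_500 and M_200 > M_500.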
It is seen that the estimates of c_200 and M_200 in fractional gravity are fairly consistent in shape and scatter with the concentration vs. mass relation extracted from N-body simulations <cit.>. In passing, we note that the clusters A644, A1644, A2255, and A2319 have been shown not to favor an NFW mass profile, but rather a cored Burkert-like one (e.g., a Burkert, Hernquist, or pseudo-isothermal distribution) <cit.>. When forward modeling the pressure profiles in standard gravity with the NFW density distribution, this causes inconsistent results (especially in mass and concentration values) and/or a poor fit <cit.>. Contrariwise, such values and fits in fractional gravity stay reasonable and good, since the mass profile entering the hydrostatic equilibrium equation is not the true NFW mass, but the effective mass, which, as mentioned in Section <ref>, mirrors that of a cored profile. For these four clusters, we also checked that using a cored Burkert-like density distributions in place of the NFW one as an input in our fractional gravity framework did not substantially improve the fits to pressure profiles, and rather forced the fractional index to values s≈ 1 compatible with pure Newtonian gravity. In fact, fractional gravity actually reconciles the NFW density distribution from simulations with the observed galactic dynamics, which are empirically described via cored, Burkert-like profiles. Moreover, A2319 have been shown to be characterized by an appreciable nonthermal support in the outskirts <cit.>, which causes some difficulties in forward modeling and fitting the pressure profiles via the usual hydrostatic equilibrium equation in standard gravity. Instead, curiously, in fractional gravity, the fits stay good, suggesting that such a nonlocal framework may constitute an effective rendition for the effects of a nonthermal support on the pressure distribution. In Figure <ref>, we explore the scaling of the fractional gravity parameters with the DM mass. For this purpose, we put together the analysis of the X-COP clusters from this work, and the constraints coming from the fitting of stacked galaxy rotation curves by <cit.>. These joint datasets covered six orders of magnitude in DM mass from M_200∼ 10^9 M_⊙ to 10^15 M_⊙. As to the fractional index s, we confirmed the decreasing trend with the DM mass, passing from values around s≈ 1.4 in dwarf galaxies, to s∼ 1.2-1.3 in intermediate mass galaxies, to s∼ 1.1 in massive galaxies and clusters. We described the s vs. M_200 relation by a linear fit (dashed line) with shape s = a + b (log M_200(M_⊙)-11) via an orthogonal distance regression (ODR) algorithm that took into account the error bars on both axis; we obtained the best-fit parameters a = 1.24± 0.02 and b = -0.057± 0.006 and a reduced χ_r^2≈ 1.87; a nonlinear fit (solid line) s = (5/4) + (1/4) tanh[c (log M_200 (M_⊙) - d)] interpolating between asymptotic values s = 1 and 1.5 at small and large masses yielded the best-fit parameters c = -0.39± 0.06, d = 10.76± 0.25 and a reduced χ_r^2≈ 1.34. As to the fractional length scale, there was an increasing trend with the DM mass, extending the finding by <cit.> at the cluster scales. We fit the ℓ vs. M_200 relation with a linear shape ℓ (Mpc) = a + b (log M_200 (M_⊙) -11) via an ODR algorithm, to obtain the best-fit parameters a = -2.66 ± 0.09, b = 0.66 ± 0.06 and a reduced χ_r^2≈ 1.09. 
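Fits of this kind, with uncertainties on both coordinates, can be set up in a few lines with the orthogonal distance regression module of scipy. The sketch below is only illustrative: the data points are mock placeholders standing in for the actual per-object estimates (Table <ref> for the clusters and <cit.> for the galaxies), and the dependent variable is taken to be logℓ in Mpc, a reading consistent with the quoted best-fit value a=-2.66± 0.09 and with the ℓ/r_s ratios discussed just below.

import numpy as np
from scipy.odr import ODR, Model, RealData

def linear(beta, x):
    # log ell (Mpc) = a + b * (log10 M_200 / M_sun - 11)
    a, b = beta
    return a + b * (x - 11.0)

# mock placeholder points (log10 M_200, log10 ell) with errors on both axes
x  = np.array([ 9.5, 10.6, 11.8, 13.0, 14.6, 15.0])
y  = np.array([-3.7, -2.9, -2.2, -1.3, -0.3, -0.1])
sx = np.full_like(x, 0.2)
sy = np.full_like(y, 0.3)

out = ODR(RealData(x, y, sx=sx, sy=sy), Model(linear), beta0=[-2.5, 0.6]).run()
print(f"a = {out.beta[0]:.2f} +/- {out.sd_beta[0]:.2f},  b = {out.beta[1]:.2f} +/- {out.sd_beta[1]:.2f}")
print(f"reduced chi2 = {out.res_var:.2f}")

The s versus M_200 relations quoted above can be handled with the same few lines, replacing linear with the corresponding linear or tanh parametrization.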
This ℓ vs. M_200 relation was somewhat steeper than the scaling of the NFW scale radius r_s with the DM mass, so that ℓ/r_s ≈ 0.25 in dwarf galaxies, while this ratio increased to around unity at the cluster scales. § SUMMARY AND OUTLOOK Extending the analysis carried out by <cit.> on galactic scales, in this paper, we tested fractional gravity in galaxy clusters. Our aim was twofold: (i) to perform an independent sanity check that fractional gravity can accurately describe such large and massive structures; (ii) to derive a clear-cut trend for the strength of fractional gravity effects in systems with different DM masses. To fulfill this program, we forward modeled the density and pressure distributions of the intracluster medium (ICM), working out the hydrostatic equilibrium equation in fractional gravity. Then, we performed a Bayesian analysis of the X-COP galaxy cluster sample to infer constraints on the fractional gravity parameters, both for individual clusters and for the stacked sample. We found that fractional gravity performed remarkably well in modeling the ICM profiles for the X-COP sample. We also checked that the relationship between the concentration of the DM profile and the DM mass still remained consistent with the expectations of N-body simulations in the ΛCDM framework. Finally, we confirmed the weakening of the fractional gravity effects toward more massive systems and derived the overall scaling of the fractional gravity parameters from dwarf galaxies to massive clusters, over six orders of magnitude in DM mass. Such an overall trend implies that fractional gravity can substantially alleviate the small-scale issues of the standard DM paradigm, while remaining successful on large cosmological scales. In future work, we plan to investigate a theoretical explanation for the empirical scaling of the fractional gravity parameters with the DM mass. Hints may come from the connection of these parameters with different MONDian and fractional gravity theories, as partly explored by <cit.>. In fact, it has been pointed out that all these frameworks are characterized by an index (in our case s) interpolating between the Newtonian and a MOND-like regime, and by a length scale ℓ ∼ √(G M/a_0) that dimensionally can be written in terms of a MOND-like characteristic acceleration scale a_0 and of the system's mass M (baryons in the basic MOND theory, total mass dominated by DM in our case). However, the empirical scaling ℓ ∝ M^∼2/3 found here is barely consistent with this law within the uncertainties, and an ab initio explanation of the decrease of s with the mass M is difficult to envisage even in simple terms. This indicates that some crucial ingredient is still missing to build a robust theoretical background for DM in fractional gravity. On the observational side, the present work clearly shows that, whatever the ultimate origin of the fractional gravity behavior in the DM component is, most of its effects manifest at small DM masses; according to the canonical structure formation scenario, these objects must have formed at early cosmic times. Therefore, in the near future, we plan to look for signs of fractional gravity via kinematic (and possibly gravitational lensing) observations of low-mass galaxies at intermediate/high redshift, and of their relics in the local Universe; this could shed light on the mechanisms responsible for the origin and the emergence of fractional gravity across cosmic times. Conceptualization: A.L. and L.D.; methodology: F.B. 
and G.G.; validation: F.B., and B.S.H.; writing: A.L. All authors have read and agreed to the published version of the manuscript. This work was partially funded from the projects: “Data Science methods for MultiMessenger Astrophysics & Multi-Survey Cosmology” funded by the Italian Ministry of University and Research, Programmazione triennale 2021/2023 (DM n.2503 dd. 9 December 2019), Programma Congiunto Scuole; EU H2020-MSCA-ITN-2019 n. 860744 BiD4BESt: Big Data applications for black hole Evolution STudies; PRIN MIUR 2017 prot. 20173ML3WW, Opening the ALMA window on the cosmic evolution of gas, stars, and supermassive black holes; Fondazione ICSC, Spoke 3 Astrophysics and Cosmos Observations; National Recovery and Resilience Plan (Piano Nazionale di Ripresa e Resilienza, PNRR) Project ID CN-00000013 “Italian Research Center on High-Performance Computing, Big Data and Quantum Computing” funded by MUR Missione 4 Componente 2 Investimento 1.4: Potenziamento strutture di ricerca e creazione di “campioni nazionali di R&S (M4C2-19)”—Next Generation EU (NGEU). N/A We thank the anonymous referees for useful suggestions. We acknowledge C. Baccigalupi and P. Salucci for illuminating discussions. The authors declare no conflict of interest. -0cm References [custom] 999 Benetti23Benetti, F.; Lapi, A.; Gandolfi, G.; Salucci, P.; Danese, L. Dark Matter in Fractional Gravity I: Astrophysical Tests on Galactic Scales. Astrophys. J. 2023, 949, 65. Planck20introAghanim, M. et al. [Planck Collaboration]. Planck 2018 results. I. Overview and the cosmological legacy of Planck. Astron. Astrophys. 2020, 641, 1. Sarazin88Sarazin, C.L. X-ray Emission from Clusters of Galaxies; Cambridge University Press: Cambridge, UK, 1988. Cavaliere13Cavaliere, A.; Lapi, A. The astrophysics of the Intracluster Plasma. Phys. Rep. 1988, 533, 69–94 Sunyaev80Sunyaev, R.A.; Zeldovich, I.B. Microwave background radiation as a probe of the contemporary structure and history of the universe. Annu. Rev. Astron. Astrophys. 1980, 18, 537–560. Rephaeli95Rephaeli, Y. Comptonization of the Cosmic Microwave Background: The Sunyaev-Zeldovich Effect. Annu. Rev. Astron. Astrophys. 1995, 33, 541–580. Terukina14Terukina, A.; Lombriser, L.; Yamamoto, K.; Bacon, D.; Koyama, K.; Nichol, R.C. Testing chameleon gravity with the Coma cluster. J. Cosmol. Astropart. Phys. 2014, 4, 13. Wilcox15Wilcox, H.; Bacon, D.; Nichol, R.C.; Rooney, P.J.; Terukina, A.; Romer, A.K.; Koyama, K.; Zhao, G.-B.; Hood, R.; Mann, R.G.; et al. The XMM Cluster Survey: Testing chameleon gravity using the profiles of clusters. Mon. Not. R. Astron. Soc. 2015, 452, 1171–1183. Sakstein16Sakstein, J.; Wilcox, H.; Bacon, D.; Koyama, K.; Nichol, R.C. Testing gravity using galaxy clusters: New constraints on beyond Horndeski theories. J. Cosmol. Astropart. Phys. 2016, 7, 19. Haridasu21Haridasu, B.S.; Karmakar, P.; De Petris, M.; Cardone, V.F.; Maoli, R. Testing generalized scalar-tensor theories of gravity with clusters of galaxies. Phys. Rev. D 2021, submitted, arXiv:2111.01101. Harikumar22Harikumar, S.; Biesiada, M. Moffat's modified gravity tested on X-COP galaxy clusters. Eur. Phys. J. C 2022, 82, 241. Boumechta23Boumechta, Y.; Haridasu, B.S.; Pizzuti, L.; Butt, M.A.; Baccigalupi, C.; Lapi, A. Constraining Chameleon screening using galaxy cluster dynamics. Phys. Rev. D 2023, submitted, arXiv:2303.02074. Gandolfi23Gandolfi, G.; Haridasu, B.S.; Liberati, S.; Lapi, A. Looking for Traces of Non-minimally Coupled Dark Matter in the X-COP Galaxy Clusters Sample. Astrophys. J. 
2023, in press. Clifton12Clifton, T.; Ferreira, P.G.; Padilla, A,; Skordis, C. Modified gravity and cosmology. Phys. Rep. 2012, 513, 1–189. Nojiri17Nojiri, S.; Odintsov, S.D.; Oikonomou, V.K. Modified gravity theories on a nutshell: Inflation, bounce and late-time evolution. Phys. Rep. 2017, 692, 1–104. Saridakis21Saridakis, E.N.; Lazkoz, R.; Salzano, V.; Moniz, P.V.; Capozziello, S; Beltran J.J.; De Laurentis, M.; Olmo, G.J. Modified Gravity and Cosmology: An Update by the CANTATA Network. arXiv 2021 arXiv:2105.12582. Milgrom83Milgrom, M. A modification of the Newtonian dynamics as a possible alternative to the hidden mass hypothesis. Astrophys. J. 2017, 270, 365–370. Famaey12Famaey, B.; McGaugh, S.S. Modified Newtonian Dynamics (MOND): Observational Phenomenology and Relativistic Extensions. Liv. Rev. Relat. 2012, 15, 10. Verlinde17Verlinde, E.P. Emergent Gravity and the Dark Universe. Sci. Post. Phys. 2017, 2, 16. Yoon23Yoon, Y.; Park, J.-C.; Hwang, H.S. Understanding galaxy rotation curves with Verlinde's emergent gravity. Class. Quant. Grav. 2023, 40, 02LT01. Gandolfi21Gandolfi, G.; Lapi, A.; Liberati, S. Self-gravitating Equilibria of Non-minimally Coupled Dark Matter Halos. Astrophys. J. 2021, 910, 76. Gandolfi22Gandolfi, G.; Lapi, A.; Liberati, S. Empirical Evidence of Nonminimally Coupled Dark Matter in the Dynamics of Local Spiral Galaxies? Astrophys. J. 2022, 929, 48. deBlok98de Blok, W.J.G.; McGaugh, S.S. Testing Modified Newtonian Dynamics with Low Surface Brightness Galaxies: Rotation Curve FITS. Astrophys. J. 1998, 508, 132. Sanders02Sanders, R.H.; McGaugh, S.S. Modified Newtonian Dynamics as an Alternative to Dark Matter. Annu. Rev. Astron. Astrophys. 2002, 40, 263–317. Clowe96Clowe, D.; Bradac, M.; Gonzalez, A.H.; Markevitch, M.; Randall, S.W.; Jones, C.; Zaritsky, D. A Direct Empirical Proof of the Existence of Dark Matter. Astrophys. J. 1996, 648, L109. Angus06Angus, G.W.; Famaey, B.; Zhao, H.S. Can MOND take a bullet? Analytical comparisons of three versions of MOND beyond spherical symmetry. Mon. Not. R. Astron. Soc. 2006, 371, 138–146. Calcagni22Calcagni, G.; Varieschi, G.U. Gravitational potential and galaxy rotation curves in multi-fractional spacetimes. J. High Energy Phys. 2022, 8, 24. Calcagni13Calcagni, G. Multi-scale gravity and cosmology. J. Cosmol. Astropart. Phys. 2013, 12, 41. Varieschi20Varieschi, G.U. Newtonian Fractional-Dimension Gravity and MOND. Found. Phys. 2020, 50, 1608–1644. Varieschi21Varieschi, G.U. Newtonian fractional-dimension gravity and rotationally supported galaxies. MNRAs 2021, 503, 1915–1931. Giusti20aGiusti, A. MOND-like fractional Laplacian theory. Phys. Rev. D 2020, 101, 124029. Giusti20bGiusti, A.; Garrappa, R.; Vachon, G. On the Kuzmin model in fractional Newtonian gravity. EPJP 2020, 135, 798. Garcia22Garcia-Aspeitia, M.A.; Fernandez-Anaya, G.; Hernandez-Almada, A.; Leon, G.; Magana, J. Cosmology under the fractional calculus approach. Mon. Not. R. Astron. Soc. 2022, 517, 4813–4826. Borjon22Borjon-Espejel, S.; Escalante-Martinez, J.E.; Padilla-Longoria, P. Newtonian gravity and MOND: A fractional Fourier approach. Indian Journ. Phys. 2022, 96, 3405–3411. Eckert17Eckert, D.; Ettori, S.; Pointecouteau, E.; Molendi, S.; Paltani, S.; Tchernin, C. The XMM cluster outskirts project (X-COP). Astron. Nachr. 2017, 338, 293–298. Ettori19Ettori, S.; Ghirardini, V.; Eckert, D.; Pointecouteau, E.; Gastaldello, F.; Sereno, M.; Gaspari, M.; Ghizzardi, S.; Roncarelli, M.; Rossetti, M. Hydrostatic mass profiles in X-COP galaxy clusters. Astron. 
Astrophys. 2019, 621, 39. Ghirardini19Ghirardini, V.; Eckert, D.; Ettori, S.; Pointecouteau, E.; Molendi, S.; Gaspari, M.; Rossetti, M.; De Grandi, S.; Roncarelli, M.; Bourdin, H.; et al. Universal thermodynamic properties of the intracluster medium over two decades in radius in the X-COP sample. Astron. Astrophys. 2022, 621, 41. Eckert22Eckert, D.; Ettori, S.; Pointecouteau, E.; van der Burg, R.F.J.; Loubser, S.I. The gravitational field of X-COP galaxy clusters. Astron. Astrophys. 2022, 662, 123. Aghanim20Aghanim, M. et al. [Planck Collaboration]. Planck 2018 results. VI. Cosmological parameters. Astron. Astrophys. 2020, 641, A6. Navarro97Navarro, J.F.; Frenk, C.S.; White, S.D.M. A Universal Density Profile from Hierarchical Clustering. Astrophys. J. 1997, 490, 493. Vikhlinin06Vikhlinin, A.; Kravtsov, A.; Forman, W.; Jones, C.; Markevitch, M.; Murray, S.S.; Van Speybroeck, L. Chandra Sample of Nearby Relaxed Galaxy Clusters: Mass, Gas Fraction, and Mass-Temperature Relation. Astrophys. J. 2006, 640, 691. Biffi16Biffi, V.; Borgani, S.; Murante, G.; Rasia, E.; Planelles, S.; Granato, G.L.; Ragone-Figueroa, C.; Beck, A.M.; Gaspari, M.; Dolag, K. On the Nature of Hydrostatic Equilibrium in Galaxy Clusters. Astrophys. J. 2016, 827, 112. Ansarifard20Ansarifard, S.; Rasia, E.; Biffi, V.; Borgani, S.; Cui, W.; De Petris, M.; Dolag, K.; Ettori, S.; Movahed, S.M.S.; Murante, G.; et al. The Three Hundred Project: Correcting for the hydrostatic-equilibrium mass bias in X-ray and SZ surveys. Astron. Astrophys. 2020, 634, 113. Pizzuti20Pizzuti, L.; Sartoris, B.; Borgani, S.; Biviano, A. Calibration of systematics in constraining modified gravity models with galaxy cluster mass profiles. J. Cosmol. Astropart. Phys. 2020, 4, 24. Foreman-Mackey13Foreman-Mackey, D.; Hogg, D.W.; Lang, D.; Goodman, J. emcee: The MCMC Hammer. Publ. Astron. Soc. Pac. 2013, 125, 306. Herbonnet20Herbonnet, R.; Sifon, C.; Hoekstra, H.; Bahe, Y.; van der Burg, R.F.J.; Melin, J.-B.; von der Linden, A.; Sand, D.; Kay, S.; Barnes, D. CCCP and MENeaCS: (updated) weak-lensing masses for 100 galaxy clusters. Mon. Not. R. Astron. Soc. 2020, 497, 4684–4703. Dutton14Dutton, A.A.; Maccio, A.V. Cold dark matter haloes in the Planck era: Evolution of structural parameters for Einasto and NFW profiles. Mon. Not. R. Astron. Soc. 2014, 441, 3359–3374. Eckert19Eckert, D.; Ghirardini, V.; Ettori, S.; Rasia, E.; Biffi, V.; Pointecouteau, E.; Rossetti, M.; Molendi, S.; Vazza, F.; Gastaldello, F.; et al. Non-thermal pressure support in X-COP galaxy clusters. Astron. Astrophys. 2019, 621, 40.
http://arxiv.org/abs/2307.04116v1
20230709081305
Neutron scattering and muon-spin spectroscopy studies of the magnetic triangular-lattice compounds $A_2$La$_2$NiW$_2$O$_{12}$ ($A$ = Sr, Ba)
[ "B. C. Yu", "J. Y. Yang", "D. J. Gawryluk", "Y. Xu", "Q. F. Zhan", "T. Shiroka", "T. Shang" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mtrl-sci" ]
plain Preprint: August 12, 2023, These authors contributed equally Key Laboratory of Polar Materials and Devices (MOE), School of Physics and Electronic Science, East China Normal University, Shanghai 200241, China These authors contributed equally Institute of High Energy Physics, Chinese Academy of Sciences (CAS), Beijing 100049, China Spallation Neutron Source Science Center (SNSSC), Dongguan 523803, China Laboratory for Multiscale Materials Experiments, Paul Scherrer Institut, CH-5232 Villigen PSI, Switzerland Key Laboratory of Polar Materials and Devices (MOE), School of Physics and Electronic Science, East China Normal University, Shanghai 200241, China Key Laboratory of Polar Materials and Devices (MOE), School of Physics and Electronic Science, East China Normal University, Shanghai 200241, China Laboratory for Muon-Spin Spectroscopy, Paul Scherrer Institut, CH-5232 Villigen PSI, Switzerland Laboratorium für Festkörperphysik, ETH Zürich, CH-8093 Zürich, Switzerland [Corresponding authors: ][email protected] Key Laboratory of Polar Materials and Devices (MOE), School of Physics and Electronic Science, East China Normal University, Shanghai 200241, China Chongqing Key Laboratory of Precision Optics, Chongqing Institute of East China Normal University, Chongqing 401120, China We report on the geometrically frustrated two-dimensional triangular-lattice magnets A_2La_2NiW_2O_12 (A = Sr, Ba) studied mostly by means of neutron powder diffraction (NPD) and muon-spin rotation and relaxation (µSR) techniques. The chemical pressure induced by the Ba-for-Sr substitution suppresses the ferromagnetic (FM) transition from 6.3 K in the Ba-compound to 4.8 K in the Sr-compound. We find that the R3̅ space group reproduces the NPD patterns better than the previously reported R3̅m space group. Both compounds adopt the same magnetic structure with a propagation vector k = (0, 0, 0), in which the Ni^2+ magnetic moments are aligned ferromagnetically along the c-axis. The zero-field µSR results reveal two distinct internal fields (0.31 and 0.10 T), caused by the long-range ferromagnetic order. The small transverse muon-spin relaxation rates reflect the homogeneous internal field distribution in the ordered phase and, thus, further support the simple FM arrangement of the Ni^2+ moments. The small longitudinal muon-spin relaxation rates, in both the ferromagnetic- and paramagnetic states of A_2La_2NiW_2O_12, indicate that spin fluctuations are rather weak. Our results demonstrate that chemical pressure indeed changes the superexchange interactions in A_2La_2NiW_2O_12 compounds, with the FM interactions being dominant. Neutron scattering and muon-spin spectroscopy studies of the magnetic triangular-lattice compounds A_2La_2NiW_2O_12 (A = Sr, Ba) T. Shang Accepted 08-Jul-2023. Received 18-Jun-2023; in original form 23-May-2023 ================================================================================================================================ 3pt § INTRODUCTION 8pt Geometric frustration occurs when a system of interacting spins is unable to find its lowest energy state because of how the spins are arranged. This property plays an important role at microscopic scales in solids. In particular, in certain cases, such as in spin glasses, spin ice, and spin liquids <cit.>, the localized magnetic moments interact through competing exchange interactions that cannot be simultaneously satisfied, thus giving rise to a highly degenerate magnetic ground state. 
For instance, in a spin-liquid system, the constituent spins are highly correlated, but still strongly fluctuating down to zero temperature <cit.>. Such fluctuations lead to remarkable collective phenomena such as emergent gauge fields and fractional excitations <cit.>. Most cases of magnetic frustration have a simple geometric origin <cit.>, usually occurring in materials with a 2D triangular- or kagome lattice, or a 3D pyrochlore lattice, etc., with the nearest-neighbor interactions being antiferromagnetic (AFM) <cit.>. A two-dimensional triangular lattice with antiferromagnetic interactions provides one of the prototypes of magnetic frustration <cit.>. The perovskite-derived compounds A_4B'B_2O_12 (A = Sr, Ba, La; B' = Mn, Co, Ni; B = Sb, Te, W, Re) represent one such system <cit.>. Depending on the valence states of the B' and B atoms, the A site can be occupied by either a Sr^2+ (Ba^2+) or La^3+ ion, or by their combinations. Here, the magnetic B' ions form a layered structure with a 3-fold site symmetry [see Fig. <ref>(a) for the B' = Ni^2+ case]. Since the magnetic B' layers are well separated by the nonmagnetic A- and BO_6 layers, the former give rise to a magnetic quasi-2D triangular lattice, which can potentially host magnetic frustrations. To date, different magnetic ground states have been found to occur in the A_4B'B_2O_12 family <cit.>, whose magnetic properties are thought to be determined mostly by the competition between the ferromagnetic (FM) B'-O-B-O-B' and antiferromagnetic B'-O-O-B' superexchange interactions, shown by solid- and dashed lines in Fig. <ref>(c) <cit.>. The spin state of the magnetic B' ions plays a decisive role in the competition between the two superexchange interactions. As a consequence, A_4CoB_2O_12 (effective spin S = 1/2 for Co^2+) and Ba_2La_2NiW_2O_12 (S = 1 for Ni^2+) are reported to be ferromagnetic, while Ba_2La_2MnW_2O_12 (S = 5/2 for Mn^2+) is reported to be antiferromagnetic <cit.>. Similar superexchange interactions and their competition have been observed in other triangular-lattice magnets, e.g., Ba_3B'Nb_2O_9 <cit.> and AAg_2B'(VO_4)_2 <cit.>. Unsurprisingly, such closely competing interactions can be tuned by either external pressure or by chemical substitution, each of which can introduce lattice distortions and modify the bond lengths and angles <cit.>, thus tuning the magnetic order and frustration. For example, in A_4CoB_2O_12, the chemical pressure (i.e., the substitution of Ba with Sr and/or La, or W with Re) can tune the FM transition temperature <cit.>. However, the effects of chemical pressure on the magnetic properties of A_4NiB_2O_12 have not been investigated in detail. To clarify the above issues, in this paper, we synthesized polycrystalline samples of A_2La_2NiW_2O_12 (A = Sr, Ba) and studied their magnetic properties by means of magnetization-, specific heat-, neutron scattering-, and muon-spin rotation and relaxation (µSR) measurements. The chemical pressure is introduced by substituting Ba with Sr, which suppresses the FM transition temperature from 6.3 down to 4.8 K, while the magnetic moments of the Ni^2+ ions are ferromagnetically aligned along the c-axis in both compounds. Our results suggest that the chemical pressure indeed changes the superexchange interactions in A_2La_2NiW_2O_12, with the B'-O-B-O-B' superexchange path dominating the competition between the FM and AFM interactions. 
External pressure on Sr_2La_2NiW_2O_12 or chemical substitution on the Ni site may further tune the magnetic interactions and lead to magnetic frustration. § EXPERIMENTAL DETAILS The A_2La_2NiW_2O_12 (A = Sr, Ba) polycrystalline samples were prepared by the solid-state reaction method. Stoichiometric amounts of La_2O_3, BaCO_3, SrCO_3, NiO, and WO_3 powders were used to prepare the materials. The La_2O_3 rare-earth oxide was annealed for 15 hours in atmosphere to remove moisture. The powders were then mixed, ground, and sintered at 1200^∘C for 24 hours. After grinding the samples again, the powders were pressed into pellets and sintered at 1200^∘C for an additional 48 hours. The magnetic-susceptibility and heat-capacity measurements were performed on a Quantum Design magnetic property measurement system (MPMS) and physical property measurement system (PPMS), respectively. Neutron powder diffraction (NPD) measurements were carried out at the Swiss Neutron Source SINQ of the Paul Scherrer Institute in Villigen, Switzerland. The A_2La_2NiW_2O_12 powder samples were introduced into cylindrical vanadium cans (8 mm in diameter and 50 mm high) and mounted on a helium cryostat stick (2–300 K). High-resolution room-temperature NPD patterns were recorded at the powder diffractometer HRPT [Ge (822), λ = 1.154 Å]. To discern the magnetic diffraction peaks, high-intensity NPD patterns were collected at 1.7 K on the DMC diffractometer using a longer wavelength [pyrolytic graphite (002), λ = 2.458 Å]. The collected NPD patterns were analyzed using the Rietveld package of the FullProf suite <cit.>. The bulk µSR measurements were carried out at the general-purpose surface-muon instrument (GPS) of the Swiss muon source at Paul Scherrer Institut, Villigen, Switzerland. In this study, we performed two types of experiments: zero-field (ZF) and longitudinal-field (LF) µSR measurements. In both cases, we aimed at studying the temperature evolution of the magnetically ordered phase and the spin fluctuations. The µSR spectra were collected upon sample heating and then analyzed with the software package <cit.>. § RESULTS AND DISCUSSION §.§ Magnetic susceptibility The A_2La_2NiW_2O_12 samples were first characterized by magnetic-susceptibility measurements. Figures <ref>(a) and (d) show the temperature-dependent magnetic susceptibility χ(T) collected in an applied magnetic field of 0.1 T using a zero-field-cooling (ZFC) protocol. χ(T) shows a sharp increase close to T_c, the temperature where the Ni^2+ moments give rise to a FM order. The Curie temperatures T_c can be determined from the derivative of the susceptibility with respect to temperature, dχ/dT [see Fig. <ref>(c) and (f)], which, in a 0.1-T applied field, provides a T_c of 6.3 and 4.8 K for Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12, respectively. The magnetic susceptibility was also measured under various magnetic fields up to 6 T. As shown in Fig. <ref>(b) and (e), as the magnetic field increases, the transition becomes broader and T_c moves to higher temperatures, both features typical of ferromagnetic materials. The insets in Fig. <ref>(a) and (d) show the Curie-Weiss fits to the inverse susceptibility (solid lines), which yield a Weiss temperature θ_p = 7.4 K for Ba_2La_2NiW_2O_12 and θ_p = 8.4 K for Sr_2La_2NiW_2O_12. The positive θ_p values indicate that FM interactions are dominant in both compounds. The estimated effective moments are μ_eff = 3.17 μ_B and 3.13 μ_B for Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12, respectively. 
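The Curie-Weiss analysis quoted above can be reproduced with a short script of the following kind. This is an illustrative sketch (not the authors' analysis code); it assumes molar susceptibility in cgs units, for which μ_eff ≈ √(8C) μ_B, and uses a synthetic χ(T) curve in place of the measured data.

```python
import numpy as np

# Placeholder paramagnetic-regime data (T in K, chi in emu mol^-1 Oe^-1);
# real input would come from the MPMS measurement files.
T   = np.linspace(20, 300, 50)
chi = 1.25 / (T - 7.4)          # synthetic Curie-Weiss curve for illustration

# Linear fit of the inverse susceptibility: 1/chi = (T - theta_p) / C
slope, intercept = np.polyfit(T, 1.0 / chi, 1)
C       = 1.0 / slope           # Curie constant (emu K mol^-1 Oe^-1)
theta_p = -intercept * C        # Weiss temperature (K)

# Effective moment in Bohr magnetons, mu_eff ~ sqrt(8 C) for cgs molar units
mu_eff = np.sqrt(8.0 * C)
print(f"theta_p = {theta_p:.1f} K, mu_eff = {mu_eff:.2f} mu_B")
```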
Both are close to the theoretical value of spin-only Ni^2+ ions (2.83 μ_B), i.e., assuming a quenching of the orbital moment, typical of octahedral complexes <cit.> — such as the NiO_6 units in Fig. <ref>(a). The FM ground state was further confirmed by field-dependent magnetization measurements (see Fig. <ref>). For T < T_c, a small yet clear magnetic hysteresis loop is observed. For both materials, the magnetization starts to saturate for μ_0H > 5 T. After substituting the Ba with Sr, the magnetism becomes softer. The coercive field of Ba_2La_2NiW_2O_12 is about 67 mT, while, in Sr_2La_2NiW_2O_12, it decreases to 4 mT. Thus, in A_2La_2NiW_2O_12, the chemical pressure suppresses both the magnetization and the T_c, hence suggesting an enhancement of the magnetic competition. Nevertheless, the FM interactions remain dominant also in Sr_2La_2NiW_2O_12. §.§ Heat capacity We measured the zero-field heat-capacity of A_2La_2­Ni­W_2­O_12 from 2 to 300 K. The low-T heat-capacity data were also collected under various external fields, up to 9 T. As shown in Fig. <ref>, in both compounds, there is a sharp λ-like transition at low temperatures, typical of long-range magnetic order. The C(T) data show a distinct peak at T_c = 6.1 and 4.7 K for Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12, which are consistent with the T_c values determined from magnetization data (see Fig. <ref>). To extract the magnetic contribution, the normal-state (i.e., T ≫ T_c) specific-heat data were fitted to C/T = γ + βT^2, where γ≡ 0, due to the insulating nature of both compounds [see solid lines in Fig. <ref>(a) and (d)]. The derived β values are 0.0013 and 0.0012 J/mol-K^4 for Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12, which yield a Debye temperature θ_D = 142 and 145 K, respectively. After subtracting the phonon contribution (i.e, the βT^2 term), the magnetic specific heat C_m/T vs. temperature is plotted in Fig. <ref>(b) and (e) for Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12, respectively. Upon increasing the magnetic field, the peak at T_c becomes broader and moves to higher temperatures, once more confirming the FM nature of the magnetic transition in both materials. The zero-field magnetic entropy S_m(T) obtained by integrating C_m(T)/T is shown in Fig. <ref>(c) and (f) for Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12, respectively. In both compounds, at temperatures close to T_c, S_m reaches Rln(2) (corresponding to S = 1/2). In Ba_2La_2NiW_2O_12, at temperatures above T_c, S_m reaches Rln(3) (corresponding to S = 1), while in Sr_2La_2NiW_2O_12, S_m is slightly smaller than Rln(3). Such a deviation is most likely due to an over-subtraction of the phonon contribution from the specific-heat data. To properly subtract the phonon contribution and estimate the magnetic entropy, heat-capacity measurements on the non-magnetic counterparts, as e.g., A_2La_2ZnW_2O_12, are highly desirable. §.§ Neutron diffraction To determine the crystal- and magnetic structures of A_2La_2NiW_2O_12, neutron powder diffraction patterns were collected at both the paramagnetic (300 K)- and ferromagnetic states (1.7 K). The room-temperature patterns were first analyzed by using the space group R3̅m (No. 166), as reported in previous studies <cit.>. With this model, the powder x-ray diffraction (XRD) patterns could be fitted reasonably well with a goodness of fit χ_r^2 ∼ 7. 
However, in case of the NPD patterns, although the Bragg peaks were located at the right positions, the R3̅m space group yielded a fairly large χ_r^2 ∼ 18, as evinced also from the clear discrepancy between the observed- and calculated intensities. This indicates that the space group R3̅m does not describe the crystal structure of A_2La_2NiW_2O_12 compounds accurately and, thus, further corrections to the structural model are required. Considering that neutron diffraction is more sensitive to the oxygen atoms than x-ray diffraction <cit.>, the oxygen positions are most likely to require corrections. We found that the space group R3̅ (No. 148) reproduces the NPD patterns quite well. In fact, both R3̅m and R3̅ groups belong to the trigonal system, with the latter exhibiting slightly different oxygen positions. Figures <ref>(a) and (b) show the Rietveld refinements of NPD at 300 K using the R3̅ space group for both compounds. These refinements yield a significantly reduced χ_r^2 ∼ 2, thus confirming that, in both cases, the R3̅ space group is more appropriate than R3̅m. With R3̅, the NiO_6 and WO_6 octahedra rotate in opposite directions around the c-axis, which breaks the mirror symmetry. A similar symmetry breaking has been observed also in the Ba_2La_2NiTe_2O_12 compound <cit.>. The refined lattice parameters, atomic positions, and bond lengths/angles, together with the goodness of fits are summarized in Table <ref> for A_2La_2NiW_2O_12 compounds. To clarify the magnetic structure of Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12, the NPD patterns were also collected in the magnetically ordered state (i.e., 1.7 K) using long wavelength neutrons (λ = 2.458 Å). The LeBail fits of the magnetic diffraction patterns reveal a commensurate magnetic structure with a propagation vector k = (0, 0, 0) for A_2La_2NiW_2O_12 compounds. For such a magnetic vector, the little group G_k is identical to the space group R3̅ and it includes the symmetry elements 1, 3^+, 3^-, 1̅, 3̅^+, and 3̅^- <cit.>. The magnetic unit cell of A_2La_2NiW_2O_12 possesses a single orbit with only one site located at the Ni (0, 0, 0) position. For k = (0, 0, 0), G_k has six different irreducible representations (irreps) τ1, τ2, τ3, τ4, τ5, and τ6, among which only τ1, τ3, and τ5 allow for a long-range magnetic order at the Ni site. Table <ref> summarizes the basis vectors of τ1, τ3, and τ5 irreps calculated with BasIreps. For the R3̅ space group, the Ni atoms are located at the 3a site (0, 0, 0), invariant under all the symmetry operations. As a consequence, all the allowed irreps generate a FM coupling with the spins aligned along the c-axis for τ1, or lying within the ab-plane for τ3 and τ5 (see details in Table <ref>). According to the Rietveld refinements of the 1.7-K NPD pattern [see Fig. <ref>(c) and (d)], the best fits were obtained by using the τ1 irrep, yielding the smallest χ_r^2 = 1.93 and 2.77 for Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12, respectively. The refined magnetic structure is shown in Fig. <ref>(b). The magnetic moments of Ni atoms obtained from the refinements are 1.94(2) and 1.84(3) μ_B for Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12, consistent with their saturation magnetization (see Fig. <ref>). §.§ ZF- and LF-µSR The large gyromagnetic ratio of muons, combined with their availability as 100% spin-polarized beams, makes ZF-µSR a very sensitive probe for investigating magnetic materials. 
Here, to study the magnetic properties of A_2La_2NiW_2O_12 at a local level, we collected a series of ZF-µSR spectra at temperatures covering both the paramagnetic- and ferromagnetic states. Since neutron diffraction data suggest FM ground states for both Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12 (with the Ni^2+ moments aligned along the c-axis), for our µSR measurements we focused on Ba_2La_2NiW_2O_12 due to its slightly higher T_c value. In a magnetic material with long-range order, the time evolution of the ZF-µSR asymmetry, A_ZF(t), encodes both the intrinsic magnetic fields and their distribution at the muon-stopping site <cit.>. The ZF-µSR spectra of Ba_2La_2NiW_2O_12 collected at different temperatures are shown in Fig. <ref>(a). In the paramagnetic state (T > T_c), the ZF-µSR spectra exhibit a relatively slow muon-spin depolarization (∼0.5–1 µs^-1 at 10 K), indicating rather weak spin fluctuations. Considering the two muon-stopping sites in Ba_2La_2NiW_2O_12, attributed to two distinct oxygen sites (see Table <ref>), the ZF-µSR spectra in the paramagnetic state were analyzed using the following model: A_ZF(t) = ∑_i=1^2 A_i e^-λ^L_i t. Here, λ^L_i represent the longitudinal muon-spin relaxation rates, while A_i are the asymmetries of the two nonequivalent muon-stopping sites. In the FM state (T < T_c), the ZF-µSR spectra are characterized by highly damped oscillations, typical of long-range magnetic order. These are clearly visible in Fig. <ref>(b), where short-time oscillations are superimposed on a long-time slow relaxation. The ZF-µSR spectra in the FM state were, hence, analyzed using the following model: A_ZF(t) = ∑_i=1^2 A_i[αcos(ω_i t+ϕ)e^-λ^T_i t + (1-α)e^-λ^L_i t]. Here, α and 1-α are the oscillating (i.e., transverse) and nonoscillating (i.e., longitudinal) fractions of the µSR signal, respectively, which together add up to the initial asymmetry A_i of each component (A_1 and A_2). In polycrystalline materials with long-range magnetic order, one expects α = 2/3, since statistically one third of the muon spins are aligned parallel to the local field direction (i.e., S_μ∥ B_int) and, hence, do not precess; ω_i (=γ_μ B_i^int) represents the muon-spin precession frequency, with γ_μ = 2π×135.5 MHz/T the muon gyromagnetic ratio and B_i^int the local field sensed by the muons; λ^T_i are the transverse muon-spin relaxation rates, reflecting the internal field distributions; ϕ is a shared initial phase. The derived fitting parameters are summarized in Fig. <ref>(c)-(e). The B_i^int, λ^T_i, and λ^L_i all show a distinct anomaly at T_c. The T_c determined from ZF-µSR is consistent with the value determined from magnetic susceptibility and heat capacity (see Figs. <ref> and <ref>). As shown in Fig. <ref>(c), below T_c, there are two distinct internal fields, here reflecting the two different muon-stopping sites. In the FM state, the temperature evolution of B^int_i(T) resembles a typical mean-field curve. To estimate the zero-temperature internal field, B^int_i(T) was analyzed by means of a phenomenological model: B^int_i(T) = B^int_i(0) [1-(T/T_c)^γ]^δ, where B^int_i(0) is the zero-temperature internal field, while γ and δ represent two empirical parameters. As shown by solid lines in Fig. <ref>(c), the above model describes the data reasonably well, yielding B^int_1(0) = 0.30 T and B^int_2(0) = 0.10 T for Ba_2La_2NiW_2O_12. The resulting power exponents are γ = 5.5(2) and δ = 0.54(2) for B_1^int(T), and γ = 4.6(2) and δ = 0.26(1) for B_2^int(T), respectively. 
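To make the phenomenological analysis above concrete, the fit of the internal field could be coded along the following lines. This is a hedged sketch using synthetic points rather than the measured B^int_i(T) values; the parameter names mirror the equation B^int(T) = B^int(0)[1-(T/T_c)^γ]^δ.

```python
import numpy as np
from scipy.optimize import curve_fit

def b_int(T, B0, Tc, gamma, delta):
    """Phenomenological internal field: B(T) = B0 * [1 - (T/Tc)^gamma]^delta."""
    x = np.clip(1.0 - (T / Tc) ** gamma, 0.0, None)   # vanishes above Tc
    return B0 * x ** delta

# Synthetic example data below Tc (in Tesla); real input would be the muSR fit results
T_data = np.array([1.5, 2.5, 3.5, 4.5, 5.0, 5.5, 6.0])
B_data = b_int(T_data, 0.30, 6.3, 5.5, 0.54)

popt, pcov = curve_fit(b_int, T_data, B_data, p0=[0.3, 6.3, 4.0, 0.5])
B0, Tc, gamma, delta = popt
print(f"B(0) = {B0:.2f} T, Tc = {Tc:.2f} K, gamma = {gamma:.1f}, delta = {delta:.2f}")
```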
The lack of any anomalies in B^int_i(T) below T_c is consistent with the simple FM structure of Ba_2La_2NiW_2O_12 (see Fig. <ref>). In fact, in some complex magnetic materials with multiple transitions, one observes a more complex B^int(T), since changes in magnetic structure are reflected in the local-field distribution <cit.>. The transverse muon-spin relaxation rate λ^T reflects the static magnetic field distribution at the muon-stopping site and is also affected by dynamical effects such as spin fluctuations, while its longitudinal counterpart λ^L is solely determined by spin fluctuations. The λ_i^T(T) of Ba_2La_2NiW_2O_12 exhibits the typical behavior of magnetic materials with long-range order <cit.>, i.e., diverging at T_c and continuously decreasing well inside the magnetic state [see Fig. <ref>(d)]. In the paramagnetic state, λ_i^T is zero, due to the lack of an ordered magnetic moment in the absence of an external field. The λ_i^L(T) in Fig. <ref>(e) shows a similar behavior to the λ_i^T(T), i.e., λ_i^L(T) diverges near T_c, followed by a significant drop at T < T_c, indicating that spin fluctuations are the strongest close to the onset of the FM order. Note that the absolute values of the longitudinal relaxation are much smaller than the transverse ones. Thus, at 1.5 K, λ^L/λ^T ∼ 0.097 and 0.002 for the two different muon-stopping sites. In the paramagnetic state (i.e., T > 8 K), λ_i^L is also very small, suggesting weak spin fluctuations in both the ferromagnetic and paramagnetic states of Ba_2La_2NiW_2O_12. Such weak spin fluctuations are further supported by LF-µSR measurements. Figure <ref> shows the 2-K LF-µSR spectra collected in a longitudinal field of 0.1 and 0.5 T. Once the external field exceeds the internal field (here, ∼ 0.3 T), the µSR spectra become almost flat. This suggests that, in Ba_2La_2NiW_2O_12, the muon spins are fully decoupled from the electronic magnetic moments in a field of 0.5 T. § DISCUSSION Although our comprehensive set of measurements suggests that both Ba_2La_2NiW_2O_12 and Sr_2La_2NiW_2O_12 have FM ground states, the magnetic susceptibility and neutron diffraction results indicate that the competition between FM- and AFM couplings is indeed tuned by the chemical pressure induced by the substitution of Ba with the smaller Sr ions. To understand this, we examine the crystal-structure parameters of A_2La_2NiW_2O_12 (see details in Table <ref>), including the bond lengths and angles. The latter are directly related to the magnetic superexchange interactions and, thus, control the magnetic properties. In A_4B'B_2O_12, the B'O_6 octahedra share their corners with the BO_6 octahedra via oxygen atoms, thus leading to two superexchange interaction paths, i.e., B'-O-B-O-B' and B'-O-O-B' [see details in Fig. <ref>(c)]. According to the Goodenough-Kanamori rule, which predicts the sign of the competing superexchange interactions <cit.>, the B'-O-B-O-B' superexchange interaction (with ∠O-B-O ∼ 90^∘) favors a FM coupling, while the B'-O-O-B' path (with ∠B'-O-O ∼ 120-180^∘) allows for an AFM coupling. Although the R3̅ space group implies reduced O-B-O and B'-O-O bond angles with respect to the previously reported R3̅m space group <cit.>, the change is such that the FM or AFM character of the superexchange interactions is maintained. For instance, in Ba_2La_2NiW_2O_12, R3̅m gives ∠Ni-O2-O2 = 137.2^∘ and ∠O2-W-O2 = 86.7^∘; while in R3̅, these bond angles become 121.5^∘ and 84.5^∘. 
Consequently, the B'-O-B-O-B' and B'-O-O-B' superexchange interaction paths remain valid also in the R3̅ space group. The competition between these FM and AFM interactions eventually determines the magnetic ground state of A_4B'B_2O_12. Since Sr has a smaller atomic radius than Ba, by replacing Ba with Sr, the lattice constants along the a- and c-axis are reduced by 1.14% and 2.81%, respectively, the Ni-O bond length decreases from 2.064 Å to 2.051 Å, while the Ni-O2-O2 bond angle decreases from 121.50^∘ to 120.62^∘. By contrast, the W-O bond length and the O2-W-O2 bond angle are less affected, most likely because the W-O2 layer is further away from the Ba- or Sr-layers [see Fig. <ref>(a)]. The O2-W-O2 bond angle increases slightly from 84.51^∘ to 84.53^∘. The changes of the Ni-O2-O2 and O2-W-O2 bond angles induced by chemical pressure (i.e., the substitution of Ba by Sr) tune the competition between the FM- and AFM superexchange interactions in A_2La_2NiW_2O_12. Physical pressure might further tune the competition between the FM- and AFM interactions, and yield magnetic frustration. Previous studies reveal that the magnetic ground states of A_4B'B_2O_12 can also be tuned by chemical substitution on the B sites <cit.>. Substituting the Ni on the B' site may enhance the B'-O-O-B' AFM interactions and stabilize an AFM ground state. For instance, Ba_2La_2MnW_2O_12 shows an AFM order below 1.7 K <cit.>. The Ni^2+ ions can also be substituted by Cu^2+ ions, but the latter case has not yet been studied, although it may provide another interesting candidate for magnetic frustration. Finally, the introduction of magnetic ions on the A site (e.g., the substitution of Ba^2+ or Sr^2+ with Eu^2+), whose magnetic interactions can compete with the above superexchange interactions, may lead to exotic magnetic properties. § CONCLUSION To summarize, we studied the effects of chemical pressure on the magnetic triangular-lattice compounds A_2La_2NiW_2O_12 (A = Sr, Ba). Their magnetic properties (due to the Ni^2+ ions) were investigated by means of magnetic susceptibility, specific heat, neutron diffraction, and µSR spectroscopy. When replacing Ba with Sr, chemical pressure is introduced, which can tune the competition between the FM- and AFM superexchange interactions. While the Curie temperature T_c is suppressed from 6.3 K to 4.8 K, the FM interactions still persist in Sr_2La_2NiW_2O_12. According to the refinements of the neutron diffraction patterns, in both compounds, the magnetic moments of the Ni atoms are aligned along the c-axis, with a propagation vector k = (0, 0, 0). By using ZF-µSR measurements, we could follow the temperature evolution of the spin fluctuations and of the local magnetic fields. The estimated internal fields at zero temperature for the two different muon-stopping sites are 0.31 and 0.1 T. The small transverse muon-spin relaxation rates λ_T in the ordered phase confirm the simple FM structure of A_2La_2NiW_2O_12. In both materials, spin fluctuations are rather weak, as reflected in the small longitudinal muon-spin relaxation rates in both the ferromagnetic- and paramagnetic states. In the future, it could be interesting to check whether combined physical pressure and chemical substitution on the A and B' sites can further tune the magnetic competition in Sr_2La_2NiW_2O_12, and eventually lead to magnetic frustration or to a quantum spin-liquid state. This work was supported by the Natural Science Foundation of Shanghai (Grants No. 
21ZR1420500 and 21JC1402300), Natural Science Foundation of Chongqing (Grant No. 2022NSCQ-MSX1468), and the Schweizerische Nationalfonds zur Förderung der Wissenschaftlichen Forschung (SNF) (Grants No. 200021_188706 and 206021_139082). Y.X. acknowledges support from the Shanghai Pujiang Program (Grant No. 21PJ1403100) and the Natural Science Foundation of China (Grant No. 12274125).
http://arxiv.org/abs/2307.04536v1
20230710130127
DADO -- Low-Cost Selection Strategies for Deep Active Design Optimization
[ "Jens Decke", "Christian Gruhl", "Lukas Rauch", "Bernhard Sick" ]
cs.LG
[ "cs.LG", "cs.CE" ]
DADO -- Low-Cost Selection Strategies for Deep Active Design Optimization Jens Decke, Christian Gruhl, Lukas Rauch, Bernhard Sick =========================================================================================================================== In this experience report, we apply deep active learning to the field of design optimization to reduce the number of computationally expensive numerical simulations. We are interested in optimizing the design of structural components, where the shape is described by a set of parameters. If we can predict the performance based on these parameters and consider only the promising candidates for simulation, there is an enormous potential for saving computing power. We present two selection strategies for self-optimization to reduce the computational cost in multi-objective design optimization problems. Our proposed methodology provides an intuitive approach that is easy to apply, offers significant improvements over random sampling, and circumvents the need for uncertainty estimation. We evaluate our strategies on a large dataset from the domain of fluid dynamics and introduce two new evaluation metrics to determine the model's performance. Findings from our evaluation highlight the effectiveness of our selection strategies in accelerating design optimization. We believe that the introduced method is easily transferable to other self-optimization problems. Self-Optimization, Self-Supervised-Learning, Design-Optimization, Active-Learning, Numerical-Simulation § INTRODUCTION High-performance computing (HPC) systems have a high energy demand, which results in significant carbon dioxide emissions contributing notably to climate change <cit.>. Numerical simulations, for instance, computational fluid dynamics (CFD) or finite element analysis (FEA), widely used in industry and research, often demand days to weeks of computing time on HPC systems, posing a particular concern in this regard. Examples include simulations for weather predictions, structural dynamics, and electrodynamics. Reducing the number of numerical simulations or accelerating them could lead to significant savings in energy consumption and the reduction of carbon dioxide emissions. Design optimization (DO) aims to determine the optimal shape of components and typically involves many numerical simulations to identify the best design for pre-defined constraints. In recent years, deep learning methods have been emerging in the field of DO to accelerate numerical simulations or to improve the overall performance <cit.>. Nevertheless, there is still a need for massive annotated training datasets. The annotations are acquired through numerical simulations, which are computationally expensive. To tackle this problem, we propose an approach to reduce the number of computer simulations required in DO processes with deep active learning (DAL) for regression. We refer to this as deep active design optimization (DADO). In DAL, the objective is to train a deep learning model, while actively selecting and annotating the most useful samples from a non-annotated dataset <cit.>. The criteria for determining the most valuable samples depend on the specific application area and will be explicitly defined later for our use case. 
Unlike traditional passive learning approaches, which require a large amount of annotated data, DAL aims to reduce the annotation effort by iteratively selecting the most useful samples for annotation. In DAL, the selection is typically based on selection strategies, such as uncertainty or diversity sampling <cit.>. These strategies aim to identify samples that are expected to improve the model's performance the most. The selected samples are then annotated by an oracle, which could be a human expert or a computer simulation. The annotated samples are used to update the deep learning model, incorporating the newly annotated samples into the training process. This iterative cycle of self-selecting samples, annotating samples, and updating the model continues until a pre-defined stopping criterion is achieved. The main advantage of DAL for regression is the potential to achieve high performance with less annotated data compared to traditional supervised learning approaches <cit.>. We conduct experiments on a real-world DO use case (cf. Figure <ref>) in the problem domain of fluid dynamics and thermodynamics, where flow deflections significantly contribute to efficiency losses in technical systems, such as piping systems for industrial heating and cooling. The objective is to discover a design that both reduces pumping power and ensures sufficient cooling capacity. This is a typical multi-objective and multi-physics optimization problem. Our approach employs DAL to reduce the number of computer simulations required for DO by selecting only the most valuable samples (i.e., those that are expected to yield the best performance gains), rather than accelerating individual simulations. We begin with a small number of randomly drawn annotated samples (i.e., designs) and a large data-pool of non-annotated samples (i.e., design candidates). The selection strategy iteratively selects design candidates to be evaluated by the computer simulation (i.e., expert model) to provide the ground truth annotation. The objective of this approach is to maximize performance with as few requests to the expert model as possible by selecting only those design candidates that are expected to be the most valuable for the model's performance. In typical DAL scenarios, the primary objective is to attain high predictive performance across the entire dataset. In DADO, the primary objective is to find a multi-objective optimal solution with as few candidate evaluations as possible. Since we are only interested in promising candidates, the predictive performance must only be high for these candidates, and it is not necessary to discriminate between mediocre and bad candidates. Consequently, our interest lies in a prediction model that exhibits strong performance and generalization within the feature space (i.e., design space) where the optimal solution is likely to reside. Metaphorically, this concept can be linked to a shrouded mountain range, where the peaks of different mountains emerge above a dense layer of fog. Rather than focusing on the entirety of the mountain, we solely concentrate on the elevated summits. One challenge in DAL is that the selection of promising design candidates for annotating can be biased towards certain regions of the design space <cit.> which results in bad model generalization. In contrast, we deliberately induce a bias by exploiting only the most promising regions in the design space. 
Thus, since conventional selection strategies are not well-suited to address our primary objective, we have developed two low-cost selection strategies that enable a model training within the relevant design space. They are characterized by their ease of implementation, low computational cost, and high effectiveness in finding promising design candidates. We refer to them as L2-Select and L2-Reject, as they select or reject design candidates based on the L2 norm. The proposed selection strategies are also applicable to other self-optimization problems and can be used to guide decision-making. Additionally, we propose two metrics tailored to the DAL regression problem to monitor and evaluate the model's performance at each iteration. Two scenarios with high and low annotation budgets with different DAL experiment parameters are investigated. This experience report presents our proposal to address DO problems using DAL methods. In addition to the publicly available code [<https://git.ies.uni-kassel.de/jdecke/lcs4dado>] developed on an open access dataset <cit.> we provide reproducible experiments and the following contributions to the research area. * We conduct initial research in applying DAL in the domain of DO as an optimization method to efficiently discover promising design candidates, therefore reducing the number of numerical simulations. * We propose two novel low-cost selection strategies for multi-objective DADO. Additionally, we introduce two metrics to evaluate the model's performance. * We make the first steps towards a deep generative active learning-based model for DO. The report also presents and discusses the challenges we encountered during this process. The remainder of this article is structured as follows. Section <ref> briefly overviews related work, focusing specifically on deep learning in design optimization and active learning. Section <ref> delves into the considered problem domain, providing a concise discussion on design optimization and focusing on the domain of fluid dynamics. Furthermore, we introduce our dataset, outlining its relevance to our research. Moving on to Section <ref>, we present our methodology in detail, describing how we trained a deep neural network and highlighting the selection strategies employed. Section <ref> is dedicated to the experimental setup and its results, where we compare our newly developed selection strategies against random strategies, providing insightful analyses and statistical observations. In Section <ref>, we present an idea to extend the described method to include a variational autoencoder (VAE) for future research. Finally, the article is concluded in Section <ref>. § RELATED WORK The optimization of design is a fundamental problem in engineering that has been extensively investigated over several decades <cit.>. Recently, there is a growing interest in employing machine learning methods to study DO problems. This interest is spurred by two factors: first, the emergence of new additive manufacturing techniques, which enable the production of free-form designs <cit.>; and second, the availability of computing power that allows the resolution of complex and relevant industrial problems <cit.>. For example, a current study shows the possibilities of combining DO and additive manufacturing of electromagnets <cit.>. Nie et al. <cit.> proposed TopologyGAN in 2021. It is used to optimize the stress and strain in a simple truss structure by comparing it with a baseline conditional generative adversarial network. 
The authors generate a dataset comprising already optimized truss structures, which were dependent on the size and direction of the load. The model's generalization capability was evaluated by applying unknown load boundary conditions. Although TopologyGAN did not perform optimization, it was able to identify an optimal truss structure for changed boundary conditions. The authors of <cit.> employed a graph neural network; with knowledge of the boundary conditions, they aim to generalize to previously unobserved or superimposed numerical meshes. A study from 2022 investigates if anomaly detection algorithms can be used to solve DO problems <cit.>. A significant problem is the tradeoff between exploration and exploitation. The key finding is that anomaly detection can be used to explore the design space. Still, there is a great difficulty in exploitation because anomaly detection algorithms would consider a design candidate as already detected whose target value is only slightly better than an already known one. The methodology in this work seeks to focus on exploitation without compromising exploration. Genetic algorithms (GA) such as the Non-dominated Sorting Genetic Algorithm 2 are well-established methods for solving DO problems; however, their convergence speed is rather slow <cit.>. In 2022, Parekh et al. developed a generative model for electrical machines with multiple topologies by using VAE in conjunction with a prediction network <cit.>. They concatenated the design parameter spaces of two distinct machine topologies and trained a latent representation that was highly effective in reconstructing the input. The latent dimension employed was defined to be greater than the design parameter space of the more complex machine topology in the latent space. Consequently, the latent representation did not compress any information of the input, and we hypothesize that the network learned the identity of the input designs only. The prediction network extended the capabilities of the VAE to enable it to predict objective values in a supervised manner. The dataset of both machines used in their study included 28,278 designs, which is a considerable amount of data. In real-world scenarios, DO problems do not typically provide such a large dataset. So our approach aims to use a significantly smaller number of design candidate with the help of DAL without compromising the model's prediction performance. Unfortunately, it was not possible to reproduce and extend their ideas because the code and data were not publicly available. To the best of our knowledge, DAL was not yet directly applied in DO. Nevertheless, Deng et al. introduce a comparable approach called Self-directed Online Learning Optimization for topology optimization in 2022 <cit.>. This approach integrates neural networks (NN) with numerical simulation data. The NN learns and substitutes the target as a function of design variables. At the same time, a small subset of training data is generated dynamically based on the NN prediction of the optimum. The NN fits the new training data and provides a better prediction in the region of interest until convergence. New design candidates selected for numerical evaluation are generated by adding a disturbance to the best design of the previous iteration, similar to mutation and crossover of GA. The main difference between the work of Deng et al. and this article is how the selection strategy performs. 
We focus on low-cost selection strategies, while they added disturbance to their design parameters. Furthermore, we have a vast dataset available to conduct our experiments offline. A request to the computer simulation can be answered instantaneously by drawing a design from the data-pool. § USE CASE The DO methodology developed in this work is based on a use case from the field of fluid dynamics and thermodynamics, but can also be applied to other problems and domains such as aerospace engineering, automotive industry, civil engineering, architecture, and manufacturing. In aerospace engineering, DO is used to improve the performance and efficiency of aircraft components, such as wings, fuselage, and engines. In the automotive industry, DO is employed to enhance the performance and safety of vehicles, such as improving aerodynamics, reducing emissions, and increasing efficiency of electromagnets <cit.>. In civil engineering, DO is applied to optimize the design of structures such as bridges, buildings, and dams, in terms of strength, stability, and cost. In architecture, DO is used to improve building performance regarding energy efficiency, natural light, and structural integrity. In manufacturing, DO is employed to optimize the design of products, such as reducing material waste and improving production efficiency. Our use case is a U-Bend flow channel. They can be found in various technical systems and applications, particularly those involving fluid transport or heat transfer. They are commonly employed in heat exchangers, such as condensers and evaporators, where they facilitate the transfer of heat between a fluid and its surroundings. U-bend flow channels can also be utilized in piping systems, refrigeration systems, air conditioning systems, and hydraulic systems to redirect or control the flow of fluids. The parameterization of the U-Bend is depicted in Figure <ref>. It is described with 28 design parameters and two target values. The parameterized geometry utilizes six boundary points, illustrated in green, with each boundary point offering two design parameters that are allowed to vary within their respective dashed bounding boxes. Additionally, we incorporate 16 curve parameters to connect these boundary points. In Figure <ref>, we present exemplary the pressure distribution of a particular design candidate, obtained through numerical simulation using the expert model. In a subsequent post-processing analysis, the pressure loss is computed based on this simulated solution. The design parameters determine the shape of the flow deflection, while the target values represent the pressure loss in [Pa] and the cooling capacity, which is quantified as the squared temperature difference between the heating surface and the cooling medium in [K^2m^2]. A small temperature difference corresponds to a high cooling capacity. The dataset comprises three distinct data formats for each design. However, for the purpose of this study, our focus lies solely on the parameter representation of the designs. This particular representation is chosen due to its streamlined and efficient nature, making it ideally suited for our methodology. The data is freely available and can be found in <cit.>, providing additional information on this specific use case and the numerical investigations to obtain the data. § METHODOLOGY §.§ DAL Process We present the methodology in Figure <ref>. 
The DAL process starts by randomly selecting initial_size design candidates X_train_0 for training (depicted as a grey box) from a data-pool (depicted as a blue box). Based on the design candidates X_train_{i}, the Expert Model determines the corresponding target values y_train_{i}, where i is the iteration loop count, indicating how many times the process has looped. Subsequently, the design candidates and the target values are used to train the Meta Model in a supervised manner. After training, the Meta Model predicts the target values of draw_size design candidates X_draw_{i} (depicted as a green box). These predictions are passed to the Selector. In every iteration, draw_size random design candidates X_draw_{i} are bootstrapped. Based on the selection strategy, the Selector chooses a subset of design candidates X_aq_{i} with the acquisition size aq_size. The Expert Model determines the true target values, and the iteration loop finishes by adding the newly acquired designs to the training dataset. Each training cycle starts with newly initialized weights of the Meta Model. This loop iterates until a defined number of iterations n_iter is reached. Expert Model: The Expert Model is not directly needed in this work, since a large annotated data-pool is available. It can therefore be simulated, ensuring simple and fast pool-based experimentation and evaluation. However, the introduced experimental procedure can also be used in an online setting, where the Expert Model has to generate annotations on the fly. Meta Model: This study utilizes a multi-layer perceptron (MLP) as the Meta Model, with the first hidden layer consisting of 200 neurons and the second hidden layer comprising 100 neurons. A leakyReLU activation function and a dropout layer are applied to each hidden layer to enhance the model's generalization performance. The dropout rate is set to a constant value of 0.1. For the regression task, the output layer consists of one linear neuron for each of the two target values. Both the learning rate and the batch size are kept constant; the learning rate is set to 0.0005 and the batch size to 4. Early stopping is applied if the training error does not decrease further within 10 epochs. The hyperparameters were determined based on the results of preliminary studies. The weights of the best-performing epoch are reloaded to evaluate the model's performance. At each process iteration, the model is trained from scratch to avoid potential bias toward data selected in earlier iterations <cit.>. §.§ Selection Strategies We developed two simple but efficient selection strategies for DADO, named L2-Select (L2S) and L2-Reject (L2R). These strategies are simple in the sense that they are model-agnostic and only require point estimates of the targets. With these selection strategies, there is no need to rely on complex, computationally expensive, and sensitive methods for uncertainty modeling. First, draw_size design candidates are bootstrapped from the entirety of the non-annotated data-pool to prevent test data leakage and to ensure an unbiased test of the model after each iteration. Subsequently, the target values y⃗_n of these design candidates are determined by the Meta Model.
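To make the Meta Model that produces these predictions concrete, the sketch below mirrors the stated architecture (hidden layers of 200 and 100 neurons, leakyReLU, dropout 0.1, two linear outputs, learning rate 0.0005); it is only a minimal illustration, and anything not stated above, such as the choice of optimizer and all variable names, is an assumption.

import torch
import torch.nn as nn

class MetaModel(nn.Module):
    # MLP surrogate mapping the 28 design parameters to the two target values.
    def __init__(self, n_inputs=28, n_targets=2, dropout=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 200), nn.LeakyReLU(), nn.Dropout(dropout),
            nn.Linear(200, 100), nn.LeakyReLU(), nn.Dropout(dropout),
            nn.Linear(100, n_targets),  # one linear neuron per target value
        )

    def forward(self, x):
        return self.net(x)

model = MetaModel()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0005)  # optimizer choice is an assumption
loss_fn = nn.MSELoss()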
The goal of the strategies is to choose aq_size candidates from the target value set y_draw_i:

y_draw_i = { y⃗_n | y⃗_n ∈ J_1 × J_2 } with |y_draw_i| = draw_size

where J_1 and J_2 represent the objectives (for the use case, J_1: pressure loss, J_2: cooling performance). For the L2S selection strategy, the aq_size design candidates with the smallest magnitude (L2-norm) of the target value vector y⃗_n are selected, cf. Equation (<ref>). The L2-norm

L2S(y⃗) = |y⃗| = √(∑_j=1^num_obj y_j^2)

is calculated as the square root of the sum of the squared elements y_n,j of the target vector, as shown in Equation (<ref>), and the candidates are taken in ascending order of this norm:

{ L2S(y⃗_n) | y⃗_n ∈ y_draw_i, L2S(y⃗_n) ≤ L2S(y⃗_n+1) }

A graphical interpretation of this strategy is provided in Figure <ref>. L2R uses an adapted variant of the L2-norm whose origin corresponds to the component-wise maximum predicted target values y_max,j of the currently considered design candidates from y_draw_i, cf. Equation (<ref>); its graphical interpretation is given in Figure <ref>. Instead of selecting the design candidates with the lowest values of L2R(y⃗_n), the first draw_size - aq_size design candidates are rejected, cf. Equation (<ref>), and we select the remaining aq_size design candidates which are not rejected:

y_max,j = max{y_n,j}, y⃗_n ∈ y_draw_i

L2R(y⃗) = √(∑_j=1^num_obj (y_j - y_max,j)^2)

{ L2R(y⃗_n) | y⃗_n ∈ y_draw_i, L2R(y⃗_n) ≤ L2R(y⃗_n+1) }

When comparing the two selection strategies in more detail, the differences in the choice of design candidates can be highlighted more clearly. In Figure <ref>, 400 design candidates are plotted following a multivariate Gaussian distribution. The selection strategy separates the aq_size selected design candidates from the unselected design candidates, which are shown in blue. Design candidates selected by both selection strategies are indicated in purple, while design candidates marked in red or green highlight the differences between the two strategies. We propose L2R as an alternative to the L2S strategy because it may offer several advantages. Firstly, we assume that L2R effectively accounts for design candidates that reside at the edges of the target space, which are often overlooked by the L2S strategy. Additionally, the design candidates selected by L2R are more likely to correspond to a Pareto front, which is a key objective in multi-objective optimization. In contrast, design candidates drawn from the core of the distribution are less likely to offer diversity in the design space, which leads us to assume that L2R would be the preferred selection strategy. We compare these selection strategies against each other and against a random selection. § EXPERIMENTS §.§ Setup We define two experimental scenarios: a low-budget experiment (S1) and a high-budget one (S2). The main difference between the two scenarios is the number of initial design candidates X_train_0 for training and the number of design candidates X_aq_i added to the training dataset per iteration. S1 has an initial_size of 100 design candidates, its draw_size is set to 400, and the acquisition size aq_size is varied over {10, 20, 25, 50} design candidates per iteration until a budget of 500 design candidates is exhausted.
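As a brief aside, the two selection strategies used throughout these experiments reduce to a few lines of array arithmetic; the following NumPy sketch renders the equations above under the assumption that the Meta Model predictions for the draw_size candidates are stored row-wise in an array y_pred of shape (draw_size, 2), with all function names being illustrative.

import numpy as np

def select_l2s(y_pred, aq_size):
    # L2-Select: keep the aq_size candidates with the smallest L2-norm of the predicted targets.
    norms = np.linalg.norm(y_pred, axis=1)
    return np.argsort(norms)[:aq_size]

def select_l2r(y_pred, aq_size):
    # L2-Reject: reject the draw_size - aq_size candidates closest to the per-objective maximum y_max,j.
    origin = y_pred.max(axis=0)
    dist = np.linalg.norm(y_pred - origin, axis=1)   # adapted L2-norm
    return np.argsort(dist)[-aq_size:]               # keep the candidates farthest from the worst corner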
We selected the experimental parameters for our DAL experiments based on the observation that datasets in the domain of DO are generally very small. Thus, the parameters for experiment S1 were chosen to represent a real-world scenario. Scenario S2 consists of 500 initial design candidates X_train_0 and acquires {50, 100, 125, 200} design candidates X_aq_i per loop execution from its draw_size of 2000 design candidates until a budget of 1500 is reached. With S2, we show the process again with a larger budget, as is typically used for DO, although the amount of data can still be considered very small for deep learning applications. All experiments for the multi-objective optimization are performed using the L2S, the L2R, and a random selection strategy. In addition, each experiment is performed with 5 different random seeds to ensure a representative evaluation. The different DAL experiment parameters that are investigated are summarized in Table <ref>. To evaluate the experiments presented, we employ various metrics, including the mean square error (MSE), the Spearman rank-order correlation coefficient (SROCC), as well as the mean rank (MR) and intersection metrics, which we introduce below. The results of the conducted experiments are summarized with the help of the area under the learning curve (AUC) metric in Table <ref>. This allows us to evaluate an entire optimization process with a single value per metric. To calculate the SROCC and MR metrics, it is necessary to sort the estimated target values y⃗_n of y_draw_i and the true values y_draw_i_true according to their magnitude. To do so, we sort the currently drawn candidates X_draw_i based on the current selection strategy S (i.e., L2S, L2R, random) and the true performance values, cf. Equation (<ref>). The set K contains the indices from the sorted candidate set that correspond to the candidates in X_aq. The MR metric is then the average of the first aq_size indices of the set K:

K = { k | x⃗_n = z⃗_k, x⃗_n ∈ X_aq, z⃗_k ∈ sort(X_draw_i, S(y_draw_i_true)) }

MR = 1/aq_size ∑_k ∈ K k

Additionally, we normalize the MR to lie between 0 and 1, where 0 corresponds to its optimal value, which depends on aq_size, and the MR after the first process iteration is assumed to be the highest value of the process. The optimal value of the MR metric would therefore be aq_size · 1/2. The SROCC is calculated using the first aq_size indices of both sorted lists as input and outputs a value between 0 and 1, where a value of 1 indicates that the aq_size design candidates of the predicted values match the correct sorting of the true values. The intersection metric assesses the accuracy of the top-rated designs. The metric is relatively simple: it compares the aq_size selected candidates X_aq_i against X_aq_i_true ⊂ X_draw_i, the aq_size candidates selected based on the ground-truth performance. The intersection of both sets directly yields an accuracy based on the cardinality of the intersection, cf. Equation (<ref>); the name of the metric derives from the intersection operation:

intersection = |X_aq_i ∩ X_aq_i_true| / aq_size

We prioritize the SROCC, MR, and intersection metrics over the classic MSE for DADO, as an accurate ranking of designs is more crucial than precise estimates of the target values. With the true ranking of the designs, the true y_aq_{i} values are calculated using the Expert Model. Nevertheless, we assume that DAL will lead to an improvement of the MSE of the added designs X_aq_{i} after each iteration.
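As a rough sketch of how these ranking-based metrics can be computed (assuming NumPy/SciPy; tie handling and the normalization of the MR are simplified, and all names are illustrative):

import numpy as np
from scipy.stats import spearmanr

def intersection_metric(selected_idx, true_top_idx, aq_size):
    # |X_aq ∩ X_aq_true| / aq_size
    return len(set(selected_idx) & set(true_top_idx)) / aq_size

def mean_rank(selected_idx, true_order):
    # Average rank of the selected candidates within the true ordering (rank 1 = best).
    rank = {cand: r for r, cand in enumerate(true_order, start=1)}
    return float(np.mean([rank[c] for c in selected_idx]))

def srocc(pred_scores, true_scores):
    # Spearman rank-order correlation between predicted and true scores of the top candidates.
    rho, _ = spearmanr(pred_scores, true_scores)
    return rho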
§.§ Results In Table <ref>, we present the AUC and the final value at the end of the process for all experiments and metrics. S1 is highlighted in green and S2 in blue, while the three selection strategies are differentiated by varying shades of gray. The results indicate that L2S outperforms L2R in every single experiment and that the random strategy consistently yields the worst results, except in the case of rnd_MSE, which is the MSE between y_draw_i and its true annotations. Additionally, for S1 the quality of the results does not simply increase with the aq_size. For S2, the experiment with the smallest aq_size provides the predictions with the highest performance based on the best_MSE, the MR, and the SROCC. The best_MSE is the MSE between y_aq_i and its true annotations. The superiority of the random selection strategy in the rnd_MSE metric is attributable to the bias that we attempt to impose through our selection strategies, whereby the model's predictions are expected to perform better in regions where the selection strategies assume promising values. As such, it is reasonable that models trained on the design candidates suggested by L2S and L2R perform worse in other regions. The best_MSE identifies the MSE that was evaluated on the selected design candidates; this metric monitors the predictive performance of our model. To look at the results more closely, we have selected one experiment each for S1 and S2, which we discuss in more detail below. Figure <ref> shows the results for S1 with an aq_size of 25 design candidates per iteration. Since our initial_size is 100 design candidates and our budget is 500 design candidates in total, we iterate 16 times. In Figure <ref>, we present the intersection metric, the SROCC in Figure <ref>, the rnd_MSE in Figure <ref>, and the best_MSE in Figure <ref>. The lines represent the mean values obtained from the five runs conducted for each experiment. In addition to the mean values, the plot displays the standard-error intervals for each metric and selection strategy. Throughout the course of the experiment, it is evident that the intersection metric and the SROCC show an increasing trend, while the rnd_MSE and the best_MSE exhibit a decreasing trend. Although the random strategy shows good predictive performance on the randomly selected design candidates, it underperforms compared to the two selection strategies on the promising design candidates. The benefits of the low-cost selection strategies become apparent upon examining the following metrics. The intersection metric shows that the process develops a form of self-awareness over the course of the iterations and is increasingly able to select suitable design candidates for multi-objective DO. However, it becomes apparent that the intersection metric is too strict for random selection: despite being able to improve, models trained on the basis of random selection are still unable to satisfy this metric. Therefore, the MR and the SROCC are introduced as alternative metrics. While the visualization of the MR has been omitted due to limited space, it has proven to be a useful metric for comparing experiments (see Table <ref>). The SROCC shows a similar qualitative trend to the intersection metric, with L2S outperforming L2R and the random strategy. However, it also reveals that the random strategy improves over the iterations in sorting the draw_size design candidates by rank based on their target values, which is not reflected by the intersection metric.
Unexpectedly, the L2S strategy outperforms the L2R strategy, which may be attributed to the nature of the available data. The selection strategy was originally designed for a multivariate Gaussian distribution; however, as illustrated in Figure <ref>, the two scaled target values of the real data do not conform to a Gaussian distribution, hence the solution quality of L2S exceeds that of L2R in this use case. As stated before, the S2 experiment with an aq_size of 50 produced the best results. Therefore, we examine the results of this experiment more closely in Figure <ref>. Since S2 has a budget of 1500 design candidates, 20 iterations were completed. When examining the results from S2, it becomes evident that the standard-error intervals in the experiments are considerably reduced due to the larger budget. Additionally, the metrics are notably improved when compared to S1. The disparities between the selection strategies mentioned earlier are also more distinct in the evaluation of S2 but are in line with the outcomes previously discussed for S1. Detecting any noticeable variation in prediction quality based on the rnd_MSE is challenging for both L2S and L2R; the best_MSE values exhibit almost identical patterns and trends. Nonetheless, differences in the performance between L2R and L2S can be observed with the aid of the intersection and SROCC metrics. Notably, a decline in the slope of the curve can be inferred for random selection, as indicated by the SROCC. Also noteworthy is the high fluctuation of the best_MSE under random selection, from which it can be concluded that the prediction performance on the selected design candidates is considerably lower. Comparing the four metrics between the final iteration of S1 (cf. Figure <ref>) and the initial iteration of S2 (cf. Figure <ref>), a considerable performance improvement is observed in favor of S1, despite both scenarios having an equal budget at that stage. This finding supports the effectiveness and benefits of our methodology in DO. § TOWARDS GENERATIVE DEEP ACTIVE DESIGN OPTIMIZATION Based on our confidence in the feasibility of performing self-optimizing multi-objective optimization using DAL, we aim to augment the Meta Model in the presented process with a VAE. Similar to Parekh et al. <cit.>, we extend the VAE with an additional prediction network and thereby obtain a multi-task regression and reconstruction model. As described in Section <ref>, we believe that their VAE exclusively learned the identity of the two motor topologies. The reason for this is the chosen size of the latent space and the fact that the training set used does not represent a real-world DO scenario. Our idea is to embed the VAE into the DAL process presented above. As a selection strategy, a clustering approach in the latent representation shall be applied to separate areas of promising design candidates from other, less well-performing design candidates. The generative properties of the VAE will then be used to specifically generate new design candidates that belong to the promising area of the latent space. The smaller the latent size is, the easier it will be for clustering methods to separate these areas, but the more challenging the subsequent reconstruction of the design candidates might be. The prediction network runs parallel to the decoder of the VAE from the latent space. With the help of this additional network, the latent space can be divided based on the predicted target values, to enable clustering.
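A minimal sketch of such a multi-task Meta Model is given below; since this architecture is still work in progress, the layer sizes, the latent dimension, and the loss weighting are illustrative assumptions rather than final design choices.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GenerativeMetaModel(nn.Module):
    # VAE over the design parameters with an additional prediction head on the latent code.
    def __init__(self, n_params=28, latent_dim=4, n_targets=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_params, 64), nn.LeakyReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.LeakyReLU(), nn.Linear(64, n_params))
        self.predictor = nn.Sequential(nn.Linear(latent_dim, 32), nn.LeakyReLU(), nn.Linear(32, n_targets))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.decoder(z), self.predictor(z), mu, logvar

def vae_loss(x, x_hat, y, y_hat, mu, logvar, beta=0.1):
    rec = F.mse_loss(x_hat, x)                                      # reconstruction term
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL term, weighted by beta
    pred = F.mse_loss(y_hat, y)                                     # target prediction term
    return rec + beta * kld + pred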
Although numerous experiments have been conducted to optimize the structure and hyperparameters of the VAE, a suitable trade-off between a well-separable latent space and a reconstruction error that is not excessively large has yet to be found. If the reconstruction error is too large, a high deviation between the prediction of the Meta Model and the Expert Model is to be expected. On the other hand, if the prediction performance is too low, patterns cannot be detected in the latent space. This issue may be attributed to the weighting factor β, which determines the influence of the Kullback-Leibler divergence. Several approaches have been explored, such as introducing a cyclical annealing schedule <cit.> for β, to address the reconstruction and separability trade-off. However, no clear trend can be observed in Figure <ref>, which displays the absolute deviation of the reconstruction of eight random test design candidates. Each of the 28 boxes, arranged in a 7x4 grid, represents a design parameter. As a next step, we plan to introduce cyclic training for the reconstruction and prediction tasks. We believe that the VAE, as a Meta Model, will play a central role in DADO, and therefore we consider it the focus of our future work. § CONCLUSION In this experience report, we have demonstrated the feasibility of utilizing DAL to tackle DO, leveraging a large pool of non-annotated data. The developed DAL selection strategies for regression applied to multi-objective DO have shown promising results, which remain consistent across all of the experiments. The conducted experiments show that, based on the rnd_MSE metric, the performance of the random selection strategy surpasses both of our selection strategies. This outcome is, however, unsurprising, as the MSE of the prediction is computed on randomly selected design candidates from the entirety of our data-pool. Our objective is nevertheless to bias the model to perform well on the self-selected design candidates, which is advantageous for self-optimization by proposing promising design candidates only. Our assumption that the developed L2R selection strategy would outperform the L2S strategy due to its more Pareto-like selection was not confirmed by the experiments. The reason for this is that the method was developed using a multivariate Gaussian distribution, whereas in our dataset the assumption of a Gaussian is not fulfilled, especially for small draw_sizes. Both strategies presented in this article rely on the L2-norm, which selects samples circularly around an origin. To improve the robustness of the selection to differently scaled target values, one possibility is to replace the circular selection with an ellipsoid. Our study demonstrated that the selection strategies provide promising results with two target values; an extension to higher-dimensional multi-objective optimization should be straightforward. Further, we have shown the limitations of incorporating a generative model into the DADO process. We plan to develop a selection strategy based on a clustering procedure in the latent space, once we have achieved a good balance between reconstruction and disentanglement in the latent space. Subsequently, we propose the integration of the complex numerical simulations into the process described in this work, enabling real-time generation of annotations for design candidates outside the existing data-pool.
Moreover, there is clear potential in exploring the Meta Model and its hyperparameters to enhance prediction quality and accelerate DO, which was not the focus of this study. The raw data available from numerical simulations, in the form of numerical meshes, could be investigated using graph neural networks in order to determine whether another data representation is advantageous for performance predictions in DADO. With our research endeavors, we seek to contribute to the reduction of CFD and FEA simulations on HPC systems. By reducing the number of such simulations, we aim to effectively reduce the associated energy costs and mitigate the associated climate-damaging emissions, thus promoting a more sustainable and environmentally conscious approach for future computational simulations. § ACKNOWLEDGMENT We express our gratitude to Dr. Franz Götz-Hahn for the insightful discussions. ieeetr
http://arxiv.org/abs/2307.05853v2
20230712001304
GLA-GCN: Global-local Adaptive Graph Convolutional Network for 3D Human Pose Estimation from Monocular Video
[ "Bruce X. B. Yu", "Zhi Zhang", "Yongxu Liu", "Sheng-hua Zhong", "Yan Liu", "Chang Wen Chen" ]
cs.CV
[ "cs.CV" ]
GLA-GCN: Global-local Adaptive Graph Convolutional Network for 3D Human Pose Estimation from Monocular Video Bruce X.B. Yu1 Zhi Zhang1 Yongxu Liu1 Sheng-hua Zhong2 Yan Liu1 Chang Wen Chen1 1The Hong Kong Polytechnic University 2Shenzhen University August 12, 2023 ========================================================================================================================================================================== empty 3D human pose estimation has been researched for decades with promising fruits. 3D human pose lifting is one of the promising research directions toward the task where both estimated pose and ground truth pose data are used for training. Existing pose lifting works mainly focus on improving the performance of estimated pose, but they usually underperform when testing on the ground truth pose data. We observe that the performance of the estimated pose can be easily improved by preparing good quality 2D pose, such as fine-tuning the 2D pose or using advanced 2D pose detectors. As such, we concentrate on improving the 3D human pose lifting via ground truth data for the future improvement of more quality estimated pose data. Towards this goal, a simple yet effective model called Global-local Adaptive Graph Convolutional Network (GLA-GCN) is proposed in this work. Our GLA-GCN globally models the spatiotemporal structure via a graph representation and backtraces local joint features for 3D human pose estimation via individually connected layers. To validate our model design, we conduct extensive experiments on three benchmark datasets: Human3.6M, HumanEva-I, and MPI-INF-3DHP. Experimental results show that our GLA-GCN[Code is available: <https://github.com/bruceyo/GLA-GCN> ] implemented with ground truth 2D poses significantly outperforms state-of-the-art methods (e.g., up to 3%, 17%, and 14% error reductions on Human3.6M, HumanEva-I, and MPI-INF-3DHP, respectively). § INTRODUCTION 3D Human Pose Estimation (HPE) in videos aims to predict the pose joint locations of the human body in 3D space, which can facilitate plenty of applications such as video surveillance, human-robot interaction, and physiotherapy <cit.>. 3D human poses can be directly retrieved from advanced motion sensors such as motion capture systems, depth sensors, or stereotype cameras <cit.>. The 3D HPE task can be performed under either multi-view or monocular view settings. Although state-of-the-art multi-view methods <cit.> generally show superior performance than monocular ones <cit.>, ordinary RGB monocular cameras are much cheaper than these off-the-shelf motion sensors and more widely applied in real-world surveillance scenarios. Hence, 3D HPE from a monocular video is an important and challenging task, which has been attracting increasing research interest. Recent monocular view works can be grouped into model-based and model-free methods <cit.>. Model-based methods <cit.> incorporate parametric body models such as kinematic <cit.>, planar <cit.>, and volumetric models <cit.> for 3D HPE. Model-free methods can be further grouped into single-stage and 2D to 3D lifting methods. Single-stage methods estimate the 3D pose directly from images in an end-to-end manner <cit.>. 2D to 3D lifting methods have an intermediate 2D pose estimation layer <cit.>. Among these methods, 2D to 3D lifting methods implemented with ground truth 2D poses achieved better performance. 
The advantages of 2D to 3D lifting methods can be summarized as two main points: allowing make use of advances in 2D human pose detection and exploiting temporal information along multiple 2D pose frames <cit.>. For the 2D human pose detection, it has achieved remarkable progress via detectors such as Mask R-CNN (MRCNN) <cit.>, Cascaded Pyramid Network (CPN) <cit.>, Stacked Hourglass (SH) detector <cit.>, and HR-Net <cit.>. The intermediate 2D pose estimation stage via these 2D pose detectors significantly reduces the data volume and complexity of the 3D HPE task. For the temporal information, existing mainstream methods <cit.> gained noticeable improvements by feeding a long sequence of 2D pose frames to their models, among which <cit.> achieved the state-of-the-art performance via ground truth 2D poses. Recent methods <cit.> simply fine-tuned these 2D pose detectors on the target datasets and achieved great improvements for the performance of estimated 2D pose data but remain far behind the results of using ground truth 2D pose, which motivates us to concentrate on improving the 3D HPE via ground truth 2D pose data for potential improvements via future more quality estimated 2D pose data. Given the promising performance and advantages of 2D to 3D lifting methods, our work contributes the literature along this direction. For 2D to 3D lifting approaches, since <cit.> proposed Fully Connected Network (FCN), recent advanced models have three main groups: Temporal Convolutional Network (TCN)-based <cit.>, Graph Convolutional Network (GCN)-based <cit.>, and Transformer-based ones <cit.>. On the one hand, we observe that existing TCN- and Transformer-based methods can receive large receptive fields (i.e., a long 2D pose sequence) with strided convolutions. However, it can be difficult to make further intuitive designs to backtrace local joint features based on the pose structure, since the 2D pose sequence is flattened and fed to the model. Meanwhile, the estimation of different pose joints relies on the same fully connected layer, which lacks considering the independent characteristic of different pose joints. On the other hand, GCN-based models can explicitly reserve the structure of 2D and 3D human pose during convolutional propagation. However, this advantage of GCN remains under-explored. Existing GCN-based methods <cit.> also utilized a fully connected layer for the estimation of different 3D pose joints, which does not consider the structural features of GCN representations. To this end, we propose Global-local Adaptive GCN (GLA-GCN) for 2D to 3D human pose lifting. Our GLA-GCN contains two modules: global representation and local 3D pose estimation. In the global representation, we use an adaptive Graph Convolutional Network (GCN) to reconstruct the global representation of an intermediate 3D human pose sequence from its corresponding 2D sequence. For the local 3D pose joint estimation, we temporally shrink the global representation optimized by the reconstructed 3D pose sequence with a strided design. Then, an individual connected layer is proposed to locally estimate the 3D human pose joints from the shrunken global representation. Our contributions can be threefold as follows: ∙ We propose a global-local learning architecture that leverages the global spatialtemporal representation and local joint representation in the GCN-based model for 3D human pose estimation. 
∙ We are the first to introduce an individual connected layer that has two components, dividing the joint nodes and feeding each joint node representation into the 3D pose joint estimation instead of estimating from pooled features. ∙ Our GLA-GCN model performs better than corresponding state-of-the-art methods <cit.> by considerable margins, e.g., up to 3% and 17% error reductions on Human3.6M <cit.> and HumanEva-I <cit.>, respectively. § RELATED WORK 2D to 3D Lifting. 3D HPE is a traditional vision problem that has been studied for decades <cit.>. Existing works of 3D HPE from a monocular view usually target two main scenarios: single person and multi-person <cit.>. This work aims to improve the performance of single-person 3D HPE. <cit.> represent early efforts that attempt to infer 3D position from 2D projections. They usually rely on manually chosen parameters based on assumptions about pose joint mobility. Methods <cit.> estimating 3D pose from fewer frames or even a single frame have shown great progress but lack consideration of temporal information. Recent advances in 2D human pose estimation <cit.> enable 2D to 3D lifting approaches to achieve remarkable performance over other counterparts. Inspired by <cit.>, more well-designed learning architectures have been proposed to improve the performance, in particular by utilizing temporal information. These methods are also known as 2D to 3D lifting, which can be grouped into three directions: TCN-, GCN-, and Transformer-based architectures <cit.>. TCN-based methods <cit.> successfully push the performance of 2D to 3D lifting forward with a strided design for their learning architectures built upon 1D CNN layers. The strided design acts on the temporal dimension of the input, which allows the features to shrink from a 2D pose sequence to a feature embedding for the 3D pose estimation via a final fully connected layer. The number of channels for the fully connected layer is conventionally set to 1024, which is shared to predict the 3D positions of all pose joints. Varied numbers of input 2D pose frames have been extensively investigated, showing that input sequences of reasonable length can benefit the 3D pose reconstruction. The strided design can effectively reduce the feature size by shrinking the number of temporal frames along the propagation through several TCN blocks. Using this strided structure, Transformer-based methods <cit.> show promising performance, especially <cit.>, which takes advantage of weighted and temporal loss functions that help it outperform the GCN-based methods optimized with an additional motion loss <cit.>. The motion loss was shown to be not very effective in <cit.>. These observations compel us to explore effective models in the direction of GCN-based models, keeping these inspiring designs in mind but without relying on various novel loss functions. Graph Convolutional Network. A popular method for representing pose data with a GCN is the Spatial Temporal GCN (ST-GCN) <cit.>, which was originally proposed to model large receptive fields for skeleton-based action recognition. Following ST-GCN, advanced GCN models have been proposed to advance 3D HPE <cit.>. Regarding GCN-based models for 3D HPE, Ci et al. <cit.> proposed the Locally Connected Network (LCN), which takes the advantages of FCN <cit.> and GCN <cit.>. LCN has a similar design for the convolutional filters to ST-GCN <cit.>, defining a neighbor set for a node based on distance to perform the convolutional operation. Zhao et al.
<cit.> proposed an architecture called SemGCN that stacks GCN layers and flattens the output to a fully connected layer. The optimization of SemGCN is based on both joint positions and bone vectors. Choi et al. <cit.> also proposed to use a GCN to recover 3D human pose and mesh from a 2D human pose. Liu et al. <cit.> investigated how weight sharing schemes in GCNs affect the pose lifting task, showing that the pre-aggregation method leads to relatively better performance. The architecture in <cit.> is similar to that of SemGCN. The above-mentioned GCN-based methods achieved good performance with a single pose frame as input, but they did not take advantage of the temporal information in a 2D pose sequence. Taking multiple 2D pose frames as input, U-shaped Graph Convolution Networks (UGCN) <cit.> further improves the performance of GCN-based methods by paying attention to the temporal characteristics of a pose motion. Specifically, UGCN utilizes a spatial temporal GCN <cit.> to predict a 3D pose sequence from a 2D pose sequence for the reconstruction of a single 3D pose frame. A motion loss term is added that regulates the temporal trajectory of pose joints based on the predicted 3D pose sequence and its corresponding ground truth 3D pose sequence. Despite the improvements gained with novel loss terms in works such as SemGCN and UGCN, we aim to contribute to the literature of 2D-3D lifting by using the same loss term as <cit.>. In our model design, we propose to incorporate strided convolutions into a GCN-based model that represents the global information of a 2D pose sequence. Based on the structure of the GCN representation, we explicitly utilize the structured features of different pose joints to locally predict their corresponding 3D pose locations. § METHOD Given the temporal information of a 2D human pose sequence estimated from a video P={p_t,i∈ℝ^2| t=1,...,T;i=1,...,N}, where T is the number of pose frames and N is the number of pose joints, we aim to utilize this 2D pose sequence P to reconstruct the 3D coordinates of the pose joints P̅={p̅_i∈ℝ^3|i=1,...,N}. Figure <ref> shows the learning architecture of our GLA-GCN, which uses AGCN layers to globally represent the 2D pose sequence and locally estimates the 3D pose via an individual connected layer. In the remainder of this section, we introduce the detailed design of our GLA-GCN. §.§ Global Representation Adaptive Graph Convolutional Network. An AGCN block <cit.> is based on the GCN with an adaptive design that improves the flexibility of a typical ST-GCN block <cit.>. Let us represent the 2D pose sequence P as a spatial-temporal graph 𝒢={υ_t,ε_t|t=1,...,T}, where υ_t={υ_t,i|i=1,...,N} represents the pose joints and ε_t represents the corresponding pose bones. To implement a basic ST-GCN block, a neighbor set ℬ_i is first defined to indicate the spatial graph convolutional filter for a specific pose joint υ_t,i. Specifically, for the graph convolutional filter of a vertex node, we apply three distance neighbor subsets: the vertex itself, the centripetal subset, and the centrifugal subset. The definitions of the centripetal and centrifugal subsets are based on the pose frame's gravity center (i.e., the average coordinate of all pose joints). The centripetal and centrifugal subsets contain the neighboring nodes that are closer to and farther from the gravity center, respectively. Empirically, similar to 2D convolution, we set the kernel size K to 3, which leads to 3 subsets in ℬ_i.
To implement the subsets, a mapping h_t,i→{0,...,K-1} is used to index each subset with a numeric label, where centripetal and centrifugal subsets are respectively labeled as 1 and 2. Subsets that have the average distance to gravity center is indexed to 0. This graph convolutional operation can be written as f_out(υ_t,i)=∑_υ_t,j∈ℬ_i1/Z_t,jf_in(υ_t,j)W(h_t,i(υ_t,j)) where f_in:v_t,j→ℝ^2 is a mapping that gets the attribute features of joint node v_t,j and Z_t,j is a normalization term that equals to the subset’s cardinality. W(h_t,i(v_t,j)) is a weight function W(υ_t,i,υ_t,j):ℬ_i→ℝ^2 implemented by indexing a (2,K) tensor. For a pose frame, the determined graph convolution of a sampling strategy (e.g., centripetal and centrifugal subsets) can be implemented by an N× N adjacency matrix. Specifically, with K spatial sampling strategies ∑_k=1^K𝐀_k and the adaptive design, Equation <ref> can be transformed into 𝐟_out(υ_t)=∑_k=1^K (𝐀_k+𝐁_k+𝐂_k) 𝐟_in𝐖_k where Λ_k^-1/2𝐀̅_kΛ_k^-1/2 is a normalized adjacency matrix of 𝐀̅_k with its elements indicating whether a vertex υ_t,j is included in the neighbor set. Λ_k^ii=∑_j(𝐀̅_k^ij)+α is a diagonal matrix with α set to 0.001 to prevent empty rows. 𝐖_k denotes the weighting function of Equation <ref>, which is a weight tensor of the 1× 1 convolutional operation. Unlike 𝐀_k that represents the physical structure of a human pose, 𝐁_k represents learnable parameters that indicate the connection strength between pose joints, which is implemented with an N× N adjacency matrix initialized to 0. 𝐂_k performs the similar function of 𝐁_k, which is implemented by the dot product of two feature maps calculated by embedding functions (i.e., θ and ϕ) to calculate the similarity between pose joints. Calculation of 𝐂_k can be represented as 𝐂_k = SoftMax(𝐟_in^T𝐖_θ k^T𝐖_ϕ k𝐟_in) where 𝐖_θ and 𝐖_ϕ are learnable parameter of the two embedding functions, which are initialized as 0.0. Then an AGCN block is realized with a 1×Γ classical 2D convolutional layer (Γ is the temporal kernel size that we set to 9) and the defined adaptive graph convolution 𝐟_out(υ_t), which are both followed by a batch normalization layer and a ReLU layer and a dropout layer in between them. Meanwhile, a residual connection <cit.> is added to the AGCN block. Reconstruct 3D Pose Sequence. Taking the inspiration of recent works <cit.>, the introduced AGCN block is then used to extract the spatiotemporal structural information in the global graph representation, which is supervised by estimating the 3D pose sequence from the corresponding 2D sequence (see Figure <ref> [Reconstruct 3D Pose Sequence]). Here, each AGCN block has three key parameters: the number of input channels C_in, the number of output channels C_out, and the stride S of the temporal convolution, while the other parameters are kept consistent (e.g., the temporal convolution kernel size is three). Given an input C_in-dim pose representation F(C_in, T_in, N), the AGCN block derives the output C_out-dim pose F(C_out, T_out, N) via convolution on the pose structure sequence, where T_out depends on N_in and S. To reconstruct the 3D pose sequence, we first use AGCN(2,96,1) to convert the 2D pose sequence F(2,T,N) into a 96D pose representation F(96,T,N). Following the settings of related work, we set T to 243 and N to 17 for the Human3.6M dataset. That is, the input 2D pose sequence of F(2, 243, 17) is converted into a 96D pose sequence of F(96,243,17). 
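Before stacking these blocks, a rough PyTorch sketch of the spatial part of one adaptive graph convolution may help to make the f_out = Σ_k (A_k + B_k + C_k) f_in W_k form concrete; tensors are shaped (batch, C, T, N) following the F(C, T, N) convention used here, the embedding functions θ and ϕ are simplified to a temporal average, and all names are illustrative rather than taken from the authors' code.

import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    # Spatial graph convolution of an AGCN block: sum over the K partitions of (A_k + B_k + C_k).
    def __init__(self, c_in, c_out, A):           # A: tensor of shape (K, N, N) with normalized adjacencies
        super().__init__()
        self.register_buffer("A", A)
        self.B = nn.Parameter(torch.zeros_like(A))                       # learnable graph B_k, initialized to 0
        k, embed = A.size(0), max(c_in // 4, 1)
        self.theta = nn.ModuleList(nn.Conv2d(c_in, embed, 1) for _ in range(k))
        self.phi = nn.ModuleList(nn.Conv2d(c_in, embed, 1) for _ in range(k))
        self.W = nn.ModuleList(nn.Conv2d(c_in, c_out, 1) for _ in range(k))

    def forward(self, x):                          # x: (batch, C_in, T, N)
        out = 0
        for k in range(self.A.size(0)):
            q = self.theta[k](x).mean(dim=2)       # (batch, embed, N); temporal average as a simple embedding
            v = self.phi[k](x).mean(dim=2)
            C_k = torch.softmax(q.transpose(1, 2) @ v, dim=-1)           # data-dependent graph (batch, N, N)
            G = self.A[k] + self.B[k] + C_k                              # combined graph
            out = out + self.W[k](torch.einsum("bctn,bnm->bctm", x, G))  # aggregate over joint nodes
        return out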
Then, we stack iterative layers of AGCN(96,96,1) to construct a deep spatiotemporal structural representation of the 96D pose sequence. The output of the last AGCN block is fed into an AGCN(96,3,1) to estimate the 3D pose sequence based on the 96D joint representation and derive F(3,243,17). Then, letting ⃛p_t,i∈ℝ^3 be the estimated 3D position of the i-th joint at time t, we minimize the difference between the estimated 3D pose sequence and the ground truth 3D pose sequence:

ℒ_global = 1/T 1/N ∑_t=1^T ∑_i=1^N ‖⃛p_t,i - p_t,i‖_2

Strided Learning Architecture. Inspired by the TCN-based approaches <cit.>, we further adapt the strided learning architecture to the AGCN model, using strided convolutions to reduce long time sequences and aggregate temporal information near time t for pose estimation. The gray module in Figure <ref> (Strided Learning) illustrates the design of the strided AGCN modules. Each strided AGCN module has two consecutive AGCN blocks, which are surrounded by residual connections <cit.>. We perform strided convolutions at the second AGCN block of each strided AGCN module to gradually shrink the feature size along the temporal dimension. The input of the first strided AGCN module is the intermediate output of the 3D pose sequence reconstruction, i.e., the extracted F(96,243,17). After propagation through the first strided AGCN module, the 96D pose sequence is shrunken to F(96,81,17). Then, we repeatedly apply subsequent AGCN layers until the feature size is shrunken to 96×1×17. In this way, the pattern of the temporal neighbors in the pose sequence is aggregated for the subsequent local 3D pose joint estimation, which estimates the 3D pose of the central time step. §.§ Local 3D Pose Joint Estimation Based on the above-mentioned strided AGCN modules, the input 2D pose sequence represented as F(96,243,17) can be transformed into a feature map F(96,1,17). The next step is to estimate the 3D positions of the joint nodes based on this feature map. Individually Connected Layers. Existing TCN- and GCN-based methods <cit.> usually flatten the derived feature maps and use a global skeleton representation consisting of all joint nodes to estimate every single joint, neglecting the correspondence between joints and the respective vectors in the feature maps. Unlike existing works, we believe the global knowledge of the temporal and spatial neighborhoods has already been aggregated via the proposed global representation. Thus, it is crucial to focus on the spatial information of the corresponding joint node to infer its 3D position. Based on this idea, this paper first proposes an individual connected layer to estimate the 3D position of every single joint based on the corresponding joint node feature F(96,1,1), instead of the pooled representation of all joint nodes F(96,1,17). Mathematically, the individual connected layer can be denoted as:

ṗ_i^(unshared) = v_i 𝐖_i + 𝐛_i

where the estimated 3D position of joint i is denoted by ṗ_i and v_i represents the flattened features of joint node i. The weight parameters of the individual connected layer are represented by 𝐖_i ∈ ℝ^96 × 3, and its bias parameters by 𝐛_i ∈ ℝ^1 × 3. Since the weight 𝐖_i and bias 𝐛_i are not shared between joints, we refer to the above individually connected layers as unshared individually connected layers. On top of that, we find that individually connected layers in the unshared fashion may ignore the rules shared between joints in 2D to 3D pose lifting, resulting in overfitting to joint-specific distributions.
Therefore, we further designed shared individually connected layers: ṗ_i^(shared)=v_i 𝐖_s + 𝐛_s The weight parameters of the shared individual connected layer is represented by 𝐖_s and 𝐖_s∈ℝ^96 × 3, whose bias parameter is 𝐛_s and 𝐛_s∈ℝ^1 × 3. Then, the 3D pose estimation of each joint can be formulated as the weighted average of the estimated results from the shared and unshared individually connected layers: p̅_i=λṗ_i^(unshared) + (1-λ)ṗ_i^(shared) Here, λ is the parameter that weighs the shared individual connected layer and the unshared individual connected layer. When λ is 0.0, the model uses only the shared individual connected layer for estimation, and when λ is 1.0, the model uses only the unshared individual connected layer for prediction. Especially, for convenience, the connected layers are implemented via a 1D CNN layer in this paper. Finally, we wish to minimize the difference between the estimated joint pose p̅_i and the ground truth joint pose p_i via L_local: ℒ_local=1/N∑_i=1^Np̅_i-p_i_2 During the training process, we optimize ℒ_global and ℒ_local in two stages. In the first stage, we minimize ℒ_global+ℒ_local to optimize the model using globally supervised signal guidance. In the second stage, we minimize ℒ_local to improve the 3D pose estimation performance. § EXPERIMENTS §.§ Datasets and Evaluation Our experiments are based on three public datasets: Human3.6M <cit.>, HumanEva-I <cit.>, and MPI-INF-3DHP <cit.>. With respect to Human3.6M, the data of subjects S1, S5, S6, S7, and S8 are applied for training, while that of S9 and S11 are used for testing, which is consistent with the training and validation settings of existing works <cit.>. In terms of HumanEva-I, following <cit.> and <cit.>, data for actions “walk” and “jog” from subjects S1, S2, and S3 are used for training and testing. For MPI-INF-3DHP, we follow the experimental setting of the recent state-of-the-art <cit.> for a fair comparison. Standard evaluation protocols: Mean Per-Joint Position Error (MPJPE) and Pose-aligned MPJPE (P-MPJPE), respectively known as Protocol#1 and Protocol#2, are used for both datasets. The calculation of MPJPE is based on the mean Euclidean distance between the predicted 3D pose joints aligned to root joints (i.e., pelvis) and the ground truth 3D pose joints collected via motion capture, which follows <cit.>. Comparing with MPJPE, P-MPJPE is also based on the mean Euclidean distance but has an extra post-processing step with rigid alignments (e.g., scale, rotation, and translation) to the predicted 3D pose. P-MPJPE leads to smaller differences with the ground truth and it follows <cit.>. §.§ Implementation Details We introduce the implementation detail of our GLA-GCN from three main perspectives: 2D pose detections, model setting, and hyperparameters for the training process. For fair comparison, we follow the 2D pose detections of Human3.6M <cit.> and HumanEva-I <cit.> used in <cit.>, which are detected by CPN <cit.> and MRCNN <cit.>, respectively. The CPN's 2D pose detection has 17 joints while the MRCNN's 2D pose detection has 15 joints. Besides, we also conduct experiments for the ground truth (GT) 2D pose detections of the two datasets. Based on the specific structure of 2D pose, we implement the graph convolutional operation filters of AGCN blocks, e.g., the sizes of 𝐀_k, 𝐁_k, 𝐂_k are set to 17×17, 15×15, and 17×17 for Human3.6M, HumanEva-I, and MPI-INF-3DHP, respectively. The designed model has some key parameters that can be adjusted to get better performance. 
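Returning briefly to the model itself, the following PyTorch sketch expresses the unshared and shared individually connected layers described above as 1D convolutions over the 17 joint nodes, blended by λ; this is a simplified reading of the description, not the authors' released implementation, and all names are illustrative.

import torch
import torch.nn as nn

class LocalPoseHead(nn.Module):
    # Weighted blend of unshared and shared individually connected layers.
    def __init__(self, channels=96, n_joints=17, lam=0.5):
        super().__init__()
        self.lam = lam
        # Unshared: one (96 -> 3) projection per joint, realized as a grouped 1D convolution.
        self.unshared = nn.Conv1d(channels * n_joints, 3 * n_joints, kernel_size=1, groups=n_joints)
        # Shared: a single (96 -> 3) projection applied to every joint node.
        self.shared = nn.Conv1d(channels, 3, kernel_size=1)

    def forward(self, f):                 # f: (batch, 96, 17), the feature map after temporal shrinking
        b, c, n = f.shape
        p_unshared = self.unshared(f.permute(0, 2, 1).reshape(b, c * n, 1)).view(b, n, 3)
        p_shared = self.shared(f).permute(0, 2, 1)                   # (batch, 17, 3)
        return self.lam * p_unshared + (1 - self.lam) * p_shared     # estimated 3D joint positions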
Regarding these key parameters, we conduct ablation experiments with different numbers of channels and 2D pose frames (i.e., C_out and T, respectively) on Human3.6M. To verify the proper design of the proposed model regarding the strided design and the individual connected layer, we perform further ablation experiments on both datasets. In terms of the hyperparameters, we set the batch size to 512, 256, and 256 for Human3.6M, HumanEva-I, and MPI-INF-3DHP, respectively. Consistent with <cit.>, we adopt the Ranger optimizer and train the model with the MPJPE loss for 80 and 1000 epochs on Human3.6M and HumanEva-I, respectively, using an initial learning rate of 0.01. Meanwhile, we set the dropout rate to 0.1. For both the training and testing phases, data augmentation is applied by horizontally flipping the pose data. All experiments are conducted with two GeForce RTX 3090 GPUs. §.§ Comparison with State-of-the-Art Tables <ref> and <ref> show the comparison with state-of-the-art methods on Human3.6M and HumanEva-I under Protocol#1 and Protocol#2, respectively. When implemented with GT 2D poses, optimized either with or without the loss for reconstructing the intermediate 3D pose sequence (defined in Equation <ref>), our GLA-GCN outperforms the state-of-the-art method <cit.> in terms of the averaged results of the two evaluation protocols. Figure <ref> shows the training process of our GLA-GCN on Human3.6M, which indicates that our model converges quickly without observable overfitting. For the HumanEva-I dataset, the results of Protocol#2 (see Table <ref>) also show that our method is superior to state-of-the-art methods by just using the MPJPE loss. We also conduct a qualitative comparison with the state-of-the-art method that does not have a 3D pose sequence reconstruction module <cit.>. Figure <ref> shows the visualized improvements over <cit.>. For example, in the “S11 WalkT.” action, the visualizations of the right-hip and right-hand joints estimated with our method and the ground truth 3D pose are clearly separate, whereas those of <cit.> are connected to each other. Moving on to the MPI-INF-3DHP dataset in Table <ref>, we can see a significant decline in MPJPE with our model. Compared with the state-of-the-art method, P-STMO <cit.>, the MPJPE of our model decreases from 32.2mm to 27.76mm, representing an error reduction of approximately 14%. Comparing the performance obtained with estimated 2D poses (i.e., CPN or HR-Net pose data) is also regarded as important by most existing works. However, models such as <cit.> can perform well on relatively low-quality estimated 2D pose data but fail to generalize this good performance to high-quality 2D data (i.e., ground truth poses). We note that our method underperforms on relatively low-quality estimated 2D poses when compared with some recent methods <cit.>. In the following, we conduct an in-depth discussion of this issue. Discussion: the Effect of 2D Pose Quality. Tracing back to the first work on 3D pose lifting, Martinez et al. <cit.> used the SH 2D pose detector fine-tuned on the Human3.6M dataset to improve the 3D HPE (significantly, from 67.5mm to 62.9mm average MPJPE), indicating that the quality of the 2D pose can be essential for 3D HPE. Recent works <cit.> took advantage of the advanced 2D pose detector HR-Net and achieved better performance (e.g., 39.8mm average MPJPE). Zhu et al. <cit.> also successfully improved the result to 37.5mm average MPJPE by fine-tuning the SH network <cit.> on Human3.6M, which nevertheless remains far behind the results obtained with GT 2D poses.
A similar observation also applies to the HumanEva-I and MPI-INF-3DHP datasets. As shown in Table <ref>, our method yields a remarkable 40% drop in P-MPJPE on the HumanEva-I dataset. Given the GT 2D pose, the P-MPJPE goes from 15.4mm to 9.2mm compared with the best state-of-the-art algorithm. On MPI-INF-3DHP, the MPJPE goes from 32.2mm to 27.76mm. Hence, improving the performance on estimated poses relies purely on preparing quality 2D pose data, which can be easily achieved by either using an advanced 2D pose detector that can generate pose data similar to the GT 2D pose or just fine-tuning the existing pose detectors. On the other hand, it remains unclear in which scenarios the 3D pose reconstructed with advanced pose detectors is beneficial. One scenario is 3D human pose estimation in the wild, which is usually evaluated with qualitative visualization <cit.>. However, whether the 3D pose reconstructed from the estimated 2D pose can contribute to pose-based tasks remains under-explored. Given that improving the performance of the estimated 2D pose is straightforward and that its usage still lacks a good application scenario, we argue that comparisons based on the GT 2D pose reflect a model's 3D HPE ability more properly than comparisons based on the estimated 2D pose. §.§ Ablation Studies In the following, we ablate our model design ingredients (i.e., AGCN layers, strided design, and individual connected layer). To validate the appropriateness of using AGCN layers, we compare our model with a version implemented with ST-GCN <cit.> blocks, which corresponds to the ablation of AGCN. As shown in row #1 of Table <ref>, the results of Protocol#2 on the two datasets consistently indicate that using AGCN blocks achieves better performance. For the ablation of the strided design, we perform average pooling on the second (i.e., temporal) dimension of the feature map. Results in row #2 of Table <ref> indicate that this is not as effective as the strided design. Without the strided design, we not only obtain a larger feature map representation, i.e., increased from F(C_out,1,N) to F(C_out,T,N), but the 3D HPE is also negatively affected. To verify the design of our individual connected layer, we compare it with the implementation of a fully connected layer that takes the expanded feature map as its input. The results in row #3 of Table <ref> indicate that our individual connected layer makes better use of the structured representation of the GCN and thus significantly improves the performance. The differences in the features before the prediction layers (i.e., individually and fully connected layers) are respectively visualized in the upper and lower rows of Figure <ref>. The visualization indicates that our individual connected layer can make predictions based on more interpretable features that cannot be traced when using a fully connected layer. For example, the arm and leg joints show relatively high independence from other joints for the actions “eating” and “walking”, respectively. Feeding these independent features of all joints to a fully connected layer will interfere with the prediction of a specific joint. We further verify the advantage of this structured representation of the GCN by swapping the left and right limbs of the 2D pose input data, which breaks the pose structure. Results in row #4 of Table <ref> show that breaking the pose structure affects the 3D pose estimation. This observation, in turn, further indicates the proper design of our individual connected layer.
Discussion: Limitation on Model Size. Similar to state-of-the-art methods <cit.>, we note that our method faces the issue of model size. Specifically, the lower part of Table <ref> shows that our model achieves better performance than state-of-the-art methods <cit.> but uses slightly more model parameters. We aim to tackle this issue in the future by using techniques such as pruning. § CONCLUSION This paper proposes a GCN-based method that utilizes the structured representation for 3D HPE in the 2D to 3D lifting paradigm. The proposed GLA-GCN globally represents the 2D pose sequence and locally estimates the 3D pose joints via an individual connected layer. Results show that our GLA-GCN outperforms corresponding state-of-the-art methods implemented with GT 2D poses on the Human3.6M, HumanEva-I, and MPI-INF-3DHP datasets. We verify the appropriateness of the model design with extensive ablation studies and visualizations. In the future, we will tackle the issue of the parameter efficiency of our model via tuning techniques <cit.>. Meanwhile, we will consider its effect on application scenarios such as human behavior understanding <cit.>, aim to improve the results on estimated 2D poses by preparing high-quality 2D pose data via fine-tuned 2D pose detectors (e.g., the SH detector <cit.>, OpenPose <cit.>, and HR-Net <cit.>), and investigate the effects of other loss terms (e.g., based on bone features <cit.> and motion trajectory <cit.>). ieee_fullname
http://arxiv.org/abs/2307.06176v1
20230712140334
Tunneling, Page Curve and Black Hole Information
[ "Chong-Sun Chu", "Rong-Xin Miao" ]
hep-th
[ "hep-th", "gr-qc", "hep-ph" ]
http://arxiv.org/abs/2307.05560v1
20230709161935
Automatic Coding at Scale: Design and Deployment of a Nationwide System for Normalizing Referrals in the Chilean Public Healthcare System
[ "Fabián Villena", "Matías Rojas", "Felipe Arias", "Jorge Pacheco", "Paulina Vera", "Jocelyn Dunstan" ]
cs.CL
[ "cs.CL" ]
The disease coding task involves assigning a unique identifier from a controlled vocabulary to each disease mentioned in a clinical document. This task is relevant since it enables information extraction from unstructured data in order to perform, for example, epidemiological studies about the incidence and prevalence of diseases in a given context. However, the manual coding process is subject to errors, as it requires medical personnel to be competent in coding rules and terminology. In addition, this process consumes a lot of time and energy, which could be allocated to more clinically relevant tasks. These difficulties can be addressed by developing computational systems that automatically assign codes to diseases. We therefore propose a two-step system for automatically coding diseases in referrals from the Chilean public healthcare system. Specifically, our model uses a state-of-the-art NER model for recognizing disease mentions and a search engine system based on Elasticsearch for assigning the most relevant codes associated with these disease mentions. The system's performance was evaluated on referrals manually coded by clinical experts. Our system obtained a MAP score of 0.63 at the subcategory level and 0.83 at the category level, close to the best-performing models in the literature. This system could be a support tool for health professionals, optimizing the coding and management process. Finally, to guarantee reproducibility, we publicly release the code of our models and experiments. § INTRODUCTION Clinical text represents a significant proportion of patients' health records and is commonly found in a non-structured format. These texts pose particular challenges due to the extensive use of abbreviations, the variability of clinical language across medical specialties, and their restricted availability for privacy reasons <cit.>. Due to the complexity of its analysis, this data is commonly discarded in projects that seek to support clinical decision-making <cit.>. Clinical coding involves mapping medical texts into codes using a controlled vocabulary that is consistent across different departments, hospitals, or even countries <cit.>. The World Health Organization maintains an open, controlled vocabulary called the International Classification of Diseases (ICD), which is used in almost every country. Currently, the most widely used revision is the tenth (ICD-10) <cit.>, and the eleventh revision, which will include not only diseases, is under development <cit.>. Regarding the Chilean public health system, the ICD-10 terminology is used for coding hospital discharges (morbidity coding by each healthcare provider) and deaths (mortality coding by the Ministry of Health). Having patients' data normalized using these controlled vocabularies makes it possible to summarize information automatically without dealing with the noisiness of free-text data. The already-digested information from the normalized data empowers data analysts who are not experts in NLP to add more complex information into their workflows.
The Waiting Time Management System (SIGTE, in Spanish) contains electronic records of referrals from the Chilean Waiting List, which is the system that manages the high existing demand for specialist consultations <cit.>. These data, provided by 29 health services, contain information about patients' medical diagnoses but are not standardized <cit.>. As of November 2022, SIGTE recorded 25,374,491 waiting list referrals, of which 18,716,629 correspond to "new specialty referrals" and are associated with patient pathologies. Of these referrals, approximately 5,760,750 (30.7%) have an ICD-10 code. This count was obtained by searching the free-text diagnosis fields with a regular expression matching the ICD-10 code format. Clinical experts perform the disease coding task manually, which is not optimal for several reasons. Firstly, the process is prone to errors: medical personnel must have significant competence in coding rules and a thorough knowledge of specialized terminologies, such as ICD, which are also updated frequently. In other words, expert coding staff must be familiar with the clinical field, analytical and focused, and have fundamental skills for inspecting and analyzing highly specialized texts. In addition, manual coding is time-consuming <cit.>; a support system could speed it up, and the time saved could be used for other tasks relevant to clinical decision-making. These difficulties can be efficiently addressed using computational systems capable of automatically performing the coding task using NLP. Currently, most automatic coding systems are based on an end-to-end architecture built on deep learning techniques. Although these systems have boosted the performance of several coding tasks, they cannot incorporate context-specific rules, such as code priority, medical assumptions, code definitions, and synonyms. In this work, we developed an automated disease coding system and used it to code the entire historical waiting list in Chile, covering a total of 18,716,629 referrals. Our system is based on two steps: first, the automatic extraction of diseases is addressed using a state-of-the-art NER model, and then, using a search engine, the most probable code for each disease found is identified. Finally, we explored the potential applications derived from this system and studied in more depth the most frequent diseases in the country today. § RELATED WORK The disease coding task involves transforming clinical texts, commonly written by physicians in an unstructured format, into codes following medical terminologies. This is not an easy task, since a medical ontology such as ICD in Spanish has 14,668 codes, an example of extreme multi-label classification <cit.>. We have identified two major groups of computational methods proposed to solve this task: rule-based coding and neural network-based coding. §.§ Rule-based Models This approach involves designing hand-crafted rules to represent and simulate the flow that clinical experts follow when assigning codes. Most of the studies are based on using regular expressions and keywords to transform diseases found in the text into their respective codes. However, these methods are not feasible at scale, since manually capturing all the relations between texts and codes is time-consuming and complex. Different approaches based on machine learning have been proposed to address this issue.
In this way, features extracted from statistical models such as decision trees and support vector machines, among others, are incorporated into the manual rules <cit.>. Another method is to create a list of synonyms of the original text to calculate a word distance with respect to the code descriptions of the terminology. Despite their disadvantages, these methods have yielded high results in the literature, effectively supporting manual coding performed by humans <cit.>. §.§ Models based on neural networks Deep learning-based methods have significantly improved the disease coding task in recent years. The advantage of using these models is that the healthcare-specific domain knowledge is no longer needed for the manual development of complex rules. In contrast, these methods can automatically build features powerful enough to capture the relationships between clinical texts and their respective codes. Most proposed systems are based on posing the problem as a multi-label text classification task <cit.>. Thus, the algorithm's input is text, while the output can be one or more codes associated with diseases. Unlike traditional text classification problems, this problem is considered extreme since the number of possible labels increases to thousands (depending on the terminology). The main disadvantage of this approach is that manual coding requires incorporating context-specific rules, such as code priority, medical assumptions, code definition, and synonyms, among other types of information, to improve system performance. In the case of deep learning, this is not considered since the systems are commonly created using an end-to-end approach, meaning that no human knowledge is involved when creating the features or making the predictions. To solve the previous problem, we followed another approach used in the literature, which consists of mixing the previous ideas using two sequential steps; the first one uses deep learning algorithms, while the second allows us to incorporate medical knowledge into the computational system. Firstly, we used a Named Entity Recognition model for automatically recognizing sequences of words in the text which are associated with diseases. Then, each disease found is associated with its most likely ICD-10 code, a task better known as Entity Linking <cit.>. Nowadays, the most commonly used methods for solving the NER task are based on deep neural networks such as transformers-based models or recurrent neural networks, while a frequent technique for assigning codes is to use distance algorithms or search engines to compare the diseases found with the code descriptions of the terminology. §.§ Commercial Systems A handful of commercial products offer information extraction from clinical data, including automatic coding. These products usually are delivered as services and offered by leading cloud providers such as Amazon Web Services with Amazon Comprehend Medical[<https://aws.amazon.com/comprehend/medical/>], Google Cloud with Google Cloud Healthcare Data Engine[<https://cloud.google.com/healthcare>] and Microsoft Azure with Azure Cognitive Service for Language[<https://azure.microsoft.com/en-in/products/cognitive-services/language-service>]. The problem with these services is that they do not offer automatic coding for languages other than English. Data privacy concerns may arise from using this third-party software to extract patients' information. 
Some healthcare providers may prohibit sending data to systems outside the primary source due to potential cybersecurity issues. § DATA AND METHODS The Chilean Waiting List is characteristic of the public healthcare system. This list arises due to the high demand for medical care and the limited capacity of the public health system to meet it. Entry on the waiting list begins when a patient goes to a primary or secondary care physician to treat a pathology. The patient has two possible paths: if the pathology is included in the “Garantías Explícitas en Salud” (GES) program, the patient enters a process where his or her health problem is assured a maximum waiting time for medical attention. If the GES program does not cover the pathology, the referral is classified into one of five options: New Specialty Consultations (CNE), Follow-up Consultations (CCE), Diagnostic Procedures (Proc), Surgical Intervention (IQ) and Complex Surgical Intervention (IQC). In any of these alternatives, the patient is placed on a waiting list and must wait a variable amount of time to receive medical attention from a specialist. The Chilean Waiting List comprises 25,374,491 referrals, divided into five categories: 18,716,629 correspond to CNE-type referrals, 4,391,257 to Proc-type referrals, 2,222,545 to IQ-type referrals, 39,266 to CCE-type referrals, and finally, 4,794 to IQC-type referrals. In particular, this work focuses on CNE-type referrals. Within the Chilean Waiting List database, 73 attributes are separated into two main sets. The first set corresponds to the attributes associated with the person (date of birth, sex, national identifier), while the second set corresponds to the administrative information associated with the referral given to the person (date of admission, date of discharge, the benefit provided, specialty, diagnostic suspicion, and diagnostic confirmation). For the analysis of the diagnoses present in the referrals, two free-text attributes representing medical diagnoses are considered: diagnostic suspicion and diagnostic confirmation. Table <ref> shows the frequency of referrals according to medical specialty, while Table <ref> shows corpus statistics of the texts analyzed. We used 10,000 referrals from the historical Chilean Waiting List to train the NER module for disease recognition. As detailed in <cit.>, these referrals were previously consolidated by a team of clinical experts, thus constituting the so-called Chilean Waiting List corpus. In addition, we performed rounds of evaluation of the NER performance, identifying diseases that the model failed to recognize. These diseases were then incorporated as new examples in the model training process. § PROPOSED SYSTEM To code the narratives, we first used a NER model to automatically recognize sequences of words in the text associated with diseases. Then, each disease found is associated with its most likely ICD-10 code through a search engine. Figure <ref> shows an overview of our proposed system. §.§ NER Model As shown in Figure <ref>, the input of our system is the referral written by the physician in an unstructured format. These texts are used as input for the automatic disease recognition model. In particular, this NER model is based on the work proposed in <cit.>, where a simple but highly effective architecture for medical entity recognition is introduced.
This model, named Multiple LSTM-CRF (MLC), is a deep neural network system composed of three main modules, emphasizing the impact of using domain-specific contextualized embeddings. The first layer of the MLC approach, the “stacked embedding layer”, transforms the texts associated with the diagnoses into a vector representation using character-level contextual embeddings and static word embeddings, both trained in the clinical domain. Then, in the encoding layer, a recurrent neural network is used to capture long-distance dependencies between words in the sentence, thus obtaining a better context to improve the previous layer's representations. Finally, the classification layer assigns the most probable label to each word in the diagnosis using the CRF algorithm, identifying which parts of the text correspond to the beginning and end of a disease. Regarding the experimental setup, the disease model was trained for 150 epochs using an SGD optimizer with mini-batches of size 32 and a learning rate of 0.1. As mentioned, to encode sentences we used two types of representations: a 300-dimensional word embedding model trained on the Chilean Waiting List corpus[<https://zenodo.org/record/3924799>] and character-level contextualized embeddings retrieved from the Clinical Flair model <cit.>. To implement the model and perform our experiments, we used the Flair framework, widely used by the NLP research community <cit.>. §.§ Search Engine The output of the NER step is a list containing all the diseases mentioned in the referral. This second module aims to assign an ICD-10 code to each disease found, which can be used later for clinical decisions or management. The assignment of the ICD-10 code is done through a search engine tool based on Elasticsearch[Registered trademark of Elasticsearch B.V. Available at <https://www.elastic.co/elasticsearch/>], an open-source search and analytics engine. This system computes similarity scores between the disease mention and each of the codes of the ICD-10 tabular list. Unlike word-distance comparison algorithms, this search engine maintains an index in which each ICD-10 disease is represented through a series of synonymous phrases extracted from different sources of information, better simulating the process followed by clinical experts to determine the code of a disease. For example, in the index, the code “K02.2” contains the canonical code description “Caries of cementum” and multiple synonymous definitions, such as “Cement caries” and “Root caries”. This is important, as disease mentions found in unstructured diagnoses are rarely equivalent to the exact definition. The sources of information used for the extraction of synonymous disease definitions were as follows: Tabular list of ICD-10 terminology: This is the basis of the index, which tells us which codes we will assign to the disease mentions. Alphabetical index of ICD-10 terminology: The guide for the manual assignment of codes to diseases, obtained by web scraping from the website of the Spanish Ministry of Health [<https://eciemaps.mscbs.gob.es/ecieMaps/browser/index_10_2008.html>]. IRIS dictionary: It maps natural language sentences to an ICD-10 code. This dictionary was built from the mortality coding rounds conducted in the Chilean Department of Statistics and Health Information. UMLS: Spanish definitions from multiple vocabularies were extracted from the Metathesaurus database.
DEIS abbreviations: Manually constructed list of abbreviations and their expansions. §.§ Experiments In our experiments, we measure how well the predictions made by the model match the decisions made by clinical experts. To this end, a subset of the referrals described in Section <ref> was selected to be manually coded by a team of two clinical coders. The manual annotation process and system validation steps are described below. §.§.§ Manual coding The clinical experts carried out the annotation process using Excel software. For this purpose, a file containing a unique identifier for each referral, the associated diagnostic suspicion, and a blank column for the actual coding was provided to the coders. In this way, the expert coders identified disease codes in 1,188 clinical narratives from the Chilean Waiting List for a new specialty. It is important to mention that in this process, codes were identified at the referral level, not at the entity level; therefore, it is not possible to determine the performance of the NER model in this experiment. In future work, specialized annotation software such as INCEpTION could be used, as proposed in the work of <cit.>. This software would make it possible to identify which parts of the text refer to diseases. On the other hand, only diseases were coded, but future research could extend this to new entity types, such as clinical procedures or clinical findings. §.§.§ Metric The Mean Average Precision (MAP) metric is used to evaluate the performance of our coding system. This metric is widely used in works that address the same automatic coding task. It is defined as follows: AveP = (∑_k=1^n P(k)· rel(k)) / R, where P(k) represents the precision at position k, rel(k) is an indicator function equal to 1 if the element at rank k is relevant and 0 otherwise, n is the number of codes returned for a referral, and R is the number of relevant codes for that referral. The MAP is the mean of AveP over all evaluated referrals. The MAP is computed using the Python implementation of the TREC evaluation tool by <cit.>, with an adaptation in which the coded diagnoses are ordered by a ranking, which for this work is the order in which the mention was found and the code was subsequently assigned. § RESULTS §.§ Coding Performance The ICD-10 consists of a single coded list of three-character categories, each of which can be further subdivided into up to ten four-character subcategories. We computed the MAP metric over the test set at the category (e.g. K02) and subcategory (e.g. K02.2) levels. We achieved a MAP of 0.83 at the category level and 0.63 at the subcategory level. To underline the difficulty of achieving outstanding results in coding, we also analyzed the agreement between the clinical experts themselves. The expert coders achieved an agreement MAP of 0.75 at the subcategory level and 0.83 at the category level. Several reasons, such as subjectivity in clinical judgment, the complexity of coding guidelines, the evolving nature of medicine, time pressure and workload, personal bias, and lack of standardization, could explain the low agreement score. § ERROR ANALYSIS To better understand the errors made by our coding system, we performed a granular analysis of the scores obtained across the different specialties in the corpus. Tables <ref> and <ref> show the top 14 best and 10 worst scores according to the specialties.
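Before turning to the per-specialty discussion, the metric defined above can be made concrete with a minimal, self-contained sketch of the per-referral average precision and the resulting MAP over a test set. The function, variable names, and toy data below are illustrative; the reported results were obtained with the TREC evaluation tool cited above.

```python
from statistics import mean

def average_precision(ranked_codes: list[str], gold_codes: set[str]) -> float:
    """AveP for one referral: precision at each rank that hits a gold code,
    summed and divided by the number of relevant (gold) codes."""
    if not gold_codes:
        return 0.0
    hits, score = 0, 0.0
    for k, code in enumerate(ranked_codes, start=1):
        if code in gold_codes:
            hits += 1
            score += hits / k  # P(k) * rel(k)
    return score / len(gold_codes)

# Toy example: two referrals with ranked predictions and expert gold codes
predictions = [["K02.2", "K04.0"], ["E11.9"]]
gold = [{"K02.2"}, {"E10.9"}]
map_score = mean(average_precision(p, g) for p, g in zip(predictions, gold))
print(f"MAP = {map_score:.2f}")
```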
We noted that in the top 14 best specialties the diagnostic suspicions registered in the referral were written straightforwardly and were specific diagnoses, such as “lipoma”, “caries”, and “nephrolithiasis”, avoiding other clinical information like comorbidity, medication intake, or some other medical history. Furthermore, it can be noted that half of these referrals are related to dental diagnosis. On the other hand, the top 10 worst specialties share in common that most of the diagnoses are very unspecific, with the incorporation of non-medical information such as the patient's phone number, patient's address, physician's name, the specialty the patient is referred to and information about comorbidity. Besides, several referrals are without a diagnosis but with the text “unspecific consultation” or “other”. § MODEL DEPLOYMENT AND USE CASES Due to internal regulations, we could not send patients' data to third-party systems such as cloud providers or academic supercomputing clusters <cit.>. For this reason, we deployed the whole coding system on-premise on a bare metal machine with a GPU compute module (NVIDIA RTX A4000[The compute module has 16 GB of GPU memory and 6.144 CUDA cores. More information at <https://www.nvidia.com/en-us/design-visualization/rtx-a4000/>]) to process the coding requests from the whole department efficiently. The complete automatic coding system was deployed as a pair of microservices running inside containers to ease portability. One container hosts the NER module and exposes an API as a web service listening to disease-mention detection requests. The other container consists of the recommended implementation of the Elasticsearch software, which also exposes its API as a web service listening to mention-coding requests. To code the waiting list and schedule recurrent coding when new data arrives, we used the KNIME[Registered trademark of KNIME GmbH. Available at <https://www.knime.com/>] software, a visual-programming data mining platform. We chose this software because of its ease of use for non-expert developers. The workflow starts with the raw waiting list, which is first passed through the NER module to detect disease mentions, and then each mention is sent to the coding module to assign the most relevant code. The automatic coding result from the workflow mentioned above is persisted on a table inside a database that stores each disease mention for each referral along with the predicted code from the system. § CONCLUSIONS In this work, we created a nationwide system to improve the management of the Chilean public healthcare system. Specifically, we addressed the challenge of creating an automated system to code the diseases present in the Chilean Waiting List referrals. We developed and validated a model based on two steps: a NER model to recognize disease mentions and a search engine based on Elasticsearch to assign the codes to each disease. This mapping system was enriched with several terminology resources used in real life by manual coders to assign codes, thus partially simulating the pipeline followed by these professionals when solving this task. The system allowed us to assign codes to 18,716,629 referrals, thus demonstrating its efficiency and effectiveness. The performance obtained in our experiments was 0.83 according to the MAP score, which is close to the most advanced systems currently in the coding task. 
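As an illustration of how the two deployed services can be consumed by such a coding workflow, the sketch below chains a request to the NER web service with a query against the Elasticsearch index. The host names, ports, endpoint paths, field names, and response shapes are assumptions made for illustration only, not the actual deployed interfaces.

```python
import requests

NER_URL = "http://localhost:8000/predict"       # assumed NER microservice endpoint
ES_URL = "http://localhost:9200/icd10/_search"  # assumed Elasticsearch index

def code_referral(diagnosis_text: str) -> list[dict]:
    """Detect disease mentions, then retrieve the top ICD-10 candidate for each."""
    # Assumed response shape: {"mentions": ["...", ...]}
    mentions = requests.post(NER_URL, json={"text": diagnosis_text}, timeout=30).json()["mentions"]
    results = []
    for mention in mentions:
        # Match the mention against the synonym phrases stored in the index
        query = {"query": {"match": {"synonyms": mention}}, "size": 1}
        hits = requests.post(ES_URL, json=query, timeout=30).json()["hits"]["hits"]
        if hits:
            results.append({"mention": mention, "code": hits[0]["_source"]["code"]})
    return results

print(code_referral("caries dentaria pieza 4.6"))
```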
The model was deployed into production in the Department of Health Statistics and Information Systems of the Ministry of Health of Chile. The use of this system could be an important support for the management of waiting lists. In addition, since 75% of the Chilean population is in the public healthcare system, the analysis of the new specialty consultations can be used for epidemiological studies, such as the one done on the incidence of psoriasis <cit.>. § ACKNOWLEDGEMENTS This work was funded by ANID Chile: Basal Funds for Center of Excellence FB210005 (CMM); Millennium Science Initiative Program ICN17_002 (IMFD) and ICN2021_004 (iHealth), Fondecyt grant 11201250, and National Doctoral Scholarship 21220200. We also acknowledge Daily Piedra and Marcela Carmona for their work on annotating and coding the test dataset.
http://arxiv.org/abs/2307.07485v2
20230714171208
Generating Entanglement by Quantum Resetting
[ "Manas Kulkarni", "Satya N. Majumdar" ]
quant-ph
[ "quant-ph", "cond-mat.stat-mech" ]
http://arxiv.org/abs/2307.05885v2
20230712031749
Around the dynamical Mordell-Lang conjecture
[ "Junyi Xie" ]
math.NT
[ "math.NT", "math.AG", "math.DS" ]
]Around the dynamical Mordell-Lang conjecture Beijing International Center for Mathematical Research, Peking University, Beijing 100871, China [email protected] The author is supported by the NSFC Grant No.12271007. alpha [ Junyi Xie August 12, 2023 =================== There are three aims of this note. The first one is to report some advances around the dynamical Mordell-Lang (=DML) conjecture. Second, we generalize some known results. For example, the Dynamical Mordell-lang conjecture was known for endomorphisms of ^2 over . We generalize this result to all endomorphisms of ^2 over . We generalize the weak DML theorem to a uniform version and to a version for partial orbit. Using this, we give a new proof of the Kawaguchi-Silverman-Matsuzawa's upper bound for arithmetic degree. We indeed prove a uniform version which works in both number field and function field case in any characteristic and it works for partial orbits. We also reformulate the “p-adic method", in particular the p-adic interpolation lemma in language of Berkovich space and get more general statements. The third aim is to propose some further questions. § INTRODUCTION The dynamical Mordell-Lang conjecture describes the return time set for an algebraic dynamical system. It was proposed by Ghioca and Tucker in <cit.>, under the influence of Shou-Wu Zhang, see <cit.>. Its original version only concerns endomorphisms. We present here a slightly more general form for rational self-maps. Let be an algebraically closed field. Let X be a quasi-pro­jective variety over . Let f: X X be a dominant rational self-map. Let V be a Zariski closed subset of X. Denote by I(f) the inderterminacy locus of f. Let X_f() be the set of points in X() whose f-orbit O_f(x):={n≥ 0| f^n(x)} is well-defined i.e. for every n≥ 0, f^n(x)∉I(f). [=DML] If =0, then for every x∈ X_f() the set {n∈ℕ| f^n(x)∈ V()} is a union of at most finitely many arithmetic progressions[An arithmetic progression is a set of the form {an+b| n∈ℕ} with a,b∈ℕ. In particular, when a=0, it contains only one point.]. The assumption that is algebraically closed is just for the convenience. In general, easy to see that DML holds over implies the same statement over . Moreover, we may further assume that =. Indeed, DML over implies DML over any field of characteristic 0. In practice, the following geometric form of DML is often easier to handle and apply since it explains the mechanism to make the return time set infinite. [=geometric form of DML] Assume =0 and V is irreducible of dimension at least 1. For x∈ X_f() if O_f(x)∩ V is Zariski dense in V, then V is periodic i.e. the orbit of η_V is well-defined and f^m(V)⊆ V for some m≥ 1. Here η_V is the generic point of V and f^m(V):=f^m(η_V). Not hard to see that DML and its geometric form are equivalent. The DML conjecture fits the general principle of “unlikely intersection problem" i.e. an irreducible subvariety containing a Zariski dense subset of “special" points should be “special" <cit.>. In the setting of DML, a point is “special" if it is contained in the orbit of x and an irreducible subvariety is “special" if it is periodic. We say that DML holds for (X,f) if the statement of DML conjecture holds for (X,f). Further, for every i=0,…, X, we say that DML(i) holds for (X,f) if the statement of Conjecture <ref> holds for every irreducible subvariety of dimension ≤ i. It is clear that DML(0) holds and DML=DML(X-1)=DML(X) for (X,f). 
§.§ The plan of the note The first aim of this note is to report some advances around the DML conjecture; the second is to generalize some known results and the third aim is to propose some further questions. The note is organized as follows: In Section <ref>, we discuss the two origins of DML: Diophantine geometry and recurrence sequences. In Section <ref>, we recall some known results of the DML conjecture. Moreover, we show that how to modify the proof of <cit.>, to generalize it to all endomorphisms _^2. The DML conjecture holds when f is an endomorphism of ^2_. In Section <ref>, we recall some advances of DML in positive characteristic. In Section <ref>, we discuss the Weak DML problem which weaken the DML conjecture in time. We first recall the Weak DML theorem of Bell, Ghioca and Tucker <cit.>. Then we generalize this result to a uniform version. Let X be a variety over a field . Let f: X X be a rational self-map. Let V be a proper closed subset of X. For every ϵ>0, there is N≥ 1 such that for every x∈ X_f() with Zariski dense orbit and every interval I in _≥ 0 with |I|≥ N, we have #({n≥ 0| f^n(x)∈ V}∩ I)/#I<ϵ. We also prove a version of Weak DML for partial orbit (c.f. Theorem <ref>). In Section <ref>, we apply the Weak DML (for partial orbit) to prove a uniform version of the Kawaguchi-Silverman-Matsuzawa's upper bound. Let be either or the algebraic closure of a function field K(B) for a smooth projective curve B over an algebraically closed field K. Let f: X X be a dominant rational map defined over . Let h be any Weil height on X associated to some ample line bundle. We denote by h^+:=max{h,1}. Then for any ϵ>0, there exists C>0 such that h^+(f^n(x))≤ C(_1(f)+ϵ)^nh^+(x) for all n≥ 0 and x∈ X_f(, n). In particular, for any x∈ X_f(), we have α_f(x)≤_1(f), where α_f(x) stands for the upper arithmetic degree. In Section <ref>, we discuss the almost DML, which weaken the DML in space. In Section <ref>, we reformulate the “p-adic method", in particular the p-adic interpolation lemma using the language of Berkovich spaces. We also generalize the results in more general setting. In Section <ref>, we list some questions relate to the DML conjecture. §.§ Acknowledgement The author would like to thank Yohsuke Matsuzawa who asked the question whether the KSM's upper bound of arithmetic degree on singular varieties can be promoted to a uniform version. This motivates the current form of Theorem <ref>. The author would like to thank Ariyan Javanpeykar for his help on Siegel's theorem over finitely generated domain. The author would like to thank Thomas Tucker and She Yang for their helpful comments on the first version of this work. The author would like to thank the Simons foundation and the organizers, Laura DeMarco and Mattias Jonsson, for inverting me to the Simons symposium “Algebraic, Complex and arithmetic Dynamics (2019)" and for providing the chance to write this note. § ORIGINS OF DML §.§ Diophantine geometry A motivation of the DML conjecture is the Mordell-Lang conjecture on subvarieties of semiabelian varieties which is now a theorem of Faltings <cit.> and Vojta <cit.>. Let V be a subvariety of a semiabelian variety G over and let Γ be a finitely generated subgroup of G(). Then V()⋂Γ is a union of at most finitely many translates of subgroups of Γ. In fact, according to the well-known dictionary between arithmetic dynamics and Diophantine geometry, the DML conjecture is an analogy of the classical Mordell-Lang conjecture, see <cit.> and <cit.>. 
Easy to see that the DML conjecture for translations on semiabelian varieties implies the classical Mordell-Lang conjecture in the case when Γ has rank ≤ 1. It will be interesting to have a nice formulation of a higher rank analogy of the DML conjecture. One note that the naive formulation should not work as shown in the following example: Let f,g be automorphisms of ^2_ defined by f:(x,y)↦ (x+1,y) and g:(x,y)↦ (x,y+1). One note that f and g commute to each other. Let V be the curve {y=x^2} and p:=(0,0). Then the set {(m,n)∈_≥ 0^2| f^m∘ g^n(p)∈ V} is infinite, but contains no infinite translation of subsemi-group of _≥ 0^2. The classical Mordell-Lang conjecture has the same phenomenon for additional groups. It does not hold when Γ≥ 2, but holds when Γ≤ 1. §.§ DML and recurrence sequences Another origin of the DML conjecture is the Skolem-Mahler-Lech Theorem <cit.> on linear recurrence sequences. Let {A_n}_n≥ 0 be any recurrence sequence satisfying A_n+l=∑_i=0^l-1a_iA_n+i for all n≥ 0, where l≥ 1. Then the set {n≥ 0| A_n=0} is a union of at most finitely many arithmetic progressions. This statement is equivalent to the DML conjecture for the linear map f:(x_0,…,x_l-1)↦(x_1,…,x_l-1,∑_i=0^l-1a_ix_i) and for the hyperplane V={x_0=0}. There are two natural ways to generalize the Skolem-Mahler-Lech Theorem as follows, both of them are subsequences of the DML conjecture. First, if we allow the recurrence relation in Theorem <ref> to be any polynomial, we get a non-linear generalization of the Skolem-Mahler-Lech Theorem. [=non-linear SML] Let {A_n}_n≥ 0 be any recurrence sequence satisfying A_n+l=F(A_n,…, A_n+l-1) for all n≥ 0, where l≥ 1 and F∈[x_0,…, x_l-1]. Then the set {n≥ 0| A_n=0} is a union of at most finitely many arithmetic progressions. Conjecture <ref> is implied by the DML conjecture for the polynomial map f:(x_0,…,x_l-1)↦(x_1,…,x_l-1,F(x_0,…,x_l)) where l≥ 1, F∈[x_0,…, x_l-1] and for the hyperplane V={x_0=0}. By <cit.>, the non-linear SML holds when l= 2 and F is defined over . We will see later in Theorem <ref>, with a few modification of the proof of <cit.>, it can be generalized to any endomorphism of _^2. Hence the non-linear SML holds when l= 2. Another way to generalize Theorem <ref> is allowing the coefficients a_i to be non-constant. If we ask a_i to be generated by iterating of a rational function, we get the following conjecture. [=SML with non-constant coefficients] Let l≥ 1 and let g, a_i∈(x), i=0,…, l-1 be rational functions. Let α∈ be a point such that a_i(g^n(α))≠∞ for every i=0,…, l-1, n≥ 0. Let {A_n}_n≥ 0 be any recurrence sequence satisfying A_n+l=∑_i=0^l-1a_i(g^n(α))A_n+i for all n≥ 0. Then the set {n≥ 0| A_n=0} is a union of at most finitely many arithmetic progressions. In the case where g(x)=x+1, such a sequence A_i is called a p-recursive sequence. They are exactly the coefficients of D-finite formal power series. Such sequences play a very important role in symbolic computation <cit.>. In this case, Conjecture <ref> becomes Rubel's conjecture <cit.>. As shown in <cit.>, Conjecture <ref> is implied by the DML conjecture for certain skew linear map f: ^1×^l^1×^l. By <cit.>, Conjecture <ref> holds when g≥ 2. When g=1, see <cit.> for a weaker result. Conjecture <ref> strongly relates to the Picard-Vessiot problem in difference Galois theory. Keep the notations in Conjecture <ref>. Then g defines a difference field ((x),σ) where sigma is the endomomorphism of (x) defined by σ(h):=g^*h for any h∈(x). 
[Picard-Vessiot problem] Does there exist a Picard-Vessiot extension of (x) for the linear difference equation σ^l(y)-∑_i=0^l-1a_iσ^i(y)=0. inside the ring of -valued sequences? For more background on difference equations and the aforementioned two problems, we refer the reader to <cit.> and <cit.>. It is shown in <cit.> that Question <ref> has positive answer when g≥ 2. § KNOWN RESULTS AND IMPROVEMENTS The DML conjecture is only known in a few special cases. Here are some notable results: The DML conjecture is true in the following cases: * (Bell, Ghioca and Tucker, <cit.>) The map f is an étale endomorphism. * (Xie, <cit.>) The map f is an endomorphism of ^2 defined over . * (Ghioca and Xie, <cit.>) The map f is a skew-linear map on ^1×^l taking form (x,y)↦ (g(x), A(x)y) where g is an endomorphism of ^1 of degree ≥ 2 and A(x) is a matrix in M_l× l(k(x)). The proof of (i) and (iii) relies on the p-adic interpolation lemma (<cit.> and <cit.>). Such p-adic interpolation method for studying DML backs to the proof of Skolem-Mahler-Lech Theorem <cit.> and it was mainly developed by Bell, Ghioca and Tucker in a series of papers. In Section <ref>, we reformulate and generalize this method in the language of Berkovich space. The proof of (ii) is based on the theory of valuation tree introduced by Favre and Jonsson in <cit.> and developed in <cit.>. It also relies on Siegel's integral point theorem <cit.> and the strategy developed in <cit.>. More known results can be found in <cit.>. Moreover, a very good introduction of the DML conjecture can be found in the monograph <cit.> of Bell, Ghioca and Tucker. Argument in <cit.> shows that (iii) of Theorem <ref> implies Conjecture <ref> (=SML with non-constant coefficients) when g≥ 2. With a few modifications of the proof in <cit.>, one can generalize (ii) of Theorem <ref> for any endomorphism of _^2. The DML conjecture holds when f is an endomorphism of ^2_. Theorem <ref> implies Conjecture <ref> (= non-linear SML) when l≤ 2. We will show how to modify the proof in <cit.> to get Theorem <ref> in Section <ref>. Using Theorem <ref>, we may generate <cit.> to any endomorphisms over . DML(1) holds for f: ^N→^N over taking form (x_1,…, x_N)↦ (F_1(x_1), …, F_N(x_N)). where F_i are polynomials. We can also get Corollary <ref> by <cit.> and a specialization argument as in <cit.>. §.§ DML for endomorphisms of ^2_ [Proof of Theorem <ref>] In the proof of <cit.>(=(ii) of Theorem <ref>), the assumption that f is defined over is only used for two purposes: (1)The first one is to apply Siegel's integral points theorem <cit.>, which asserts that for a curve C defined over a number field K whose normalization has at least 3 boundary points, for every finite set of places S of _K containing of archimedean places, the set S-integral points (for any models of C over S) is finite. (2) The second one is the Northcott property for Weil heights of points. In the over case, we can choose a subring R of which is finitely generated over such that the setups in Theorem <ref> are defined over R. Replace the above (1) and (2) by their versions for finitely generated fields, we may conclude the proof. For (2), we replace Weil's height machine over number field to the height machine for the Moriwaki height over finitely generated fields <cit.>. The Moriwaki height satisfies all the properties we need in the proof, in particular, it has the Northcott property. For (1), we replace Siegel's integral points theorem over number field to its version for R-points <cit.>. 
The statement in <cit.> is not exactly the one we need. However, it is not hard to deduce to the version we need as follows: In <cit.>, the curve is assumed to be smooth and we need to treat singular curves. This assumption is easy to get by taking normalization. In <cit.>, the curve is asked to have genus at least 1. In our case, the curve C may be ^1 with at least three removed points. In this case, we may take a suitable étale cover C' of C satisfying the assumption in <cit.>. By the version of Hermite-Minkowski theorem over finitely generated ring <cit.>, Siegel's integral points theorem for C' implies Siegel's integral points theorem for C. § DML IN POSITIVE CHARACTERISTIC As shown in the following example, the DML conjecture does not hold in positive characteristic. <cit.> Let p be a prime and =_p(t). Let f: ^2→^2 be the endomorphism defined by (x,y)↦ (tx, (1-t)y). Set V:={x+y=1} and e=(1,1). Then {n≥ 0| f^n(e)∈ V}={p^n| n≥ 0}. More examples can be found in <cit.>. So we have the following two natural questions when >0: (a) What is the form of the set {n≥ 0| f^n(x)∈ V}? (b) DML holds for which (X,f)? As an answer of (a), Ghioca and Scanlon proposed a variant of the Dynamical Mordell-Lang conjecture in positive characteristic (=p-DML) in <cit.>, which asserts that up to a finite set, {n≥ 0| f^n(x)∈ V} is a finite union of arithmetic progressions along with finitely many sets taking form {∑_i=1^mc_ip^k_in_i| n_i∈_≥ 0, i=1,…,m}, where m∈_≥ 1, k_i∈_≥ 0, c_i∈. Surprisingly, the p-DML is even open for endomorphisms of semi-abelian varieties. Indeed in <cit.>, Corvaja, Ghioca, Scanlon and Zannier showed that p-DML for endomorphisms of semi-abelian varieties is equivalent with some difficult diophantine problem on polynomial-exponential equations in characteristic 0. Partial results can be found in <cit.>. These works essential rely on Hrushovski's resolution of the Mordell-Lang conjecture in positive characteristic <cit.> and the further description of the intersection Γ∩ V in <cit.>. For (b), the author in <cit.> (and also Ghioca and Scanlon) guessed that DML holds for most dynamical systems and the counter-examples often involve some group actions. Outside the case of endomorphisms of semi-abelian varieties which was studied in <cit.>, only a few results are known: DML holds in the following cases: * (Xie, <cit.>) The map f: ^2→^2 is a birational endomorphism with _1(f)>1. * (Xie, <cit.>) The map f: X→ X is an automorphism of a projective surface X with _1(f)>1. * (Yang, <cit.>) Let be a complete non-archimedian valuation field with =p>0. Let ^∘ be its valuation ring and ^∘∘ be the maximal ideal. Assume that f:^N_→^N_ is totally inseparable and it is a lift of Frobenius i.e. it takes the following form: f: [x_0:… :x_N]↦ [x_0^q+g_0(x_0^p,…,x_N^p):… :x_N^q+g_N(x_0^p,…,x_N^p)] where q is a power of p and g_i are homogenous polynomials of degree q/p in ^∘∘[x_0,…,x_N]. In Theorem <ref>, _1(f) stands for the first dynamical degree of f <cit.> (c.f. Section <ref>). The proofs of (i) and (ii) indeed work in any characteristic. The situation of (iii) only appears in positive characteristic case. §.§.§ Idea to prove (i): For the simplicity, assume that =. Let x∈^2() be a non-periodic point and V be an irreducible curve in ^2. First we find a “good" compactification X of ^2 w.r.t f in the sense of <cit.>. We view f as a birational self-map on X. After replacing f by a suitable iterate, there is a superattracting fixed point q∈∂ X:=X∖^2 such that f maps ∂ X∖ I(f) to q. 
By some geometric arguments, we reduce to the case where q∈V. Next, we prove a local version of DML near q for the topology on X() induced by any valuation on k. In the end, we conclude the proof by the Norhcott property. §.§.§ Idea to prove (ii): For the simplicity, assume that =. Let x∈ X be a non-periodic point and V be an irreducible curve in X. Using an argument backs to <cit.>, we construct two canonical height functions h^+ and h^- associated to two nef numerical classes θ^+,θ^-∈ N^1(X)_ such that f^*h^+=_1(f)h^+ and f^*h^-=_1(f)^-1h^-. By Hodge index theorem, θ^++θ^- is big and nef. Easy to show that if V· (θ^++θ^-)=0, then V is f-periodic. As an application of the Hodge index theorem, one can show that there exists σ∈(/) such that σ(θ^+)=θ^-. So V· (θ^++θ^-)>0 implies that V·θ^->0. Considering h^-(f^n(x)) for those f^n(x)∈ V, we conclude the proof by the Northcott property. §.§.§ Idea to prove (iii): For the simplicty, we assume that is a local field. We extend f to an endomorphism on the model ^N_^∘. Let x be a point in ^N() and V be an irreducible subvariety of dimension ≥ 1. Assume that O_f(x)∩ V is Zariski dense in V. Observe that in this case, f is super-attracting at each -point. Our idea is to mimic the process of constructing stable/unstable manifold in hyperbolic dynamics. More precisely, we show that at each closed point x in the special fiber, among all of its lifts x∈^N() there are some special ones which looks like the “unstable" manifolds through x. The unstable lifts of x form a Cantor set. To translate this idea from hyperoblic dynamics to our purely algebraic setting, one need to apply the techniques of the jet scheme and the critical scheme introduced in <cit.>. In this step, it is crucial to use the assumption that f is totally inseparable. Our assumption that O_f(x)∩ V is Zariski dense in V implies that for every closed point x in the special fiber of V, it has an unstable lift x in V(). Pick a lift of q-Frobenius σ in (/). The assumption that f is a lift of Frobenius and the construction of the unstable lift x shows that σ(x)=f(x). It implies that f(V)=σ(V). This implies that V is periodic and concludes the proof. The idea of last step backs to Scanlon's proof of the dynamical Manin-Mumford conjecture for periodic points of lifts of Frobenius <cit.>. § WEAK DML Let X be a variety over a field . Let f: X X be a rational self-map. In <cit.>, Bell, Ghioca and Tucker proved the Weak DML theorem, which assert that if the orbit of a point x∈ X() is Zariski dense, then the return time set {n≥ 0| f^n(x)∈ V} has zero density. This version weakened the original DML in time. Let x be a point in X_f() with O_f(x)=X. Let V be a proper subvariety of X. Then {n≥ 0| f^n(x)∈ V} is of Banach density zero in _≥ 0; i.e. for every sequence of interval I_n, n≥ 0 in _≥ 0 with lim_n→∞# I_n=+∞, we have lim_n→∞#({n≥ 0| f^n(x)∈ V}∩ I_n)/#I_n=0. This result was proved in earlier works in <cit.>, <cit.> in a different form for natural density. See <cit.>, <cit.> and <cit.> for different proofs. Here we state and prove a uniform version of Weak DML. Let X be a variety over a field . Let f: X X be a rational self-map. Let V be a proper closed subset of X. For every ϵ>0, there is N≥ 1 such that for every x∈ X_f() with Zariski dense orbit and every intervals I in _≥ 0 with |I|≥ N, we have #({n≥ 0| f^n(x)∈ V}∩ I)/#I<ϵ. In this version, the sparsity of the return time set is uniform for the point x. It seems that even the DML conjecture does not imply this Uniform Weak DML directly. 
As suggested in <cit.>, a natural way to study the (Uniform) Weak DML problem is to use the ergodic theory for constructible topology. We also get a generalization of Theorem <ref> (c.f. Theorem <ref>) for partial orbits, in which we don't need the whole orbit of x to be well defined. The idea of using ergodic theory for Zariski topology backs to the works of Favre and Gignac <cit.>. In <cit.>, the author studies ergodic theory for constructible topology instead of Zariski topology. This idea also played an important role in <cit.> for studying the entropy of dynamics of Berkovich spaces. The proof of Theorem <ref> is ineffective. It is natural to ask the following question: Is there an explicit way to compute N in Theorem <ref> using X,V and ϵ? §.§ Constructible topology Let X be a variety over a field . Denote by |X| the underling set of X with the constructible topology; i.e. the topology on a X generated by the constructible subsets (see <cit.>). In particular every constructible subset is open and closed. This topology is finer than the Zariski topology on X. Moreover |X| is (Hausdorff) compact. Using the Zariski topology, on may define a partial ordering on |X| by x≥ y if and only if y∈x. The noetherianity of X implies that this partial ordering satisfies the descending chain condition: for every chain in |X|, x_1≥ x_2≥… there is N≥ 1 such that x_n=x_N for every n≥ N. For every x∈ |X|, the Zariski closure of x in X is U_x:={x}={y∈ |X|| y≤ x} which is open and closed in |X|. Denote by (|X|) the space of Radon measures on X endowed with the weak-∗ topology. The following is <cit.>. Every μ∈(|X|) takes form μ=∑_i≥ 0a_iδ_x_i where δ_x_i is the Dirac measure at x_i∈ X, a_i≥ 0. Let ^1(|X|) be the space of probability Radon measures on |X|. Since |X| is compact, ^1(|X|) is compact. It is also sequentially compact as showed in the following result. <cit.>. The space ^1(|X|) is sequentially compact. §.§ Proof of the Uniform Weak DML Let f: X X be a dominant rational self-map. Set |X_f|:=∩_n≥ 0f|_X∖ I(f)^-n(X∖ I(f)). Because every Zariski closed subset of X is open and closed in the constructible topology, |X_f| is a closed subset of |X|. The restriction of f to |X_f| is continuous. We still denote by f this restriction. Denote by (X,f) the set of f-periodic points in |X_f|. Theorem <ref> implies directly the following lemma. <cit.> If μ∈^1(|X_f|) with f_*μ=μ, then there are x_i∈(X,f), i≥ 0 and a_i≥ 0, i≥ 0 with ∑_i=0a_i=1 such that μ=∑_i≥ 0a_i/#O_f(x_i)(∑_y∈ O_f(x_i)δ_y) [Proof of Theorem <ref>]If Theorem <ref> is not correct, there is ϵ>0 and a sequence of points x_m∈ X_f() with Zariski dense orbit and intervals I_m:={a_m,…, b_m} in _≥ 0 with |I_m|→∞, such that #({n≥ 0| f^n(x_m)∈ V}∩ I_m)/#I_m≥ϵ. Set μ_n:=1/I_m∑_n∈ I_mδ_f^n(x_m). Let η_X be the generic point of X. Set Y:=|X_f|∖ (∪_y∈(X,f)∖{η_X}∪_n≥ 0f^-n(U_y)), which is closed in |X_f|. We have μ_n⊆ Y. After taking subsequence, we may assume that μ_m converges to a probability measure μ∈^1(|X_f|) in the weak-∗ topology. Observe that f_*μ_m-μ_m=1/I_m(δ_f^b_m+1(x_m)-δ_f^a_m(x_m))→ 0 as m→∞. So f_*μ=μ. Since μ⊆ Y and η_X=(X,f)∩ Y, by Lemma <ref>, μ=δ_η_X. Note that the character function 1_V of V is continuous on |X|. Then we have ϵ≤#({n≥ 0| f^n(x_m)∈ V}∩ I_m)/#I_m=∫ 1_U_Vμ_m→∫ 1_U_Vδ_η_X=0. We get a contradiction. §.§ Partial orbits Define |X|^+ to be the disjoint union of |X| with a point ϕ. We extend the partial ordering on |X| to |X|^+ by setting x≥ϕ for every x∈ |X|^+. For every x∈ |X|^+ define U_x^+:={y≤ x}. When x∈ X, U_x^+=U_x∪{ϕ}. 
It is clear that (|X|^+)=(X)⊕_>0δ_ϕ. By Theorem <ref> and Corollary <ref>, (|X|^+)^1 is sequentially compact and every Radon measure on |X|^+ is atomic. Let f^+: |X|^+→ |X|^+ be the endomorphism such that f^+|_I(f) is the map I(f)→{ϕ} and f^+|_X∖ I(f)=f|_X∖ I(f). Since I(f) is open and closed in |X|, f^+ is continuous. Moreover, for f^+ preserves the ordering. Define I(f)^+:=f^-1(ϕ) and I(f):=I(f)^+∖{ϕ}. For every n≥ 0, define X_f(n):=∩_i=0^nf^-i(|X|). Then X_f(n) is a constructible subset of X. It is clear that I(g^+)=I(g). For every x∈ X_f(n), n≥ 0, define μ_n(x):=1/n+1∑_i=0^nδ_f^i(x). For for every y∈(X,f), let r_y be the minimal period of f. Define ν_y:=r_y^-1∑_i=0^r_yδ_f^i(y). Let _d(X,f) be the subset of (X) taking form ∑_0=1^la_iν_y_i with a_i>0, y_i,i=1,…, l is a strictly decreasing sequence of points in (X,f). Indeed, we have l≤ X. Set _d^1(X,f):=_d(X,f)∩^1(X,f). As shown in the proof of <cit.>, Theorem <ref> has a measure theoretic reformulation, which assert that, for any point in x∈ X_f() with O_f(x)=X and any sequence of intervals I_n, n≥ 0 in _≥ 0 with lim_n→∞# I_n=+∞, we have |I_n|^-1∑_i∈ I_nδ_f^i(x)→δ_η_X in weak-∗ topology. The following result generalize Theorem <ref>, for which we don't need to take the whole orbit. Let N_m∈_≥ 0, m≥ 0 be a sequence tending to +∞ and x_m∈ X_f(N_m-1). Assume that μ_x_m,N_m→μ as m→∞ in the weak-∗ topology. Then μ_x_m,N_m∈_d^1(X,f) [Proof of Theorem <ref>] As x_m∈ X_f(N_m-1) for every m≥ 0, we have μ⊆^1(|X|). Observe that f^+_*μ_x_m,N_m-μ_x_m,N_m=1/N_m+1(δ_(f^+)^N_m+1(x_m)-δ_x_m)→ 0 as m→∞. So f^+_*μ=μ. By Theorem <ref>, we may write μ=∑_i≥ 1a_iδ_x_i where x_i∈ X are distinct, a_i>0 and ∑_i≥ 1 a_i=1. Since μ is f^+-invariant, we may write μ=∑_i=1^l b_iν_y_i where l∈_≥ 1∪{+∞}, b_i>0, ∑_i≥ 1b_i=1 and y_i are distinct points in (X,f). Define Y_i:=∪_j=0^r_y_i-1U_f^j(y_i)^+. We have f^+(Y_i)⊆ Y_i. We first treat the case where b_1,b_2>0 and Y_1⊈Y_2 and Y_2⊈Y_1. Set U_1:=Y_1∖ Y_2 and U_2:=Y_2∖ Y_1. Note that U_i, i=1,2 are open and closed in |X|. For i=1,2, we have ∫ 1_U_iμ_m→∫ 1_U_iμ≥ b_i. So for m≫ 0, we have ∫ 1_U_iμ_m≥ b_i/2. So there is a minimal s_m∈{0,…, N_m-1} such that f^s_m(x_m)∈ U_1∪ U_2. Since U_1∩ U_2=∅, there is a unique t_m∈{1,2} such that f^s_m(x_m)∈ U_t_m. We claim that f^i(x_m)∉Y_2∪ Y_1 for i=0,…, s_m-1. Otherwise, f^i(x_m)⊆ (Y_2∪ Y_1)∖ (U_1∪ U_2)=Y_1∩ Y_2. Hence f^s_m(x_m)⊆ f^s_m-i(f^i(x_m))⊆ Y_1∩ Y_2, which is a contradiction. So the claim holds. For every i≥ s_m, f^i(x_m)=f^i-s_m(f^s_m(x_m))∈ Y_t_m. So f^i(x_m)∉U_3-t_m for i≥ s_m. This implies that ∫ 1_U_t_mμ_m=0, which is a contradiction. Hence for every distinct i,j∈{1,…, l} either Y_i⊆ Y_j or Y_j⊆ Y_i. By the Noetherianity, we may assume that Y_l⊊…⊊ Y_1. After replacing each y_i by a suitable element in its orbit, we may ask y_1>… >y_l. This concludes the proof. §.§ An example The limiting measure in Theorem <ref> is much more complicate than the one for a whole orbit c.f. Theorem <ref>. In this section, we give an example to show that these more complicated measure are necessary. Let X:=(_^1)^l. Let g_i: ^1→^1, i=1,…, l endomorphisms of degree d_i≥ 2. Assume that d_1<…<d_l. Let f: X→ X be the map f=g_1×…× g_l. For every i=1,…,l consider a sequence of points o_i(n), n≥ 0 with g_i(o_i(0))=o_i(0) and g_i(o_i(n))=o_i(n-1) for n≥ 1. Assume that o_i(1)≠ o_i(0) for every i. Let b_0,…,b_l≥ 0 with ∑_i=0^l b_i=1. Let y_i be the generic point of ^l-i×{o_l-i+1}×…×{o_l} (=(^1)^l when i=0). 
We will construct a sequence of points x_m, m≥ 1 and N_m≥ 1 such that μ(x_m, N_m)→∑_i=0^lb_iδ_y_i in weak-∗ topology. Define n_i(m):=⌈ m × b_i⌉. Note that in ^l(), [n_0(m):…: n_l(m)] tends to [b_0:…: b_l] in the euclidien topology. Define x_m:=(o_0(∑_i=0^l-1n_i(m)), o_1(∑_i=0^l-2n_i(m)),…, o_l-1(n_0(m)))∈ X=(_^1)^l and N_m:=∑_i=0^ln_i(m). We will show that μ(x_m, N_m)→∑_i=0^lb_iδ_y_i. By Corollary <ref>, we only need to treat the case where μ(x_m, N_m) converges to a measure μ in ^1(X). We do the proof by induction on l. First assume that l=1. Since ∫ 1_o_0μ(x_m, N_m)= n_0(m)/N_m→ b_0, by Theorem <ref>, μ(x_m, N_m)→ b_0δ_y_1+b_1δ_y_0. Now assume that l≥ 2 and (<ref>) holds for l-1. Define Y:=(^1)^l-1 and h: Y→ Y by g_2×…× g_l. Then X=^1× Y and f=g_1× h. Let π_1, π_Y be the projection from X to ^1 and to Y respectively. Let y_i', i=0,…,l be the generic point of ^l-i-1×{o_l-i}×…×{o_l} in Y. Let z_0,z_1 be the generic points of ^1 and o_1(0) respectively. The induction hypothesis implies that (π_1)_*μ=(b_0+…+b_l-1)δ_z_0 +b_lδ_z_1 and (π_Y)_*μ=b_0δ_y_0'+… +(b_l-1+b_l)δ_y_l-1'. By <cit.>, μ takes form μ=∑_i=0^l c_iδ_z_s(i)× y_t(i)' where c_i≥ 0, ∑_i=0^l c_i=1 and s, t are increasing functions from {0,…,l} to {0,1} and to {0,…,l-1} respectively satisfying max{s(i)-s(i-1),t(i)-t(i-1)}≥ 1 for every i=1,…,l. This follows that (s(l), t(l))=(1,l-1) and s^-1(1)∩ t^-1(l-1)={l}. By (<ref>) and (<ref>), we have c_l=b_l. By (<ref>), we get s(i)=0 for i∈{0,…, l-1} with b_i≠ 0. Then we concludes the proof by (<ref>). § KSM'S UPPER BOUND FOR ARITHMETIC DEGREE A notable application of weak DML is <cit.>, in which Jia, Shibata, Zhang and the author generalized Kawaguchi-Silverman-Matsuzawa's upper bound=(KSM's upper bound) <cit.> on arithmetic degree to singular case. Recently, Song <cit.> gave a different and more conceptual proof of the same upper bound using Arakelov geometry. Indeed Song's result is more general, which applies for higher dimensional arithmetic degree. In Matsuzawa's original work <cit.>, he indeed proved a uniform version of the upper bound. In this section we use Theorem <ref>(=Weak DML for partial orbit) to give a new proof of this the Uniform KSM's upper bound. Our result is indeed more general, it works on singular varieties for both number field and function field in any characteristic, moreover we don't need the whole orbit of x is well-defined. Let be either or the algebraically closure of a function field K(B) for a smooth projective curve B over an algebraically closed field K. Let f: X X be a dominant rational map defined over . Let h be any Weil height on X associated to some ample line bundle. We denote by h^+:=max{h,1}. Then for any ϵ>0, there exists C>0 such that h^+(f^n(x))≤ C(_1(f)+ϵ)^nh^+(x) for all n≥ 0 and x∈ X_f(, n). In particular, for any x∈ X_f(), we have α_f(x)≤_1(f), where α_f(x) stands for the upper arithmetic degree. Our proof here is based on the proof of the (non-uniform) KSM's upper bound by the author given in the course “Topics in algberaic geometry: Arithmetic dynamics" in Peking university in 2022. The current uniform version of Theorem <ref> is inspirited by Matsuzawa's question, who asked whether the KSM's upper bound of arithmetic degree can be maked to uniform for singular varieties. To prove Theorem <ref>, we first recall the definition and basic properties of dynamical degree and arithmetic degree. §.§.§ The dynamical degrees Let X be a variety over a field and f: X X a dominant rational self-map. 
Let X' be a normal projective variety which is birational to X. Let L be an ample (or just nef and big) divisor on X'. Denote by f' the rational self-map of X' induced by f. For i=0,1,…, X, and n≥ 0, let (f'^n)^*(L^i) be the ( X-i)-cycle on X' as follows: let Γ be a normal projective variety with a birational morphism π_1Γ→ X' and a morphism π_2Γ→ X' such that f'^n=π_2∘π_1^-1. Then (f'^n)^*(L^i):= (π_1)_*π_2^*(L^i). The definition of (f'^n)^*(L^i) does not depend on the choice of Γ, π_1 and π_2. The i-th dynamical degree of f is _i(f):=lim_n→∞((f'^n)^*(L^i)· L^ X-i)^1/n. The limit converges and does not depend on the choice of X' and L <cit.>. Moreover, if π: X Y is a generically finite and dominant rational map between varieties and g Y Y is a rational self-map such that g∘π=π∘ f, then _i(f)=_i(g) for all i. This can be shown by combing <cit.> with the projection formula. Another way is to proof it is to apply the product formula for relative dynamical degrees (c.f. <cit.> and <cit.>) directly. Recall the following basic property. <cit.> Let X be a variety over and f X X a dominant rational self-map. Let Z be an irreducible subvariety in X which is not contained in I(f) such that f|_Z induces a dominant rational self-map of Z. Then _i(f|_Z)≤_i(f) for i=0,1,…, Z. §.§.§ Arithmetic degree The arithmetic degree was defined in <cit.> over a number field or a function field of characteristic zero. As said in <cit.>, this definition can be extended to characteristic positive. Here we follows the way <cit.> the define the arithmetic degree in the general case. Let =K(B), where K is an algebraically closed field and B is a smooth projective curve. Let X be a normal and projective variety over . For every L∈(X), we denote by h_L: X()→ a Weil height associated to L and the function field K(B). It is unique up to adding a bounded function. As in <cit.>, we define an admissible triple to be (X,f,x) where X is a quasi-projective variety over , f X X is a dominant rational self-map and x∈ X_f(). We say that (X,f,x) dominates (resp. generically finitely dominates) (Y,g,y) if there is a dominant rational map (resp. generically finite and dominant rational map) π X Y such π∘ f=g∘π, π is well defined along O_f(x) and π(x)=y. We say that (X,f,x) is birational to (Y,g,y) if there is a birational map π X Y such π∘ f=g∘π and if there is a Zariski dense open subset V of Y containing O_g(y) such that π|_U: U:=π^-1(V)→ V is a well-defined isomorphism and π(x)=y. In particular, if (X,f,x) is birational to (Y,g,y), then (X,f,x) generically finitely dominates (Y,g,y). * If (X,f,x) dominates (Y,g,y) and if O_f(x) is Zariski dense in X, then O_g(y) is Zariski dense in Y. Moreover, if (X,f,x) generically finitely dominates (Y,g,y), then O_f(x) is Zariski dense in X if and only if O_g(y) is Zariski dense in Y. * Every admissible triple (X,f,x) is birational to an admissible triple (X',f',x') where X' is projective. Indeed, we may pick X' to be any projective compactification of X, f' the self-map of X' induced from f, and x'=x. As in <cit.>, we will associate to an admissible triple (X,f,x) a subset A_f(x)⊆ [1,∞]. Indeed we have A_f(x)⊆ [1,_1(f)] by Theorem <ref>. We first define it when X is projective. Let L be an ample divisor on X, we define A_f(x)⊆ [1,∞] to be the limit set of the sequence (h_L^+(f^n(x)))^1/n, n≥ 0, where h_L^+(·):=max{h_L(·),1}. The following lemma is <cit.>, whose proof is the same as <cit.>. 
It shows that the set A_f(x) does not depend on the choice of L and is invariant in the birational equivalence class of (X,f,x). <cit.> Let π X Y be a dominant rational map between projective varieties. Let U be a Zariski dense open subset of X such that π|_U U→ Y is well-defined. Let L be an ample divisor on X and M an ample divisor on Y. Then there are constants C≥ 1 and D>0 such that for every x∈ U, we have h_M(π(x))≤ Ch_L(x)+D. Moreover if V:=π(U) is open in Y and π|_U U→ V is an isomorphism, then there are constants C≥ 1 and D>0 such that for every x∈ U, we have C^-1h_L(x)-D≤ h_M(π(x))≤ Ch_L(x)+D. Now for every admissible triple (X,f,x), we define A_f(x) to be A_f'(x') where (X',f',x') is an admissible triple which is birational to (X,f,x) such that X' is projective. By Lemma <ref>, this definition does not depend on the choice of (X',f',x'). §.§.§ The arithmetic degree. We define (see also <cit.>): α_f(x):=sup A_f(x), α_f(x):=inf A_f(x). And call them upper/lower arithmetic degree. We say that α_f(x) is well-defined and call it the arithmetic degree of f at x, if α_f(x)=α_f(x); and, in this case, we set α_f(x):=α_f(x)=α_f(x). By Lemma <ref>, if (X,f,x) dominates (Y,g,y), then α_f(x)≥α_g(y) and α_f(x)≥α_g(y). Applying Inequality (<ref>) of Lemma <ref> to the case where Y=X and M=L, we get the following trivial upper bound: let f X X be a dominant rational self-map, L any ample line bundle on X and h_L a Weil height function associated to L; then there is a constant C≥ 1 such that for every x∈ X∖ I(f), we have h_L^+(f(x))≤ Ch_L^+(x). For a subset A⊆ [1,∞), define A^1/ℓ:= {a^1/ℓ| a∈ A}. We have the following simple properties, where the second half of <ref> used Inequality (<ref>). We have: * A_f(x)⊆ [1,∞). * A_f(x)=A_f(f^ℓ(x)), for any ℓ≥ 0. * A_f(x)=⋃_i=0^ℓ-1(A_f^ℓ(f^i(x)))^1/ℓ. In particular, α_f^ℓ(x)=α_f(x)^ℓ, α_f^ℓ(x)=α_f(x)^ℓ. [Proof of Theorem <ref>(=the Uniform KSM's upper bound)] Fix an ample line bundle A on X. Let h_A be a Weil height associated with A. Assume that h_A≥ 1. In this case, we have h^+=h_A. By Lemma <ref>, after taking normalization, we may assume that X is normal. We first need two height inequalities. The first one is by Lemma <ref>. There is C_1>0 such that for every X()∖ I(f), we have h_A(f(x))≤ C_1h_A(x). This is a weak estimation which holds on a large domain X()∖ I(f). We next prove a stronger inequality, which holds only on some Zariski open subset of X()∖ I(f). For every ψ>_1(f), there is N=N(ψ)≥ 1 and C_2=C_2(ψ)>0 and a proper closed subset B=B(ψ) of X such that for every n≥ 1 and x∈ X_f(Nn)() satisfying f^Ni(x)∉B for i=0,…, n-1, then we have h_A(f^Nn(x))≤ C_2ψ^Nnh(x). The proof of Lemma <ref> is a simple application of Siu's inequality, we postpone the proof to the end of this section. Combining Lemma <ref> and (<ref>), we get that for every ψ>_1(f), there is C_3=C_3(ψ)>0 and a proper closed subset B_1=B_1(ψ) of X such that for every n≥ 1 and x∈ X_f(n)() satisfying f^i(x)∉B_1 for i=0,…, n-1, then we have h_A(f^n(x))≤ C_3ψ^nh(x). We do the proof of Theorem <ref> by induction on the dimension of X. When X=0, nonthing to prove. Now assume that X≥ 1. By (<ref>), if Theorem <ref> holds for a positive iterate of f, then it holds for f. Assume that Theorem <ref> does not hold for our (X,f). Then there are ϵ>0, a sequence N_m≥ 1 satisfying N_m→∞, a sequence of points x_m∈ X_f(N_m-1)() such that h_A(f^N_m(x_m))≥ (_1(f)+ϵ)^N_mh_A(x_m). After taking subsequence, we may assume that μ(x_m,N_m)→μ for some μ∈^1(X). By theorem <ref>, μ∈_d^1(X). 
We may write μ=∑_i=0^la_iν_y_i with a_0≥ 0, a_i>0 for i≥ 1, y_1=η_X and y_i,i=1,…, l is a strictly decreasing sequence of points in (X,f) with l≤ X. Set U_1:=∪_i=0^r_y_1-1U_y_1 if l≥ 1, otherwise set U_1=∅. Set V:=X∖ U_1. For every m≥ 0, there is a largest s_m≥ 0 such that f^s_m(x_m)∈ V. Hence f^i(x_m)∈ V for i=0,…, s_m and f^i(x_m)∉V for i> s_m. Note that U_1 is an f-invariant Zariski closed subset of U_1< X. Pick ψ∈ (_1(f),_1(f)+ϵ). By Proposition <ref>, the dynamical degree of restriction of f^r_y_1 to every irreducible component of U_1 is ≤_1(f)^r_y_1. By the induction hypothesis for f^r_y_1 for every irreducible component of U_1 and (<ref>), there is C_3>0 such that h_A(f^N_m(x_m))≤ C_3ψ^N_m-s_m-1h_A(f^s_m+1(x_m)). Since ∫ 1_Vμ(x_m,N_m)→∫ 1_Vμ=a_0, we have (s_m+1)/(N_m+1)→ a_0 as m→∞. If a_0=0, by (<ref>), (<ref>) and (<ref>) we get (_1(f)+ϵ)^N_mh_A(x_m)≤ h_A(f^N_m(x_m))≤ C_3ψ^N_m-s_m-1C_1^s_m+1h_A(x_m). Then we have log(_1(f)+ϵ)≤N_m-s_m-1/N_mlogψ+1/N_mlog C_3+s_m+1/N_mlog C_1. Taking m→∞, we get a contradiction. Now assume that a_m>0, then s_m→∞. Set W_m:={i=0,…, s_m| f^i(x_m)∈ B_1} and w_m:=#W_m. We have w_m/s_m+1=w_m/N_m+1×N_m+1/s_m+1=∫1_B_1μ_m/∫1_Vμ→ 0/a_0=0. By (<ref>) and (<ref>), the argument in <cit.> implies that h_A(f^s_m(x_m))≤ C_3^w_m+1C_1^2w_mψ^s_m-w_mh_A(x_m). Then by (<ref>) and (<ref>), we get (_1(f)+ϵ)^N_mh_A(x_m)≤ h_A(f^N_m(x_m))≤ C_3ψ^N_m-s_m-1C_3^w_m+1C_1^2w_mψ^s_m-w_mh_A(x_m). Let m→∞, we get a contradiction by (<ref>). [Proof of Lemma <ref>] For every n≥ 1, denote by X_n the normalization of the graph of f^n: X X. Denote by π_n: X_n→ X the first projection and f_n: X_n→ X the second projection. We have f_n=f^n∘π_n. By Siu's inequality <cit.>, we have f_n^*A≤ d_X_A(f^n)/A^d_Xπ_n^*A where d_X= X and ≤ means that the different is pseudo-effective. Set Q:=(d_X+1)/A^d_X. Then there is a positive -divisor B_n such that Q_A(f^n)π_n^*A=f_n^*A+B_n. There is N≥ 0 such that Q_A(f^N)≤ψ^N. Hence for every x∈ X_f(N)()∖ B_N, we have ψ^Nh_A(x)+D≥ h_A(f^N(x)) for some D≥ 0. Then we have ψ^N(h_A(x)+D/(ψ^N-1))≥ h_A(f^N(x))+D/(ψ^N-1). Then for every n≥ 1 and x∈ X_f(Nn)() satisfying f^Ni(x)∉B for i=0,…, n-1, then we have h_A(f^Nn(x))≤ h_A(f^Nn(x))+D/(ψ^N-1) ≤ψ^Nn(h_A(x)+D/(ψ^N-1))≤ (D/(ψ^N-1)+1)ψ^Nnh_A(x). This concludes the proof. § ALMOST DML Another way to weaken the DML is in space, which means that we only ask it for almost all points in X(). In <cit.>, the author proved a result in this direction. In this result, “almost all points" is made to be precise using the adelic topology. Assume that is an algebraic closed field of characteristic 0 with finite transcendence degree over . In <cit.>, the author has introduced the adelic topology on X(). It has the following basic properties (cf. <cit.>): * It is stronger than the Zariski topology. * It is 𝖳_1, i.e., for every distinct points x, y ∈ X() there are adelic open subsets U, V of X() such that x ∈ U, y ∉ U and y∈ V, x ∉ V. * Morphisms between algebraic varieties over are continuous for the adelic topology. * Flat morphisms are open with respect to the adelic topology. * The irreducible components of X() in the Zariski topology are the irreducible components of X() in the adelic topology. * Let K be any subfield of which is finitely generated over ℚ and such that X is defined over K and K =. Then the action (/K)× X()→ X() is continuous with respect to the adelic topology. When X is irreducible, the property (<ref>) above implies that the intersection of finitely many nonempty adelic open subsets of X() is nonempty. 
So, if X≥ 1, the adelic topology is not Hausdorff. In general, the adelic topology is strictly stronger than the Zariski topology. An impotent example of adelic open subsets is as follows: Let L be a subfield of such that its algebraic closure L is equal to , L is finitely generated over , and X is defined over L, i.e., X=X_L⊗_L for some variety X_L over L. Fix any embedding τ L↪_p (resp. ). Then, given any open subset U of X_L(_p) for the p-adic (resp. Euclidean) topology, the union X_L(τ, U):= ∪_ιΦ_ι^-1(U) for all embeddings ι→_p extending τ is, by definition, an open subset of X() in the adelic topology. Moreover X_L(τ, U) is empty if and only if U=∅. As defined in <cit.>, we say that a property holds for an adelic general point, if it holds on a non-empty adelic open subset. The following result is <cit.>. [Almost DML] DML holds for an adelic general point in X(). This proof of Almost DML is based on the p-adic interpolation lemma (see Theorem <ref> for a slightly generalization and reformulation in the language of Berkovich space). This strategy backs to <cit.> and <cit.>. This Almost DML is find to be very useful in the recent studies of Zariski dense orbit conjecture and Kawaguchi Silverman conjecture, see <cit.>. § AUTOMORPHISMS ON -AFFINOID SPACES The p-adic method, in particular the p-adic interpolation Lemma plays important role in the study of the DML conjecture <cit.>. It also useful for studying other problems such as the Zariski dense orbit conjecture <cit.> and the group of birational self-maps <cit.>. In this section, we explain this method in the language of Berkovich spaces <cit.>. This method indeed works for any complete non-archimedean field of characteristic 0. However, as I known, all of its applications in the DML conjecture is to apply it over some p-adic field. For this reason, we continue to use the name “p-adic method". At the moment, there is no analogy of this method in positive characteristic. This is one of the difficulty for DML in positive characteristic. Denote by a complete valued field with a non-archimedean norm |· |. Denote by ^∘:={f∈| |f|≤ 1} the valuation ring and ^∘∘:={f∈| |f|< 1} its maximal ideal. Denote by :=^∘/^∘∘ the residue field. Let A be a -affinoid algebra. Let · be a norm on A and let ρ(·) be the spectral seminorm on A. We have g≥ρ(g) for all g∈ A. If A is reduced, these two norms are equivalent to each other. Let f:(A)→(A) be the endomorphism induced by an endomorphism f^*:A→ A. Then f induces an action of _≥ 0 on (A). The difference operator Δ_f:=f^*-𝕀 is a bounded linear operator on the Banach -space. Write Δ_f the operator norm with respect to ·. Denote by ρ(Δ_f) the spectral of the operator Δ. We note that for any norm · on A, we have Δ_f≥ρ(Δ_f). The following lemma shows that when ρ(Δ_f)<1, f is an automorphism. If ρ(Δ_f)<1 ( resp. Δ_f<1), then f is an automorphism. Moreover, we have ρ(Δ_f)=ρ(Δ_f^-1) ( resp. Δ_f=Δ_f^-1). [Proof of Lemma <ref>]We only prove it for ρ(·). The proof for · is similar. Denote by g^*:A→ A the operator defined by g^*:=∑_i=0^∞(-1)^iΔ_f^i. Since ρ(Δ_f)<1, the above series converges and ρ(g^*-𝕀)=ρ(Δ_f). We may check that g^*∘ f^*=f^*∘ g^*=𝕀. Then f is an automorphism and ρ(Δ_f^-1)=ρ(g^*-𝕀)=ρ(Δ_f). §.§ Interpolation of iterates In this section, assume that =0. Define R():=1 if k=0 and R():=p^-1/p-1 if =p>0. This constant was introduced by Poonen in <cit.>, to study the interpolation of iterates of analytic self-maps of the p-adic polydiscs. We note that R(k)∈ (0,1] and R(k)=1 if and only if k =0. 
It is easy to see that |i!|≥ R(k)^i for all i≥ 0. Denote by :=({T}) the unit disc with the group structure given by the addition. There exists a natural embedding _≥ 0⊆^∘⊆()⊆. The following theorem generalized <cit.>, <cit.> and <cit.>. Our proof is basically the same as the proof in <cit.>. If ρ(Δ_f)<R(), then there exists a unique action (,+) on (A) which extends the action of _≥ 0 on (A). We note that if Δ_f<R() for any norm · on A, then we have ρ(Δ_f)<R(). [Proof of Theorem <ref>] The uniqueness comes from the fact that _≥ 0⊆ is Zariski dense. We only need to prove the part of existence. For any h∈ A, we denote by G(T,h) the analytic function in {T} A by G(T,h):=∑_i≥ 0TiΔ_f^i(h)=∑_i≥ 0T(T-1)…(T-i+1)/i!Δ_f^i(h). It always converges. Indeed, we have |i!|≥ R()^i and Δ_f^i(h)≤Δ_f^ih. Pick R such that ρ(Δ_f)<R<R(). For i large enough, we have Δ_f^i<R^i. For i large enough, we have T(T-1)…(T-i+1)/i!Δ_f^i(h)≤ (R/R())^i. Then G(T,h) converges. We define a linear map Φ^*:A→{T} A by h↦ G(T,h). The above argument shows that Φ≤sup_i≥ 1Δ_f^i/i!. Then Φ is bounded. We note that for any n∈_≥ 0, we have G(n,h)=∑_i≥ 0niΔ_f^i(h)=(𝕀+Δ_f)^n(h)=(f^*)^n(h). Since _≥ 0⊆ is Zariski dense in and for any n∈_≥ 0, we have G(n,1)=1, and G(n,h_1h_2)=G(n,h_1)G(n,h_2) for h_1,h_2∈ A, we get G(T,1)=1, and G(T,h_1h_2)=G(T,h_1)G(T,h_2) for h_1,h_2∈ A. It implies that Φ^* is a morphism of -affinoid algebra. It defines a morphism Φ:×(A)→(A). We only need to show that Φ is an action of a group. Since _≥ 0⊆ is Zariski dense, by (<ref>), Φ is an action of a semigroup. Since Φ(0,·)=𝕀 and is a group, Φ is a group action. If ρ(Δ_f)<R() and f^l=𝕀 for some l≥ 1, then f=𝕀. [Proof of Corollary <ref>]Theorem <ref> shows that the action of f extends to an action of on (A). For any point t∈(), we denote by g_t the action of t. Since f^l=𝕀, we have g_nl=f^nl=𝕀 for all n≥ 0. Since l_≥ 0 is Zariski dense in , g_t=𝕀 for all t∈(). In particular, we get f=g_1=𝕀. Assume ρ(Δ_f)<R() and denote by Φ:×(A)→(A) the action of defined in Theorem <ref>. The natural morphism, {T}→[ϵ]/(ϵ^2) sending T→ϵ induces a morphism Φ^*[T]/(T^2):A→[ϵ]/(ϵ^2) A=A⊕ϵ A. It can be written explicitly by h↦ h+ϵθ_f(h) where θ_f:=log(𝕀+Δ_f)=∑_i≥ 1(-1)^i-1/iΔ_f^i. Since Φ^*[T]/(T^2) is a morphism of -affinoid algebra, θ_f is a vector field i.e. it satisfy the Leibniz's rule. Such a construction was used in <cit.> for the proof of a birational version of Zimmer conjecture. The following results shows that even when Theorem <ref> does not apply for f directly, it may apply for a suitable iterate of f. If Δ_f<1 (resp. ρ(Δ_f)< 1), then there exists N such that Δ_f^N<R() (resp. ρ(Δ_f^N)< R().) Moreover, if =0, we may choose N=1, if =p>0, we may choose N to be a power of p. [Proof of Proposition <ref>] We prove this lemma for Δ_f, the proof for ρ(Δ_f) is similar. If =0, this lemma is trivial. Now we assume that =p>0. Observe that Δ_f^p=(f^*)^p-𝕀=(𝕀+Δ_f)^p-𝕀=∑_i=1^p-1piΔ_f^i+Δ_f^p. Since p| pi for i=1,…,p-1, we have Δ_f^p≤max{ p^-1Δ_f,Δ_f^p}. Observe that if Δ_f≥ R(), we have p^-1Δ_f≤Δ_f^p, hence Δ_f^p≤Δ_f^p. Since Δ_f<1, there exists s≥ 1 such that Δ_f^p^s<R(). Then there exists t∈{1,…,s} such that Δ_f^p^t<R. Otherwise, we have Δ_f^p^s≤Δ_f^p^s-1^p≤…≤Δ_f^p^s<R() which is a contradiction. §.§ Analytic DML The following result can be think as an analytic version of DML for f closed to 𝕀. Assume that if of characteristic 0. Let X=(A) be a -affioid space. Let f: X→ X be an automorphism with ρ(Δ_f)<1. Let V be a Zariski closed subset of X. 
Then for x∈ X(), the set {n≥ 0| f^n(x)∈ V} is a finite union of arithmetic progressions. By Proposition <ref>, after replacing f by a suitable iterate, we may assume that ρ(Δ_f)<R(). By Theorem <ref>, there is a morphism Φ: → X satisfying Φ(n)=f^n(x) for every n∈_≥ 0⊆(). Assume that {n≥ 0| f^n(x)∈ V} is infinite. Since {n≥ 0| f^n(x)∈ V}⊆Φ^-1(V) and {n≥ 0| f^n(x)∈ V} is Zariski closed in , then Φ^-1(V)=. It follows that {n≥ 0| f^n(x)∈ V}=_≥ 0. This concludes the proof. The above proof comes from a step of the proof of <cit.>. In the proof of <cit.>, Bell, Ghioca and Tucker basically proved the case where X is the p-adic polydisk. Our proof is just a reformulation of theirs in the language of Berkovich space. It is easy to proof <cit.>(=(1) of Theorem <ref>) from Theorem <ref>. Indeed, for a suitable identification ≃_p for some prime p≫ 0, after replacing f by a suitable positive iterate, we may find an f-invariant polydisc U containing x such that ρ(Δ_f|_U)<1. Then we conclude the proof by Theorem <ref>. In <cit.>, Benedetto, Ghioca, Kurlberg and Tucker gave an example which shows that DML does not hold for the map f:^2→^2 over _p defined by (x,y)↦ (x+1, py). As the above map is not an automorphism. We ask the following question. [Analytic DML] Assume that is of characteristic 0. Let X=(A) be a -affioid space. Let f: X→ X be an automorphism. Does DML hold for f? If is contained in the algebraic closure of a finite field, easy to show that the above question has a positive answer. In this case, after replacing f by a suitable positive iterate, we may assume that the reduction x of x is fixed by the reduction f of f. Not hard to show that there is an affinoid subdomain Y of X containing x whose image under the reduction map is x̃ and it is f-invaraint. One can show that ρ(Δ_f|_Y^N)<1 for a suitable N≥ 1. Then we conclude the proof by Theorem <ref>. When is of positive characteristic, the p-adic interpolation lemma (=Theorem <ref>) does not hold. But it is also interesting to ask an analytic version of p-DML (c.f. <cit.>). Assume that is of characteristic p>0. Let X=(A) be a -affioid space. Let f: X→ X be an automorphism (with ρ(Δ_f)<1). Does p-DML hold for f, i.e. up to a finite set, {n≥ 0| f^n(x)∈ V} is a finite union of arithmetic progressions along with finitely many sets taking form {∑_i=1^mc_ip^l_in_i| n_i∈_≥ 0, i=1,…,m}, where m∈_>1, k_i∈_≥ 0, c_i∈? One may expect that some consequences of the p-adic interpolation lemma (=Theorem <ref>) hold in positive characteristic. An example is <cit.>. Here we state it in a more general form. Assume that is algebraically closed and of characteristic 0. Let f: X→ X be an automorphism with ρ(Δ_f)<1. If f is not of finite order, then there is a non-empty open subset U of X such that for every -point in U, the orbit of x is infinite. Conjecture <ref> holds in the characteristic 0 case by Theorem <ref>. This conjecture can be viewed as an analogy of <cit.>. If it has positive answer, it should be helpful for the Zariski dense orbit conjecture. § QUESTIONS In this sections, we give some questions related to the DML problems. §.§ Special cases of DML It seem that the DML conjecture is difficult in general. So it is natural to study it in some special cases. Here we list some special cases which I feel in particular interesting. DML is open even for endomorphims of smooth projective surfaces. 
Among these cases, we feel that the following two are significant: (1) DML for endomorphisms f:^2→^2 over of degree at least 2; (2) DML for endomorphisms f_1× f_2 of ^1×^1 over, where f_1,f_2 are endomorphisms of ^1 of the same degree d≥ 2. In (2), if f_1 and f_2 have different degrees, it is not hard to show that DML holds using an argument in <cit.>. In <cit.> and Theorem <ref>, the author proved the DML conjecture for endomorphisms of ^2_. We suspect that the techniques in this proof can be generalized to study more general endomorphisms of affine varieties. The following two cases are particularly interesting: (3) DML for endomorphisms of ^l_, l≥ 3; (4) DML for endomorphisms of affine surfaces over . Note that the dimension l case of (3) implies the order l case of Conjecture <ref>. Few cases of DML are known for rational maps. Two interesting open cases are as follows: (5) DML for birational self-maps of ^2_; (6) DML for skew-linear maps on ^1×^l of the form (x,y)↦ (g(x), A(x)y), where g is an automorphism of ^1 and A(x) is a matrix in M_l× l(k(x)). Recall that (6) is known when deg g≥ 2 <cit.>. Combined with <cit.>, case (6) implies Conjecture <ref>. In the DML conjecture, the point x is a closed point. It is natural to ask about (7) DML for non-closed points, i.e., points x∈ X with dim{x}≥ 1. It is easy to see that the non-closed point case (7) is implied by the original DML conjecture. For DML in positive characteristic, recall that Yang proved the following significant result: <cit.> Let be a complete non-archimedean valuation field with =p>0. Let ^∘ be its valuation ring and ^∘∘ be the maximal ideal. Assume that f:^N_→^N_ is totally inseparable and is a lift of Frobenius, i.e., it takes the following form: f: [x_0:… :x_N]↦ [x_0^q+g_0(x_0^p,…,x_N^p):… :x_N^q+g_N(x_0^p,…,x_N^p)] where q is a power of p and the g_i are homogeneous polynomials of degree q/p in ^∘∘[x_0,…,x_N]. Then DML holds for f. In this result, f is required to satisfy two conditions: * f is totally inseparable, i.e., the morphism f^*Ω_^N_→Ω_^N_ is the zero map; * f is a lift of Frobenius, i.e., the reduction f̃: ^N_→^N_ of f is the q-Frobenius map. Note that the second condition still makes sense when is of mixed characteristic. It is interesting to ask whether DML holds when only one of the above conditions holds. Does DML hold in the following cases: (8) the case where f is totally inseparable; (9) the case where f is a lift of Frobenius? In <cit.>, the author proved several results for lifts of Frobenius, including DML for backward orbits <cit.>. <cit.> is stated in the mixed characteristic case, but its proof also works in the pure positive characteristic case. §.§ DML for other maps The DML conjecture concerns rational self-maps of algebraic varieties. It is interesting to ask the same question for other maps. Does DML hold for meromorphic self-maps of compact Kähler manifolds? This is challenging even for automorphisms. The case of automorphisms of projective manifolds is known <cit.>, but the proof in <cit.> relies on the p-adic method, which seems hard to apply in the Kähler setting. One may ask a similar question for proper meromorphic self-maps of proper Berkovich spaces. Does DML hold for meromorphic self-maps of proper Berkovich spaces? For the non-proper case, we recall the following two questions asked in Section <ref>. [=Question <ref>] Assume that is of characteristic 0. Let X=(A) be a -affinoid space. Let f: X→ X be an automorphism. Does DML hold for f? [=Question <ref>] Assume that is of characteristic p>0. Let X=(A) be a -affinoid space.
Let f: X→ X be an automorphism (with ρ(Δ_f)<1). Does p-DML hold for f, i.e. up to a finite set, {n≥ 0| f^n(x)∈ V} is a finite union of arithmetic progressions along with finitely many sets taking form {∑_i=1^mc_ip^l_in_i| n_i∈_≥ 0, i=1,…,m}, where m∈_>1, k_i∈_≥ 0, c_i∈? Define piecewise algebraic self-maps as follows: Let X=⊔_j∈ J X_j be a decomposition of X into finitely many constructible subsets. Let f_j be a morphism X_j→ X We call such f: X→ X a piecewise algebraic self-map on X. It is clear that f is continuous for the constructible topology. It is clear that every endomorphism of X is a piecewise algebraic self-maps. This class of maps seems quite interesting. They are algebraic in natural, but not exactly algebraic. Here are some examples of piecewise algebraic self-maps not coming from algebraic maps. Let X=M_N× N≃^N^2 be the space of N× N-matrix. Every matrix A has a unique decomposition A=D(A)+N(A) where D(A) is diagonalizable N(A) and nilpotent. There is a unique piecewise algebraic self-maps f: X→ X such that f(A)=D(A). Since f≠𝕀 and it is the identity on a Zariski open subset of X, this map is not coming from rational self-maps. Let X:=^2. Let f be the unique piecewise algebraic self-maps such that f|_{x=2y}: (x,y)↦ (y, x-y) and f|_X∖{x=2y}: (x,y)↦ (y, x+y). This map associates to the piecewise linear recurrence sequences A_n, n≥ 0 such that A_n+2=A_n+A_n+1 if 2A_n≠ A_n-1; and A_n+2=A_n-A_n+1 if 2A_n= A_n-1. Does DML hold for piecewise algebraic self-maps over ? §.§ Other questions The DML conjecture is not a full generalization of the original Mordell-Lang conjecture. In particular, it considers only the forward orbit but not the backward orbit. In an informal seminar, S-W Zhang asked the following question to the author. (=<cit.>) Let X be a quasi-projective variety over and F:X→ X be a finite endomorphism. Let x be a point in X(). Denote by O^-(x):=∪^∞_i=0F^-i(x) the backward orbit of x. Let V be a positively dimensional irreducible subvariety of X. If V∩ O^-(x) is Zariski dense in V, what can we say about V? We note that if V is preperiodic, then V∩ O^-(x) is Zariski dense in V. As the dynamical Manin-Mumford conjecture, the converse is not true. We have counterexamples even when F is a polarized[An endomorphism F:X→ X on a projective variety is said to be polarized if there exists an ample line bundle L on X satisfying F^*L=L^⊗ d, d≥ 2.] endomorphism. The following example is given by Ghioca, which is similar to <cit.>. (=<cit.>) Let E be the elliptic curve over defined by the lattice [i]⊆. Let F_1 be the endomorphism on E defined by the multiplication by 10 and F_2 be the endomorphism on E defined by the multiplication by 6+8i. Set X:=E× E, F:=(F_1,F_2) on X. Since |10|=|6+8i|, F is a polarized endomorphism on X. Let V be the diagonal in X and x be the origin. We may check that V∩ O^-(x) is Zariski dense in V, but V is not preperiodic. As a special case of Question <ref>, the author proposed the following conjecture in <cit.>. (=DML for coherence backward orbits) Let X be a quasi-projective variety over and F:X→ X be a finite endomorphism. Let {b_i}_i≥ 0 be a sequence of points in X() satisfying f(b_i)=b_i-1 for all i≥ 1. Let V be a positively dimensional irreducible subvariety of X. If the {b_i}_i≥ 0∩ V is Zariski dense in V, then V is periodic under F. This conjecture is true for lifts of Frobenius <cit.>. 
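As a quick exact check consistent with this conjecture (the choice of X, F, V and of the backward orbit below is ours, purely for illustration): on X = A^2 take the finite endomorphism F(x,y)=(x^2,y^2), the diagonal V={x=y}, and the coherent backward orbit b_i=(z_i,z_i) with z_i a primitive 2^i-th root of unity, so that F(b_i)=b_{i-1}. The b_i are pairwise distinct points of the curve V, hence Zariski dense in it, and V is indeed F-invariant, as the conjecture predicts. Roots of unity are represented exactly by their exponents in Q/Z, so the verification uses only rational arithmetic.

```python
# Toy check (our own data) of a coherent backward orbit for F(x,y) = (x^2, y^2):
# b_i = (z_i, z_i), z_i = exp(2*pi*i / 2^i), tracked by exponents in Q/Z.
from fractions import Fraction

def exponent(i):                    # z_i corresponds to the exponent 1/2^i mod 1
    return Fraction(1, 2**i) % 1

def square(e):                      # squaring a root of unity doubles the exponent mod 1
    return (2 * e) % 1

N = 12
exps = [exponent(i) for i in range(N + 1)]

# coherence: F(b_i) = b_{i-1}, i.e. squaring z_i gives z_{i-1}
assert all(square(exps[i]) == exps[i - 1] for i in range(1, N + 1))
# the points are pairwise distinct, hence Zariski dense in the 1-dimensional V
assert len(set(exps)) == N + 1
print("coherent backward orbit of length", N + 1, "inside the F-invariant diagonal")
```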
Following the general principle of the “unlikely intersection problem” <cit.>, we propose the following question: Let X be a quasi-projective variety over , and let f: X→ X be an endomorphism. Let Z,V be irreducible subvarieties of X with dim Z+dim V<dim X. * (i) Can we describe the set {n≥ 0| f^n(Z)∩ V≠∅}? For example, is it a finite union of arithmetic progressions? * (ii) If (∩_n≥ 0f^n(Z))∩ V is Zariski dense, can we describe V? One may also ask the same question for rational self-maps. Part (i) can be thought of as a higher-dimensional generalization of the original statement of DML, and part (ii) as a higher-dimensional generalization of the geometric form of DML. Question <ref> seems quite difficult. It would also be interesting to have a weak version of (i), in the spirit of the weak DML.
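As a toy experiment around part (i) of this question (the map f, the curve Z, the point V and the range of n below are all our own hypothetical choices), the following sketch lists {0≤ n≤ N | f^n(Z)∩ V≠∅} for the dominant endomorphism f(x,y)=(x^2,xy) of the affine plane, a parametrised line Z and a point V, so that dim Z+dim V<dim X. Membership is detected over the algebraic closure via a univariate polynomial gcd.

```python
import sympy as sp

t = sp.symbols('t')
V = (16, 8)                               # our hypothetical target point

def fn_on_Z(n):
    """Components of f^n(t, 1) as polynomials in t, for f(x, y) = (x^2, x*y)."""
    u, v = t, sp.Integer(1)               # parametrisation of Z = {y = 1}
    for _ in range(n):
        u, v = sp.expand(u**2), sp.expand(u * v)
    return u, v

def meets_V(n):
    u, v = fn_on_Z(n)
    g = sp.gcd(u - V[0], v - V[1])        # nonconstant gcd <=> common root over the algebraic closure
    return sp.degree(g, t) > 0

print([n for n in range(9) if meets_V(n)])    # -> [2] for this choice of data
```

For this particular choice of data the set comes out as {2}; of course no general conclusion is drawn from such experiments.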
http://arxiv.org/abs/2307.04444v1
20230710095313
Weak gravitational lensing by an ESTGB black hole in the presence of a plasma
[ "Qian Li", "Yu Zhang", "Zhi-Wen Lin", "Qi-Quan Li", "Qi Sun" ]
gr-qc
[ "gr-qc" ]
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China. [email protected] (Corresponding author) Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China. Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China. Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China. Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China. This paper is devoted to studying the weak-field gravitational lensing properties of a 4D ESTGB black hole, which is surrounded by the plasma medium. The effects of the magnetic charges and the three plasma distribution models in the deflection of light around a 4D ESTGB black hole are investigated in detail. We find that the uniform plasma leads to a larger deflection of light rays in comparison with the singular isothermal sphere (SIS), the non-singular isothermal sphere (NSIS) models. Moreover, the deflection angle increases slightly as the absolute value of the magnetic charge decreases. Finally, we analyze the total magnification of image due to weak gravitational lensing around the black hole. The result shows that the presence of a uniform plasma medium remarkably enhances the total magnification whereas the non-uniform plasma reduces the total magnification. Weak gravitational lensing by an ESTGB black hole in the presence of a plasma Qi Sun ============================================================================== Keywords: Black hole, Weak graviatational lensing, Plasma PACS numbers: 04.70.Dy, 04.50.Kd, 03.65.Xp § INTRODUCTION As one of Einstein's general relativity predictions, black holes are the most mysterious objects in the present universe. Because the light ray is unable to escape the event horizon, which is a one-way causal boundary, black holes are not visible objects, and their existence can only be proven indirectly. However, with the development of related astronomical technology, the EHT cooperation organization <cit.> published the shadow of a supermassive black hole in 2019. This may be another powerful evidence of the existence of black holes after LIGO-Vigro detected the gravitational wave signals generated by the merger of binary black holes <cit.>. In addition to the standard general relativity, many modified gravity theories are proposed due to fundamental general relativity may not hold in high- or low- curvature regimes, such as the extended scalar-tensor-Gauss-Bonnet (ESTGB) theory <cit.>. It is given through the coupling of the Gauss-Bonnet invariant with a scalar field owing to avoidance of Ostrogradski instability, which is a special and interesting extension. This modified theory is a natural modification of general relativity and extension of the standard scalar-tensor theory. The Doneva and Yazadjiev indicated that below a certain critical mass, the Schwarzschild spacetime becomes unstable in ESTGB gravity <cit.>. The ESTGB theory can explain the phenomenon of the present stage of cosmic acceleration in cosmology <cit.>. Shortly thereafter, Cañate and Perez Bergliaffa <cit.> proposed the first exact magnetic black hole solution based on the extended scalar-tensor-Gauss-Bonnet theory (ESTGB) with a special type of nonlinear electrodynamics. The ESTGB black hole solution is characterized by the Arnowitt-Deser-Misner (ADM) mass and magnetic charge. 
When m>0 and q<0, the black hole solution is similar to the Reissner-Nordström black hole solution. The gray-body factor and absorption cross section of the massless Dirac field for this black hole were studied in Ref.<cit.>. Ma et al. <cit.> investigated the quasinormal modes and absorption cross section of the massless scalar field for this black hole. Besides, the thermodynamical properties for this black hole under the generalized uncertainty principle (GUP) have been studied in Ref.<cit.>. Because the spacetime around compact massive objects is curved, one of the remarkable characteristics of general relativity is light deflection and the lens effect. The phenomenon of light deflection and lens effect is called gravitational lensing. One of the three well-known verification experiments for general relativity involves light deflection. Therefore, gravitational lensing is used as a special tool to verify whether the general relativity theories are correct and to probe properties of matter surrounding black hole. Besides, one can obtain some feature information of the gravitational object by the gravitational lensing. It is extremely important that the difference between different black hole lenses can be obtained by the gravitational lensing effect <cit.>. So gravitational lensing still is the very active research area in the weak and strong field limits. The weak deflection angle of Schwarzschild spacetime in vacuum can be expressed by in form α̂=2R_s/b where R_s =2M and b is the impact parameter. Virbhadra et al. studied the strong gravitational lensing in the context of Schwarzschild black hole <cit.>. The variation of the tangential, radial, and total magnification of the images with respect to the angular source position is investigated by simulating the supermassive black holes M87* as a Schwarzschild lens <cit.>. Sereno <cit.> obtained the time delay and deflection angle expressions of the Reissner-Nordström black holes under the weak field approximation. In addition, many attempts have been made on the weak deflection angle of the different modified gravity theories by using different methods <cit.>. Generally, the angle of deflection or the relevant optical scalar can be expressed in the form of derivatives of the different components of the black hole metric. In strong gravity field, the study of gravitational lensing is a trending topic. There have been a number of articles examining the gravitational lensing in the strong field <cit.>. On the other hand, it is believed that compact astrophysical objects are immersed in a complicated environment, such as plasma. In this paper, we only focus on the plasma environment. Plasma is a dispersive medium whose refractive index relies on the frequency of photons. The plasma around compact astrophysical objects affects the trajectories of the light ray since it may interact with electromagnetic waves. Synge <cit.> firstly proposed the self-consistent approach to the propagation of light rays in the gravitational field in the context of plasma medium. Forty years later, Perlick <cit.> proposed a different type of the method to obtain the integral expression of the deflection angle as the plasma surrounds the Schwarzschild and Kerr black holes. Later, Bisnovatyi-Kogan and Tsupko <cit.> found that the deflection angle relies on the photon frequency in the uniform dispersive medium. The phenomenon has qualitatively different from the vacuum environment. 
The authors <cit.> also considered the case that the gravitational object is surrounded by the inhomogeneities of plasma and obtained the expression for the deflection angle of the different plasma models. Schee <cit.> et al. studied the gravitational lensing about the regular black hole immersed in plasma. The weak deflection angle of the wormhole solution described by exponential metric was obtained in Ref.<cit.>. The influences of uniform plasma on the the shadow and weak deflection angle for a rotating and regular black hole in a non-minimally coupled Einstein-Yang-Mills (EYM) theory have been studied <cit.>. Zhang et at. <cit.> studied the influences of the plasma with the power-law distribution and logarithmic normal distribution on the shadow of the Kerr black hole. In addition, Atamurotov and his coworkers were devoted to studying the weak gravitational lensing effect in plasma for various kinds of spacetimes such as the Lorentzian wormhole spacetime <cit.>, Schwarzschild-MOG black hole <cit.>, 4D Einstein-Gauss-Bonnet gravity <cit.>, rotating Einstein-Born-Infeld black hole <cit.>. In this study, we focus on the exact expression of the deflection angle for the (3+1)-dimensional ESTGB black hole assuming that the black hole is immersed in a plasma medium. And as an application, we will study the magnification of image in the weak field. The structure of this paper is as follows. Section <ref> presents a brief review of the process of obtaining the deflection angle under the weak-field approximation and calculating the deflection angle for the 4-dimensional ESTGB black hole, which is surrounded by three different plasma density distributions. In Section <ref>, as a type of application, we study the magnification of image for three different plasma density distributions, i.e., uniform plasma, SIS and NSIS medium. Finally, we give our concluding remarks in Section <ref>. Throughout, our choice of a spacetime signature is {-,+,+,+} and natural units c = G = ħ = 1. Latin indices run from 1 to 3 as well as Greek denotes from 0 to 3. § WEAK-FIELD LENSING IN THE PRESENCE OF PLASMA In this section, we will study optical properties, namely, gravitational lensing which is in the context of a 4D ESTGB black hole encompassed by the plasma medium under the weak-field approximation. The 4D ESTGB gravity with an extra matter field, namely a model of non-linear electrodynamics (NLED), has the following action <cit.> S = ∫ d^4x √(-g){1/4π(1/4(R - 1/2∂_μϕ∂^μϕ + f(ϕ) R__GB^2 -2 U (ϕ))-ℒ_ matter) }. Here the first term is the Einstein-Hilbert Lagrangian density, which is defined by the Ricci scalar R, the kinetic term of the scalar field 1/2∂_μϕ∂^μϕ, the non-minimal coupling between the Gauss-Bonnet invariant R__GB^2 and scalar field f(ϕ), i.e., f(ϕ) R__GB^2, and the scalar field potential U (ϕ). The Lagrangian density ℒ_ matter denotes any matter field in the action. Concretely, the Gauss-Bonnet invariant satisfies the form R__GB^2=R_αβμν^αβμν - 4 R_αβR^αβ+R ^2. The function f(ϕ) and the scalar field potential U (ϕ) can be expressed as f=-ℓ^2σ/32{√(2σ)tan^^-1( √(2)/√(σ).06cm ϕ) +1/2ϕln[( 2β/σϕ^2+β)^2] - 2/ϕ}, 𝒰(ϕ)=2^^9/2/105ℓ^2σ^7/2[π/2-tan^^-1(√(2)/√(σ).06cm ϕ)]ϕ^5/4ℓ^2(3/10σ+5ϕ^2/7+7σϕ^4/24) ln[( 2β/σϕ^2+β)^2] -ϕ/3ℓ^2(16/35σ^3-8ϕ^2/105σ^2+31ϕ^4/70σ+11ϕ^6/28). The NLED Lagrangian term that reduces to Maxwell's electrodynamics in the weak field regime has the following form ℒ_𝒩ℒℰ𝒟=ℱ/8-s^^1/2( 1 +37/210σ_∗+2/525σ_∗)ℱ^^5/4 - σ_∗ s ℱ^^3/2/16 +𝒪(ℱ^^7/4), with the electromagnetic invariant ℱ=q^2/r^4. 
And the above parameters have the relations σ=σ_*, l=s=q, β=β_* and ϕ(r)=q/r. The metric describing the 4D ESTGB black hole can be written as ds^2=-f(r)dt^2+f^-1(r)dr^2+r^2dθ^2+r^2 sin^2θ dϕ^2, with f(r)=1-R_s/r-q^3/r^3, where R_s=2M, M is ADM mass and q is magnetic charge. Since the weak energy condition (WEC) should be satisfied by both the corresponding effective energy-momentum tensor and that of nonlinear electrodynamics, the value of q<0 is permitted. Without losing generality, we consider the case that is a non-extreme black hole. This means that the value of the magnetic charge is limited to this range -2^5/3/3 < q < 0 when M is set to 1. We know that photons will follow the null geodesics of the effective spacetime metric in the presence of NLED instead of the original spacetime metric. However, we need to state that the metric describing the 4D ESTGB-NLED spacetime is obtained in the weak field where the NLED reduces to Maxwell's theory (see Ref. <cit.> for more detail). Therefore, photons still follow the null geodesics of the original spacetime metric in the weak field. Now, a general approach <cit.> is introduced to derive the deflection angle in the uniform or non-uniform plasma. We have the metric coefficients under the weak field approximation, which are given by g_αβ=η_αβ + h_αβ, where η_αβ is the Minkowski metric, i.e., (-1,1,1,1), h_αβ is perturbation metric. Note that h_αβ≪ 1, h_αβ→ 0 where x^α→∞, g^αβ=η^αβ-h^αβ, h^αβ=h_αβ. The refractive index of the static inhomogeneous plasma that relies on the photon frequency ω(x^i) and space location x^α has the following form n^2=1-ω^2_e/ω^2(x^i),   ω^2_e=4π e^2 N(r)/m=K_e N(r), where ω_e is the electron plasma frequency, N(r) is the electron density in the inhomogeneous plasma, e and m denote the charge and mass of the electron, respectively. It is worth noting that when ω_e< ω the electromagnetic waves can propagate in the such plasma. That is to say, the plasma medium has a reflective medium effect when ω_e< ω where ω(∞)≡ω. Considering the effect of the plasma on the deflection angle in the weak field limit, we get the expression of deflection angle in the following form α̂_k=1/2∫_-∞^∞(h_33,k+ h_00,k/1-ω^2_e/ω^2-K_eN_,k/ω^2-ω_e^2)dz, for k=1,2. The deflection angle with the impact parameter b found in Ref.<cit.> for more detail, can be written as α̂_k=1/2∫_-∞^∞b/r×(dh_33/dr+1/1-ω^2_e/ω^2dh_00/dr-K_e/ω^2-ω_e^2dN/dr)dz. The location of the photon is presented by b and z under the axially symmetric case, and then the magnitude of the radius-vector is written as r=√(b^2+z^2) <cit.>. It is worth noting that the negative value of α̂_b indicates the bending of the photon trajectory towards the compact object, and the positive value indicates the opposite. In the weak gravitational field regime, we can rewrite the metric around the 4D ESTGB black hole as ds^2=ds_0^2+(R_s/r+q^3/r^3)(dt^2+dr^2), where ds^2_0 is the flat part of metric, and it has the following form ds^2_0=-dt^2+dr^2+r^2(dθ^2+sin^2θ dϕ^2). The components h_αβ can be expressed in the Cartesian frame as h_00=R_s/r+q^3/r^3, h_ik=h_00n_in_k, h_33=h_00cos^2χ, where cosχ=z/√(b^2+z^2) and r=√(b^2+z^2). By substituting Eq.(<ref>) into Eq.(<ref>), we have the concrete form of the deflection angle in the following expression <cit.> α̂_b=∫_-∞^∞b/2r(∂_r((R_s/r+q^3/r^3)cos^2χ) +∂_r(R_s/r+q^3/r^3)1/1-ω^2_e/ω^2-K_e/ω^2-ω^2_e∂_rN)dz. 
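Before specializing to particular plasma profiles, the integral above can be evaluated numerically for any given plasma frequency profile ω_e^2(r). The short sketch below is ours, not part of the paper (it requires numpy and scipy, and the helper names are ours): it integrates along the unperturbed ray r=√(b^2+z^2) and, as a sanity check, reproduces the vacuum Schwarzschild value 2R_s/b when the charge and the plasma are switched off; the values R_s=2, b=3, q=-0.5 are the ones used in the figures later on.

```python
import numpy as np
from scipy.integrate import quad

def deflection(b, R_s, q, omega2, we2=lambda r: 0.0, dwe2_dr=lambda r: 0.0):
    """|alpha_b| along the unperturbed ray, for a plasma profile we2(r) = omega_e^2(r)."""
    def integrand(z):
        r = np.sqrt(b**2 + z**2)
        grav = (-3.0 * R_s / r**4 - 5.0 * q**3 / r**6) * z**2        # d/dr[(R_s/r^3 + q^3/r^5)] * z^2
        dressed = (-R_s / r**2 - 3.0 * q**3 / r**4) / (1.0 - we2(r) / omega2)
        gradient = -dwe2_dr(r) / (omega2 - we2(r))                   # -K_e (dN/dr) / (w^2 - w_e^2)
        return b / (2.0 * r) * (grav + dressed + gradient)
    return abs(quad(integrand, -np.inf, np.inf)[0])

R_s, b = 2.0, 3.0
print(deflection(b, R_s, q=0.0, omega2=1.0))                         # vacuum, q = 0: -> 2*R_s/b = 4/3
print(deflection(b, R_s, q=-0.5, omega2=1.0, we2=lambda r: 0.5))     # uniform plasma, omega_0^2/omega^2 = 0.5
```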
In what follows, we will calculate the integrals about the deflection angle considering the three specific plasma distributions, viz., uniform plasma, singular isothermal sphere (SIS), and non-singular isothermal sphere (NSIS) medium. §.§ Uniform plasma In the subsection, we will calculate the deflection angle using Eq.(<ref>) for the photon propagating in the 4D ESTGB spacetime surrounded by uniform plasma, which can be expressed as α̂_uni=α̂_uni1+α̂_uni2+α̂_uni3. The first term is the influence of the gravitational field of the ESTGB black hole α̂_uni1=∫_-∞^∞b/2r∂_r(R_s/r^3+q^3/r^5)z^2dz = -R_s/b-2q^3/3b^3. Note that when q=0 the spacetime will recover to the Schwarzschild spacetime, and we will obtain α̂_uni1=R_s/b. The second term includes the influence of the gravitational field and plasma medium, which can be written as α̂_uni2=∫_-∞^∞b/2r∂_r(R_s/r+q^3/r^3)1/1-ω^2_e/ω^2dz =-(R_s/b+q^3/b^3)1/1-ω^2_e/ω^2. Because the last term is the influence of the inhomogeneity of plasma, we get ∂_rN=0 for uniform plasma. In the relevant literature about weak gravitational lensing, the deflection angle is usually defined as a positive one <cit.>. Thus, we have the following expression about the uniform plasma α̂_uni=R_s/b+2q^3/3b^3+(R_s/b+2q^3/b^3)1/1-ω^2_0/ω^2, where ω_0=ω_e(∞). In Fig.<ref>, we plot the deflection angle α̂_b with respect to the impact parameter b for different values of magnetic charge q at ω_0^2/ω^2=0.5, and plasma medium parameter at q=-0.5. The deflection angle diminishes with an increase in the impact parameter b. As can be seen from Fig.<ref>, when b≫ R_s, we can neglect the effect of the magnetic charge on the deflection angle. In addition, it is easy to see from Eq.(<ref>), the deflection angle is very small or even disappear when the impact parameter b is large. Fig.<ref> demonstrates the dependence of the deflection angle from the uniform plasma parameter and magnetic charge at b=3. We can see in the left figure that the deflection angle increases rapidly when ω_0^2/ω^2 increases to 1. As the absolute value of magnetic charge decreases, the deflection angle slightly increases. §.§ Singular isothermal sphere In the subsection, we consider the case of an SIS around the 4D ESTGB black hole. The SIS is primarily introduced in Refs.<cit.> and <cit.> to study the lens systems of the galaxies and clusters of galaxies. The density distribution of the SIS is written as ρ(r)=σ_v^2/2 π r^2, where v is the one-dimensional velocity dispersion. We can obtain the plasma concentration by making use of Eq.(<ref>) and the following relation N(r)=ρ(r)/κ m_p, in which κ is a coefficient which is related to the contribution of dark matter, called by 1D coefficient, and m is the mass of proton. The plasma frequency has the expression ω^2_e=K_eN(r)=K_eσ^2_v/2πκ m_pr^-2. Using Eq.(<ref>), we can calculate the deflection angle for an SIS. Due to the fact that the first term is the effect of the gravitational field, it has the same expression as Eq.(<ref>) α̂_sis1=α̂_uni1. For the other terms, we calculate the integrals and obtain the following results α̂_sis2 =∫_-∞^∞b/2r∂_r(R_s/r+q^3/r^3)(1+ω^2_e/ω^2)dz =-((R_s/b+2q^3/b^3)+(2R_s/3π b+8q^3/5π b^3)ω^2_cR_s^2/ω^2 b^2), α̂_sis3=-K_eb/2ω^2∫_-∞^∞1/rdN(r)/drdz=ω_c^2R_s^2/2ω^2b^2, where ω_c^2 is defined as <cit.> ω_c^2=K_eσ^2/2κ m_p R^2_s. We obtain the deflection angle about the SIS, which can be written as α̂_sis=((2R_s/b+8q^3/3b^3)+(-1/2+2R_s/3π b+8q^3/5π b^3)ω_c^2R_s^2/ω^2b^2). 
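For reference, the two closed forms obtained so far are easy to evaluate; the following snippet (ours; function names are our choices, and the NSIS case is omitted here) prints them at the parameter values used in the figures (R_s=2, b=3, q=-0.5, ω_0^2/ω^2=ω_c^2/ω^2=0.5). The output agrees with the direct numerical integration sketched after the general formula and already shows the ordering discussed below: the uniform plasma deflects light more strongly than the SIS distribution, and both reduce to the vacuum value 2R_s/b when the charge and the plasma are switched off.

```python
from math import pi

def alpha_uniform(b, R_s, q, w):              # w = omega_0^2 / omega^2
    return R_s / b + 2 * q**3 / (3 * b**3) + (R_s / b + 2 * q**3 / b**3) / (1 - w)

def alpha_sis(b, R_s, q, w):                  # w = omega_c^2 / omega^2
    return (2 * R_s / b + 8 * q**3 / (3 * b**3)
            + (-0.5 + 2 * R_s / (3 * pi * b) + 8 * q**3 / (5 * pi * b**3))
              * w * R_s**2 / b**2)

print(alpha_uniform(3.0, 2.0, -0.5, 0.5))     # uniform plasma
print(alpha_sis(3.0, 2.0, -0.5, 0.5))         # SIS: noticeably smaller
print(alpha_uniform(3.0, 2.0, 0.0, 0.0))      # vacuum limit: 2*R_s/b = 4/3
print(alpha_sis(3.0, 2.0, 0.0, 0.0))          # same vacuum limit
```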
To simulate the effect of SIS on the trajectory of light, we demonstrate the deflection angle α̂ versus the impact parameter b for different values of magnetic charge when ω_c^2/ω^2 is set to 0.5, and the SIS parameter for fixed q=-0.5 in Fig.<ref>. It's not hard to get that when we increase the impact parameter the deflection angle decreases. Fig.<ref> is the visualization of deflection angle to SIS parameter and magnetic charge, respectively. It is straightforward to show that the deflection angle diminishes when ω_c^2/ω^2 increases (left figure), however, when the absolute value of magnetic charge decreases the deflection angle increases (right figure). This means that the existence of a SIS around the black hole reduces the deflection angle in comparison to the vacuum or uniform cases. §.§ Non-singular isothermal sphere In the subsection, we aim to give the exact expression of the deflection angle of the ESTGB black hole in the presence of the NSIS. The plasma distribution can be expressed as <cit.> ρ(r)=σ^2_v/2π(r^2+r_c^2), where r_c is the core radius, and the concentration becomes N(r)=σ^2/2πκ m_p(r^2+r_c^2). The corresponding plasma frequency has the following form ω_e^2=K_eσ^2_v/2πκ m_p(r^2+r_c^2). Similarly to the last subsection, the first term remains unchanged, and other terms of Eq.(<ref>) will have the expressions α̂_nsis2 =∫_-∞^∞b/2r∂_r(R_s/r+q^3/r^3)(1+ω^2_e/ω^2)dz =-(R_s/b+q^3/b^3)-(R_s/bπ r_c^2+b R_sarctan(r_c/√(b^2+r_c^2))/π r^3_c√(b^2+r_c^2)) ×ω_c^2R^2_s/ω^2-(-1/b^2 r_c^4 +2/3 b^4 r_c^2 +arctan(r_c/√(b^2+r_c^2))/r_c^5√(b^2+r_c^2))×3 q^3 b R_s^2ω_c^2/ω^2π, α̂_nsis3=-K_eb/2ω^2∫_-∞^∞1/rdN(r)/drdz=b/2(b^2+r_c^2)^3/2ω_c^2R^2_s/ω^2, where ω_c^2=K_eσ^2_v/2κ m_p R^2_s. One can obtain the following form of the deflection angle by summing all the integrals α̂_nsis=(2R_s/b+8q^3/3b^3)+(R_s/bπ r_c^2-b/2(b^2+r_c^2)^3/2+b R_sarctan(r_c/√(b^2+r_c^2))/π r^3_c√(b^2+r_c^2))ω_c^2R^2_s/ω^2       +(-1/b^2 r_c^4 +2/3 b^4 r_c^2arctan(r_c/√(b^2+r_c^2))/r_c^5√(b^2+r_c^2))3 q^3 b R_s^2ω_c^2/ω^2π. The variation of the deflection angle α̂_b with the impact parameter b is shown in Fig.<ref>, where the ESTGB-NLED black hole is surrounded by NSIS medium. From Fig.<ref>, we can conclude that the increase of the impact parameter leads to the diminishing of deflection angle. And we can see from the right panel that the difference in the deflection angle becomes more and more obvious with an increase in the impact parameter for the different values of the NSIS medium. In Fig.<ref>, we plot the dependence of the deflection angle on the NSIS parameter for the different magnetic charges (left panel) and on the magnetic charge for the different NSIS parameters (right panel). In these two cases we fix b=3 and r_c=3. The effect of NSIS on the deflection angle is similar to that of the SIS case by comparing Figs.<ref> and <ref>. In the above three subsections, we studied the effect of the different distributions of the plasma and magnetic charge on the deflection angle in detail. To directly compare the effects of different plasmas, i.e., uniform plasma, SIS, and NSIS media, we study the dependence of the deflection angle on different parameters. The comparison results are shown in Fig.<ref> where we fix the corresponding parameters, viz., ω_0^2/ω^2=ω_c^2/ω^2=0.5, impact parameter b=3 and the core radius r_c=3. The uniform plasma medium exhibits better refraction properties than the SIS and NSIS models, as shown in Fig.<ref>. 
It is easy to see that the magnetic charge has a small effect on the deflection angle of the black holes in different plasma distributions. We also notice from the right figure that when we increase the plasma parameter ω_0^2/ω^2 or ω_c^2/ω^2, the deflection angle in the presence of the SIS or NSIS medium diminishes, whereas the deflection angle of the uniform plasma has the opposite trend. Finally, the deflection angle decreases with the increase of the impact parameter for the three models. In a word, the bending degree of deflection can be expressed mathematically as, α̂_uni > α̂_sis> α̂_nsis. § MAGNIFICATION OF IMAGE In this section, we will analyze in detail the magnification of image for the ESTGB black hole in the presence of the different plasma using the formula of the deflection angle studied in our previous section. The lens equation has the form <cit.> θ D_s=β D_s+α̂_b D_d s, where D_s is the distance from the observer to the distant light source, and D_d s is the distance from the lens object to the distant light source (see Fig.<ref>). θ denotes the angle of the apparent source image for the observer lens axis, β denotes the angle of the light source with respect to the observer lens axis, and α̂_b is the angle between the apparent source image and light source, i.e., deflection angle. We make use of the relationship between the impact parameter and angle θ, and θ possesses the expression b=D_dθ where D_d is the distance from the lens object to the observer, to rewrite the expression (<ref>), into the form <cit.> β =θ-D_ds/D_sF(θ)/D_d1/θ, and F(θ)=|α̂_b|b=|α̂_b(θ)|D_dθ. Note that when the light source, lens object, and observer remain in a straight line, the angle β is equal to zero. In such a case, the relativistic image will form a relativistic ring known as an Einstein ring. The radius of the Einstein ring R_0=D_dθ_0, where θ_0 denotes the Einstein angle. The Einstein angle in the context of the Schwarzschild black hole can be expressed as <cit.> θ_0=√(2R_sD_ds/D_dD_s). The Einstein angle θ_0 is small but can be solved with modern telescopes. However, we can detect the gravitational lensing owing to the changes in the apparent brightness of the source, namely magnification of the image brightness. The basic equation of the magnification of the image brightness is expressed as <cit.> μ_Σ=I_tot/I_*=∑_k|(θ_k/β)(dθ_k/dβ)|,  k=1,2,...,s, where I_tot and I_* refer to the total brightness of the image and unlensed brightness of the pure source, respectively. k is the number of the images and s is the total number of the images. Next, we will study the effect of the different distribution plasma around the ESTGB black hole on the magnification of the images. §.§ Uniform plasma We first calculate the expression of the Einstein angle θ^pl_0 in the context of the uniform plasma. We have the form by using Eqs.(<ref>) and (<ref>) as follows (θ^pl_0)_uni=θ_0{1/2((1+2q^3/3R_sb^2)+(1+2q^3/R_s b^2)1/1-ω_0^2/ω^2)}^1/2. We obtain the magnification of image by bring the above Eq.(<ref>) into Eq.(<ref>), which is given by <cit.> μ_tot^pl=μ_+^pl+μ_-^pl=x^2+2/x√(x^2+4). Here μ_+ is the magnification factor of the primary image, which is located on the same side of the light source with respect to the lens object <cit.> μ_+=1/4[x/√(x^2+4)+√(x^2+4)/x+2], and μ_- is the magnification factor of the secondary image, which is situated on the opposite side μ_-=1/4[x/√(x^2+4)+√(x^2+4)/x-2], where x denotes the dimensionless parameter in the presence of the uniform plasma. 
It has the following form x_uni=β/(θ^pl_0)_uni=x_0{1/2((1+2q^3/3R_sb^2)+(1+2q^3/R_s b^2)1/1-ω_0^2/ω^2)}^-1/2, with x_0=β/θ_0. For a better understanding of the effect of the magnetic charge and plasma on the magnification of image, in Fig.<ref>, we plot the variation of the total magnification of image with the magnetic charge for the different values of the uniform plasma parameter (left figure) and the uniform plasma parameter for the different values of magnetic charge (right figure) for fixed R_s=2, b=3 and x_0=0.055. We can see that the total magnification exhibits a small increase as the absolute value of magnetic charge decreases and reaches a maximum when it returns to the Schwarzschild black hole. It is easy to see from the right panel that the total magnification increases exponentially with the increase of uniform plasma distribution. In other words, the existence of uniform plasma usually increases the magnification. Besides, we also plot the ratios μ_+^pl/μ_+ (lower curves) and μ_-^pl/μ_- (upper curves) of the magnification with the given parameters q=-0.5, b=3 and R_s=2 in Fig.<ref>, for more details about the effect of the plasma on the magnification. It is evident that when the value of the uniform plasma density distribution increases, the magnification ratio increases. The behavior of the magnification ratio of the image brightness corresponds to the fact that the deflection angle is increased by ω_0^2/ω^2. In addition, the magnification ratio of the secondary image μ_-^pl/μ_- becomes larger, while the magnification ratio of the primary image μ_+^pl/μ_+ tends to unity when x increases. §.§ Singular isothermal sphere We have calculated the deflection angle for the case that the 4D ESTGB black hole surrounded by the uniform plasma in the last subsection. So in the subsection, we consider the influence of the SIS on the total magnification and the magnification ratio of image brightness. The expression of the Einstein angle θ^pl_0 in the context of the SIS medium can be expressed as (θ^pl_0)_sis=θ_0{1/2((2+8q^3/3R_sb^2)+(-1/2+2R_s/3π b+8q^3/ 5π b^3) R_sω_c^2/b ω^2)}^1/2. Since the calculational part is similar, we have x in the presence of the SIS plasma medium, which has the following form x_sis=β/(θ^pl_0)_sis=x_0{1/2((2+8q^3/3R_sb^2)+(-1/2+2R_s/3π b+8q^3/ 5π b^3) R_sω_c^2/b ω^2)}^-1/2, where x_0=β/θ_0. Fig.<ref> shows the changes in the total magnification of image as the function of the magnetic charge (left figure) for the different parameter values of the SIS parameter, and the SIS parameter (right figure) for the different values of the magnetic charge where corresponding fixed parameters are b=3, x_0=0.055 and R_s=2. From Fig.<ref>, we can see that when we increase the SIS medium, the total magnification decreases gradually. Because the plasma density decreases with the radius (dN/dr<0), α̂_sis3 is negative which is opposite to the gravitational deflection (see Refs.<cit.> and <cit.>). If α̂_sis3 is positive, the total magnification of image as the function of ω_c^2/ω^2 has the opposite direction (see Refs.<cit.>). Fig.<ref> demonstrates the magnification ratio, i.e., the primary image μ_+^pl/μ_+ (lower curves) and the secondary image μ_-^pl/μ_- (upper curves) in the case we fix the parameters as q=-0.5, b=3 and R_s=2. Because the effect of the SIS medium, the behavior of the magnification ratio is opposite to that of the uniform plasma. 
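Before moving on to the NSIS case, the formulas of this section are easy to turn into numbers. The helper below is ours (function names and the grid of plasma values are our choices): it evaluates the total magnification μ_tot=(x^2+2)/(x√(x^2+4)) at the rescaled source positions x_uni and x_sis given above, with the figure parameters R_s=2, b=3, x_0=0.055 and q=-0.5. The printed values reproduce the behaviour described in the text: the total magnification grows with the uniform-plasma parameter and decreases slowly with the SIS parameter.

```python
from math import sqrt, pi

def mu_tot(x):
    """Total magnification mu_+ + mu_- of the two images."""
    return (x * x + 2.0) / (x * sqrt(x * x + 4.0))

def x_uniform(x0, q, w, R_s=2.0, b=3.0):      # w = omega_0^2 / omega^2
    factor = 0.5 * ((1 + 2 * q**3 / (3 * R_s * b**2))
                    + (1 + 2 * q**3 / (R_s * b**2)) / (1 - w))
    return x0 / sqrt(factor)

def x_sis(x0, q, w, R_s=2.0, b=3.0):          # w = omega_c^2 / omega^2
    factor = 0.5 * ((2 + 8 * q**3 / (3 * R_s * b**2))
                    + (-0.5 + 2 * R_s / (3 * pi * b)
                       + 8 * q**3 / (5 * pi * b**3)) * R_s * w / b)
    return x0 / sqrt(factor)

for w in (0.0, 0.3, 0.6, 0.9):
    print(w, mu_tot(x_uniform(0.055, -0.5, w)), mu_tot(x_sis(0.055, -0.5, w)))
```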
§.§ Non-Singular isothermal sphere In this subsection, we focus on the total magnification and the magnification ratio of image brightness for the ESTGB black hole surrounded by the NSIS medium. The Einstein angle θ_0^pl can be written as (θ^pl_0)_nsis =θ_0{1/2((2+8q^3/3b^2R_s)+(R_s/bπ r_c^2-b/2(b^2+r_c^2)^3/2+b R_sarctan(r_c/√(b^2+r_c^2))/π r^3_c√(b^2+r_c^2)) ×ω_c^2R_s b/ω^2 +(-1/b^2 r_c^4 +2/3 b^4 r_c^2+arctan(r_c/√(b^2+r_c^2))/r_c^5√(b^2+r_c^2))3 q^3 b^2 R_sω_c^2/ω^2π)}^1/2. The dimensionless parameter x has the form x_nsis =β/(θ^pl_0)_nsis =x_0{1/2((2+8q^3/3b^2R_s)+(R_s/bπ r_c^2- b/2(b^2+r_c^2)^3/2+b R_sarctan(r_c/√(b^2+r_c^2))/π r^3_c√(b^2+r_c^2)) ×ω_c^2R_s b/ω^2+(-1/b^2 r_c^4 +2/3 b^4 r_c^2+arctan(r_c/√(b^2+r_c^2))/r_c^5√(b^2+r_c^2))3 q^3 b^2 R_sω_c^2/ω^2π)}^-1/2, where x_0=β/θ_0. In Fig.<ref>, we show the graph of the total magnification for the case that the black hole is surrounded by the NSIS medium. By analyzing the behavior shown in Fig.<ref>, one can see that the change is similar to the case of the singular isothermal sphere. The presence of a NSIS reduces the total amplification in comparison with vacuum circumstance, i.e., ω_c^2/ω^2=0. This is because α̂_nsis3 is negative. We also plot the changes of the magnification ratio of the primary and secondary images with fixed R_s=2, b=3, x_0=0.055 and r_c=3 in Fig.<ref>. It is observed that μ_-^pl/μ_- (upper curves) tends to unity as larger x. And the ratio μ_+^pl/μ_+ (lower curves) is less than 1. We compare the magnification ratio of image brightness of the Schwarzschild black hole and ESTGB black hole in the uniform plasma in Fig.<ref>. We see that at large x the ratio of the magnification μ_+^pl/μ_+ tends to unity for the Schwarzschild black hole and ESTGB black hole; the ratio of the magnification μ_-^pl/μ_- of the Schwarzschild black hole tends to a constant, 2.25. This is consistent with the results of Bisnovatyi-Kogan et al.<cit.>. In addition, the magnetic charge has slight influence on the magnification ratio of the image. To compare the effects of the different plasma models on magnification ratio of image brightness, in Fig.<ref> we plot the magnification ratio of the three plasma distributions, i.e., uniform, SIS and NSIS, with the same parameters q=-0.5, b=3, R_s=2, ω_0^2/ω^2=ω_c^2/ω^2=0.5 and r_c=3. We can obtain from Fig.<ref> that as a consequence of the non-uniform plasma distribution around the black hole, the magnification ratio of the non-uniform plasma is less than that of uniform plasma. This means that only when there is uniform plasma around the black hole, the observer in the distance will perceive a considerable magnification. § CONCLUSION AND DISCUSSION In the work, we discussed the weak gravitational lensing properties of a 4D ESTGB black hole immersed in different plasma distribution models. We studied in detail the effect of the different plasma distribution models, i.e., uniform, SIS and NSIS medium, and the magnetic charge on the deflection of light. We found that the deflection angle increases slightly with the decrease of the absolute values of the magnetic charge. That is, the black hole has the maximum deflection angle when it returns to the Schwarzschild black hole. We showed that the presence of uniform plasma leads to an increase in the deflection angle. However, due to the fact that α̂_sis3 (α̂_nsis3) caused by the plasma inhomogeneity is less than zero , the deflection angle of the non-uniform plasma medium slightly diminishes with the increase of the plasma parameter. 
Moreover, compared with the SIS model, we found that the deflection angle is more sensitive to parameters b and ω_c^2/ω^2 in the NSIS model. We investigated the total magnification of image due to the weak gravitational lensing effect around a plasma-surrounded black hole. We observed that the change of the total magnification is similar to that of the deflection angle. In other words, for the uniform plasma model, the magnification of image increases, while for SIS or NSIS model, the magnification of image decreases. This result is also indicated by the magnification ratio of the image source. Finally, according to the influence of three plasma models on the deflection angle and the magnification of image, we can qualitatively understand the uniform plasma as a concave lens, while the SIS and NSIS plasma models as a convex lens in the context of the refractive index n<1. § ACKNOWLEDGMENTS This work was supported partly by the National Natural Science Foundation of China (Grant No. 12065012), Yunnan High-level Talent Training Support Plan Young & Elite Talents Project (Grant No. YNWR-QNBJ-2018-360) and the Fund for Reserve Talents of Young and Middle-aged Academic and Technical Leaders of Yunnan Province (Grant No. 2018HB006). 99 Akiyama2019 K. Akiyama et al. [Event Horizon Telescope], Astrophys. J. Lett. 875, L1 (2019). LIGOScientific:2016aoc B. P. Abbott et al. [LIGO Scientific and Virgo], Phys. Rev. Lett. 116, 061102 (2016). Doneva:2018rou D. D. Doneva, S. Kiorpelidi, P. G. Nedkova, E. Papantonopoulos and S. S. Yazadjiev, Phys. Rev. D 98, 104056 (2018). Doneva:2017bvd D. D. Doneva and S. S. Yazadjiev, Phys. Rev. Lett. 120, 131103 (2018). Heydari-Fard:2016nlj M. Heydari-Fard, H. Razmi and M. Yousefi, Int. J. Mod. Phys. D 26, 1750008 (2016). Canate:2020kla P. Cañate and S. E. Perez Bergliaffa, Phys. Rev. D 102, 104038 (2020). Li:2022jda Q. Li, C. Ma, Y. Zhang, Z. W. Lin and P. F. Duan, Chin. J. Phys. 77, 1269-1277 (2022). Ma:2022gzr C. Ma, Y. Zhang, Q. Li and Z. W. Lin, Commun. Theor. Phys. 74, 065402 (2022). Lin:2022eix Z. W. Lin, Y. Zhang, Q. Li, C. Ma and P. F. Duan, Int. J. Theor. Phys. 61, 199 (2022). Eiroa:2005ag E. F. Eiroa, Phys. Rev. D 73, 043002 (2006). Wei:2011bm S. W. Wei and Y. X. Liu, Phys. Rev. D 85, 064044 (2012). Virbhadra:1999nm K. S. Virbhadra and G. F. R. Ellis, Phys. Rev. D 62, 084003 (2000). Virbhadra:2022iiy K. S. Virbhadra, Phys. Rev. D 106, 064038 (2022). Sereno:2003nd M. Sereno, Phys. Rev. D 69, 023002 (2004). Jusufi:2017vta K. Jusufi, A. Ovgün and A. Banerjee, Phys. Rev. D 96, 084036 (2017). Ovgun:2018oxk A. Övgün, Universe 5, 115 (2019). Li:2020wvn Z. Li, G. Zhang and A. Övgün, Phys. Rev. D 101, 124058 (2020). Fu:2021akc Q. M. Fu, L. Zhao and Y. X. Liu, Phys. Rev. D 104, 024033 (2021). Javed:2020pyz W. Javed, J. Abbas, Y. Kumaran and A. Övgün, Int. J. Geom. Meth. Mod. Phys. 18, 2150003 (2021). Javed:2021arr W. Javed, A. Hamza and A. Övgün, Universe 7, 385 (2021). Li:2021xhy Z. Li and J. Jia, Phys. Rev. D 104, 044061 (2021). Crisnejo:2019xtp G. Crisnejo, E. Gallo and J. R. Villanueva, Phys. Rev. D 100, 044006 (2019). Crisnejo:2019ril G. Crisnejo, E. Gallo and K. Jusufi, Phys. Rev. D 100, 104045 (2019). Jha:2021eww S. K. Jha, S. Aziz and A. Rahaman, Eur. Phys. J. C 82, 106 (2022). Virbhadra:2002ju K. S. Virbhadra and G. F. R. Ellis, Phys. Rev. D 65, 103004 (2002). Rahvar:2018nhx S. Rahvar and J. W. Moffat, Mon. Not. Roy. Astron. Soc. 482, 4514-4518 (2019). Bozza:2010xqn V. Bozza, Gen. Rel. Grav. 42, 2269-2300 (2010). Virbhadra:2008ws K. S. Virbhadra, Phys. Rev. 
D 79, 083004 (2009). Chen:2013vja S. Chen and J. Jing, Class. Quant. Grav. 30, 175012 (2013). Ji:2013xua L. Ji, S. Chen and J. Jing, JHEP 03 (2014), 089 (2014). Chen:2015cpa S. Chen and J. Jing, JCAP 10, 002 (2015). Chen:2016hil S. Chen, S. Wang, Y. Huang, J. Jing and S. Wang, Phys. Rev. D 95, 104017 (2017). Zhang:2017vap R. Zhang, J. Jing and S. Chen, Phys. Rev. D 95, 064054 (2017). Abbas:2019olp G. Abbas, A. Mahmood and M. Zubair, Chin. Phys. C 44, 095105 (2020). Abbas:2021whh G. Abbas, A. Mahmood and M. Zubair, Phys. Dark Univ. 31, 100750 (2021). Hensh:2021nsv S. Hensh, J. Schee, A. Abdujabbarov and Z. Stuchlík, Eur. Phys. J. Plus 137, 242 (2022). Synge:1960ueh J.L. Synge, Relativity: the general theory (1960). Perlick2000 V. Perlick, Ray optics, Fermat's principle, and applications to general relativity (Springer Science & Business Media, 2000). Bisnovatyi-Kogan:2008qbk G. S. Bisnovatyi-Kogan and O. Y. Tsupko, Grav. Cosmol. 15, 20-27 (2009). Bisnovatyi-Kogan:2010flt G. S. Bisnovatyi-Kogan and O. Y. Tsupko, Mon. Not. Roy. Astron. Soc. 404, 1790-1800 (2010). Schee:2017hof J. Schee, Z. Stuchlík, B. Ahmedov, A. Abdujabbarov and B. Toshmatov, Int. J. Mod. Phys. D 26, 1741011 (2017). Turimov:2022iff B. Turimov, Y. Turaev, B. Ahmedov and Z. Stuchlík, Phys. Dark Univ. 35, 100946 (2022). Kala:2022uog S. Kala, H. Nandan and P. Sharma, Eur. Phys. J. Plus 137, 457 (2022). Zhang:2022osx Z. Zhang, H. Yan, M. Guo and B. Chen, Phys. Rev. D 107, 024027 (2023). Atamurotov:2021byp F. Atamurotov, S. Shaymatov and B. Ahmedov, Galaxies 9, 54 (2021). Atamurotov:2021qds F. Atamurotov, A. Abdujabbarov and J. Rayimbaev, Eur. Phys. J. C 81, 118 (2021). Babar:2021exh G. Z. Babar, F. Atamurotov and A. Z. Babar, Phys. Dark Univ. 32, 100798 (2021). Babar:2021nst G. Z. Babar, F. Atamurotov, S. Ul Islam and S. G. Ghosh, Phys. Rev. D 103, 084057 (2021). Hensh:2019ipu S. Hensh, A. Abdujabbarov, J. Schee and Z. Stuchlík, Eur. Phys. J. C 79, 533 (2019). Atamurotov:2021hoq F. Atamurotov, A. Abdujabbarov and W. B. Han, Phys. Rev. D 104, 084015 (2021). S1958 S. Chandrasekhar and S. Chandrasekhar, An introduction to the study of stellar structure (Courier Corporation, 1957). J1987 J. Binney and S. Tremaine, Galactic dynamics (Princeton university press, 2011). Morozova V. S. Morozova, B. J. Ahmedov and A. A. Tursunov, Astrophys. Space Sci. 346, 513-520 (2013). Bisnovatyi-Kogan:2015dxa G. S. Bisnovatyi-Kogan and O. Y. Tsupko, Plasma Phys. Rep. 41, 562 (2015).
http://arxiv.org/abs/2307.06144v1
20230712125939
On Anick resolution: from the original setting to the language of non-commutative Groebner bases
[ "Adya Musson-Leymarie" ]
math.KT
[ "math.KT" ]
Univ. Limoges, CNRS, XLIM, UMR 7252, F-87000 LIMOGES, France

Anick introduced a resolution, which now bears his name, of a field using an augmented algebra over that field. We present here what one could call a dictionary between Anick's original paper <cit.> and the other resources on the matter, most of which use the language of non-commutative Gröbner bases.

§ INTRODUCTION

When the task at hand is to effectively compute some homology groups of an associative algebra, one is presented with a choice: which resolution to use. The bar resolution <cit.> is a resolution that always exists but is usually far too large for practical purposes. One prefers resolutions that are as close as possible to the minimal one. However, there is no definite algorithm known to compute the minimal resolution in general. The Anick resolution <cit.> offers an alternative (for a certain class of algebras) that involves a usable amount of data to be constructed and used to complete our goals of computation. It is not in general minimal, but it is still smaller than the bar resolution and is computed quite easily algorithmically. The Anick resolution acts upon an augmented algebra A with a given presentation whose generators extend in a free monoid of monomials that needs to be equipped with a monomial order. The resolution can be taken in the category of right A-modules as well as left A-modules. It consists of free modules whose bases are formed from n-chains <cit.>, a construction that acts purely combinatorially on another one called obstructions (or tips). These two are the fundamental concepts surrounding the Anick resolution. An algorithmic method to compute the so-called obstructions is through the use of non-commutative Gröbner bases, as done by Ufnarovski in <cit.>. This is why the Anick resolution is nowadays mostly known through the lens of non-commutative Gröbner bases. Our purpose in this paper is to give proofs and explanations of known facts from the folklore surrounding the topic of the Anick resolution that had yet to be written explicitly. In particular, we wish to emphasise the bridge existing between Anick's original paper <cit.> and the subsequent resources expressed in the language of non-commutative Gröbner bases. Following the structure of Anick's paper, we will introduce our setting while showing how it relates to his work. In the first section, the two main results are Proposition <ref> and Proposition <ref>, which state, respectively:
* the normal words are exactly the words that have no leading monomials of the relations as subwords,
* the obstructions are the leading monomials of the minimal non-commutative Gröbner bases of the ideal of relations.
The second section introduces the notion of n-chains from two different perspectives: one due to Anick and the other due to Ufnarovski in terms of a graph. We show in Proposition <ref> that those two are equivalent. The third section presents the resolution itself and gives the proof written by Anick, as a matter of completeness, with somewhat more detail to help the reader. Throughout this paper, we will follow an example to show how the different constructions concretely take shape.

§ NOTATIONS AND CONVENTIONS

* We will denote by X (where X is a non-empty set) the free monoid on X consisting of the words on the alphabet X. We will write 1 for the empty word.
Alternative notations often found in the litterature are X^∗ (Kleene star) for the free monoid and ϵ for the empty word. * Writing S where is a field and S is a set will denote the -vector space with basis S, thus consisting of the finite formal linear combinations of elements of S with coefficients in . * We identify []X, where is a field and X is a non-empty set of indeterminates, with the free algebra consisting of all the polynomials with non-commutative indeterminates in X and coefficients in . It is the -algebra on the free monoid X (see for instance <cit.> for the construction of the free algebra). Alternative notations in the litterature include X^∗ or T(V), where T(V) denotes the tensor algebra constructed from a vector space V whose basis is in bijection with X. The tensor algebra so-defined and []X are isomorphic as -algebras. * The notation I(R), where R is a subset of []X, means the two-sided ideal generated by R in the ring []X, the set of finite combinations of elements from R with left and right coefficients in []X. * Let A be a -algebra. A presentation X|R of A is given by a set X of indeterminates called generators and a subset R of []X called relations such that A is isomorphic to the quotient algebra []X/I(R). It is easy to see that any algebra is of this form (refer to <cit.> for details). Given a presentation X|R of A, we will write g, where g is a polynomial or a word in []X, for the image of g in A by the natural projection π induced by the presentation. * Let us fix throughout this paper ≺, a monomial order on X, a well-order on X compatible with left- and right-multiplication of words. * If f is a nonzero polynomial in []X, then the highest monomial for ≺ in the support of f (the set of monomials in f appearing with a nonzero coefficient) will be written f. We will write similarly F := ff ∈ F for any set F of nonzero polynomials in []X. * For a set of generators X, we will call monomial ideal any monoidal ideal in X, any subset I of X such that for every word w in I and every word u and v in X, the word uwv is also in I. * We remind the reader that, given an ideal I in []X, a non-commutative Gröbner basis of I according to the monomial order ≺ is any subset G of I such that the leading monomials G generates I as a monomial ideal, every leading monomial in I has a leading monomial in G as a subword. We will call a non-commutative Gröbner basis G minimal if no leading monomial in G is divisible by another leading monomial in G. If moreover no monomial in the support of any element of G is divisible by a leading monomial in G, it is said to be reduced. Note that all the minimal non-commutative Gröbner bases of a same ideal I share the same set of leading monomials. § SETTING Let us fix a field throughout this paper. This corresponds to the field k in <cit.>. Suppose we have an associative unitary -algebra denoted A (instead of G in <cit.>) that we assume is augmented by the augmentation map []A; it is a surjective homomorphism of -algebras. We will denote [η]A the section of defined by η(1_) = 1_A, as a -linear map. Throughout this paper, we will assume that A is defined by a fixed presentation X|R in the sense that A is actually equal to the quotient algebra []X/I(R). In <cit.>, Anick uses presentations implicitly: he picks out a set X (he denotes S) of generators for A, and considers the canonical surjective morphism of algebras [f][]XA that ensues, that subsequently, by the First Isomorphism Theorem, gives rise to an isomorphism []X/(f)≅ A. 
In our setting, this surjective morphism f is exactly the natural projection π from the free algebra to the quotient algebra. It follows that the kernel (f) in <cit.> is nothing other than the two-sided ideal I(R) generated by R in our notations. It is worth noting that most subsequent sources make the assumption that is zero on the set X of generators. This indeed is the case when we take the very common augmentation of a presented algebra as the evaluation of polynomials at 0, assuming our relations do not contain any constant terms. It is also the case when we consider the common case of a connected graded algebra augmented by its natural augmentation, as done in <cit.>[Where connectivity of graded algebras is always assumed at page 24.]. However, with very little efforts, mostly during the initialisation of the proof of exactness, the general case, without any assumptions on the augmentation map , remains true as shown in <cit.>. In <cit.>, Anick uses a specific kind of monomial order that is graded by a certain function he denotes e. In particular, if we follow his suggestion of setting e(x) = 1 for all x ∈ X, we obtain the order commonly known as deglex (lexicographic order graded by degree). In <cit.>, Anick introduces a set M, defined as the set of words w whose images in A cannot be expressed as a finite linear combination of images in A of words that are smaller than w by ≺. These words are called normal words in the litterature. This vocabulary comes from the algebraic rewriting area where a word is called a normal form according to a set of rewriting rules when no more of the rules can be applied to it. We shall later see (Corollary <ref>) that the set M is exactly the set of normal form monomials according to R (the rewriting rules induced by the relations) if, and only if, R is a non-commutative Gröbner basis of I(R) according to ≺. Therefore, by existence and uniqueness of a reduced non-commutative Gröbner basis of I(R), it makes sense to talk about normal words solely based on the ideal rather than the generating set of rewriting rules. What Anick calls an admissible monomial in <cit.> is thus in our language a normal word. In our setting, we will define and write M in the same manner, explicitly: M := w ∈X∀ (w_1w_n) ∈X^n w_i ≺ w ∀ (λ_1λ_n) ∈^n w≠∑_i = 1^nλ_i w_i. It is well known in the litterature on non-commutative Gröbner bases <cit.> that the set M is the complement of I(R) in X, usually denoted O(I(R)). We prove this fact in the next proposition. M-oir With the same previous notations, we have: M = X∖I(R) =: O(I(R)). As a consequence, we have the very well-known direct sum decomposition of -vector spaces: []X = I(R) ⊕ M. Let w ∈ M. Suppose there exists a nonzero polynomial g ∈ I(R) such that w = g. Therefore, we can write g = λ_w w + ∑_w' ≺ wλ_w' w' with λ_w ≠ 0. Applying the natural projection π, on one hand, we get g = 0 because g ∈ I(R) and on the other hand, g = λ_w w + ∑_w' ≺ wλ_w'w' because π is linear. By rearranging, we have exhibited that: w = -1/λ_w∑_w' ≺ wλ_w'w'. Therefore, w ∉ M which is a contradiction, so M ⊆ O(I(R)). Conversely, if w ∉ M, we can write w = ∑_w' ≺ wλ_w'w', then consider the polynomial g = w - ∑_w' ≺ wλ_w' w'; it is non-zero and is trivially sent to zero under π, therefore we exhibited g ∈ I(R) such that g = w, which means w ∈I(R), X∖ M ⊆I(R) from which we deduce O(I(R)) ⊆ M and the result follows. 
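As a side illustration (not part of the original paper; all function and variable names below are ours), once a non-commutative Gröbner basis of I(R) is available, membership in M reduces to the subword test recorded in the corollary below. The following minimal Python sketch applies this test to the set of obstructions of the running example used later in the text, and enumerates the normal words of low length, whose images form a basis of A in low degrees.

```python
from itertools import product

def is_normal(word, obstructions):
    """A word is normal iff no obstruction occurs in it as a contiguous subword."""
    return not any(o in word for o in obstructions)

def normal_words_up_to(max_len, alphabet, obstructions):
    """Normal words of length <= max_len; their images span A in low degrees."""
    words = [""]  # the empty word 1 is always normal
    for l in range(1, max_len + 1):
        words += ["".join(w) for w in product(alphabet, repeat=l)
                  if is_normal("".join(w), obstructions)]
    return words

if __name__ == "__main__":
    X = "xyz"                         # generators
    V_M = ["xxx", "xxyx", "yxz"]      # obstructions of the running example used below
    short_normals = normal_words_up_to(3, X, V_M)
    print(short_normals)
    print(len(short_normals))         # number of normal words of length at most 3
```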
R-is-grobner-basis The set R of relations is a non-commutative Gröbner basis of I(R) if and only if M is the set of normal form monomials according to R. Normal form monomials according to R are exactly the monomials that are not in the monomial ideal generated by R. Therefore, the corollary is a consequence of Proposition <ref>, by the definition of non-commutative Gröbner bases given previously. Since A is isomorphic to []X/I(R), it follows, from the decomposition in Proposition <ref>, that A is isomorphic to M as -vector spaces, the family M := mm ∈ M is a -basis of A. In particular, the cardinality of the set of normal words gives the dimension of A. Some authors (see for instance <cit.>, page 28) write N for the space spanned by normal words M and call it the normal complement of the ideal I(R): it is isomorphic to A as -vector spaces and allows therefore, by identification, to perform most of our computations inside the free algebra. The set M has a special structure that Anick calls order ideal of monomials (or "o.i.m." for short). That structure is defined as a subset W of words in X such that every subword of a word in W is also in W. He proceeds by mentionning that giving an o.i.m. is equivalent to giving an anti-chain with respect to the subword partial order (a set of words that are pairwise not subwords of one another). We prove that result here in the more general context of any poset with a well-founded relation (o.i.m.'s and anti-chains are defined for any poset). If Ê := (E, ≤) is a partially ordered set, consider the following sets: I_Ê := F ⊆ E∀ x ∈ F, ∀ y ∈ E, y ≤ x y ∈ F, J_Ê := F ⊆ E∀ x ∈ F, ∀ y ∈ F, x ≮ y y ≮ x. Notice that the set I_Ê is the set of o.i.m.'s of Ê and the set J_Ê is the set of anti-chains of Ê. Then, define the following map: [f_Ê]J_ÊI_Ê [F][f_Ê(F) := y ∈ E∀ x ∈ F (x ≤ y y ≤ x) y < x.] This translates as saying that an element y is in the image of an anti-chain F if and only if, granted y is comparable with an element from F, then it is necessarily smaller. This means that f_Ê(F) is exactly the union of the set of the elements incomparable with any element from F and of the set of the elements that are smaller than an element from F. The map is well-defined because if F is an anti-chain, then, for any y ∈ f_Ê(F) and x ≤ y: * if y is comparable to an x' ∈ F, then y < x'. By transitivity, x < x' and thus x ∈ f_Ê(F). * if y is incomparable with every element of F, then ∀ x' ∈ F, x' ≰x. Otherwise there would exist x' ∈ F such that x' ≤ x ≤ y, a contradiction. Therefore, if x is comparable with a x' ∈ F, it is necessarily such that x < x'. This means exactly that x ∈ f_Ê(F). Hence, f_Ê(F) is an o.i.m. oim-antichain Let Ê = (E, ≤) be a partially ordered set. If ≤ is a well-founded relation on E, then f_Ê is a bijection and its inverse is given by: [g_Ê]I_ÊJ_Ê [F'][g_Ê(F') := y ∈ E ∖ F'∀ x ∈ E ∖ F' x ≮ y.] Notice that the map g_Ê sends an o.i.m. F to the set of minimal elements in E ∖ F, which is an anti-chain by construction; hence g_Ê is well-defined. Since ≤ is a well-founded relation, note that g_Ê(F) is non-empty if and only if F ≠ E. Let us show both assertions of the proposition at once by proving that g_Ê is indeed a two-sided inverse for f_Ê. Consider F' ∈ I_Ê an o.i.m. Define F as g_Ê(F'), as the set of minimal elements of E ∖ F'. If F' = E, then F = ∅. In that case, f_Ê(F) is equal to E since there are no elements in F to compare the elements of E with. Consider thus F' ≠ E. Hence, F is non-empty. It follows that see that f_Ê(F) = F'. 
Indeed: * Suppose y ∈ f_Ê(F). On one hand, if y is not comparable with any elements of F, then y cannot possibly be in E ∖ F' since there exists elements in F and they are minimal in E ∖ F'; y would therefore be comparable with one of them. On the other hand, if it is comparable to an x ∈ F, then y < x but y cannot be in E ∖ F' since x is minimal. Therefore, f_Ê(F) ⊆ F'. * Suppose now y ∈ F'. Assume y is comparable with some x ∈ F x ≤ y y ≤ x. If x ≤ y then it would follow that x ∈ F' since F' is an o.i.m. Therefore, since x ∉ F', we must necessarily have y < x and thus y ∈ f_Ê(F). Hence, F' ⊆ f_Ê(F). Hence, f_Ê∘ g_Ê = 𝕀_I_Ê. Consider now F ∈ J_Ê an anti-chain. Define F' as f_Ê(F). Note that F ⊆ E ∖ F'. * Suppose y ∈ g_Ê(F'). In particular, y ∉ F'. Then y is comparable with an x ∈ F such that necessarily x ≤ y. But, on one hand, x ∈ F implies x ∉ F', on the other hand, y is minimal in E ∖ F', therefore x = y and thus y ∈ F. * Suppose y ∈ F. Then y ∉ F'. Suppose we would have x ∈ E ∖ F' such that x < y. Then x would be comparable with a z ∈ F such that z ≤ x and thus we would have z < y. However, F is anti-chain and z, y ∈ F, so that's a contradiction. We conclude that there are no elements x ∈ E∖ F' such that x < y, which exactly means y ∈ g_Ê(F'). Hence, we have g_Ê∘ f_Ê = 𝕀_J_Ê. V_M-anti-chain Denote by X̂ the free monoid X equipped with the well-founded relation of subwords. Define V_M as g_X̂(M), the unique anti-chain in X̂ associated to the o.i.m. M of normal words where g_X̂ is the map defined in (<ref>). The elements of V_M are called the obstructions (or tips) of the o.i.m. M. It is well known in the litterature <cit.> that V_M is the minimal generating set of I(R) as a monomial ideal. We prove this fact in the next proposition. It shows furthermore the connection between Anick's original setting and the language of non-commutative Gröbner bases. V_M-basis The set V_M is the unique minimal set generating I(R) as a monomial ideal. In particular, for any minimal non-commutative Gröbner basis G of I(R), we have: V_M = G. Recall by Definition <ref>, V_M is the minimal elements of the complement of M. But, by Proposition <ref>, we have M = X∖I(R). Therefore, V_M is exactly the set of minimal elements (in terms of the subword relation) of I(R), which is exactly equivalent to saying that V_M generates I(R) as a monomial ideal. Moreover, V_M being an anti-chain, removing any element from V_M implies losing the ability to generate I(R), hence V_M is minimal as a generating set. In particular, if R is indeed a minimal non-commutative Gröbner basis as it usually is, then we have V_M = R. In general, we can use the set of obstructions as a one-to-one index set for the reduced non-commutative Gröbner basis of I(R). It can be also useful in certain contexts to consider the associated monomial algebra presented by X | V_M, for instance to compute more easily the Hilbert series (see <cit.>). The Proposition <ref> is equivalent to, but expressed in a different way than, the Lemma 1.2 from <cit.> stating that every non-normal word contains an obstruction. § N-CHAINS AND CRITICAL BRANCHINGS The idea of Anick resolution is to construct free A-modules with as close to the minimal amount of generators as possible that still allow us to define differentials in a way that gives rise to a resolution with an explicit contracting homotopy. We will consider here the case of right modules (refer to <cit.> for an adaptation to left modules). 
In order to do so, Anick introduces the notions of n-prechains and n-chains through a top-down definition. Chains (top-down)chains-top-down Let w = x_1 ⋯ x_ℓ be a word in X. Let n ∈. We say that w is a n-prechain if there exists two n-tuples (a_1a_n) et (b_1b_n) of integers such that: 1 = a_1 < a_2 ≤ b_1 < a_3 ≤ b_2 < a_4 ≤ b_3 < ⋯ < a_n ≤ b_n-1 < b_n = ℓ and ∀ i ∈1n x_a_i x_a_i + 1⋯ x_b_i - 1 x_b_i∈ V_M. A n-prechain is called a n-chain if: ∀ m ∈1n∀ i ∈1b_m - 1 x_1 x_2 ⋯ x_i is not a m-prechain. Intuitively, a n-prechain is a sequence of n obstructions where two obstructions in a row overlap each other by at least a character while obstructions separated by at least one obstruction in the sequence do not overlap. A n-chain is a n-prechain such that the consecutive overlaps are "maximal" in the sense that no other overlap with the same obstructions could have been longer while still satisfying the condition that the obstructions one apart do not overlap. All of the obstructions need not appear in each prechain and chain and the same obstruction can appear several times within a single prechain or chain. Notice that the set of 1-chains according to this definition is exactly the set of obstructions V_M. chains On the alphabet X = x, y, z with the anti-chain V_M = xxx, xxyx, yxz, we have xxxx is a 2-chain, xxxxx is a 2-prechain but not 2-chain nor a 3-prechain. Similarly, xxxyx is a 2-chain but xxxxyx is a 2-prechain but not a 2-chain, since xxxx is a 2-prechain contained in it and shorter but with the same number of obstructions. It is also not a 3-prechain because the first obstruction xxx would overlap with the third obstruction xxyx. A 3-chain is for instance xxyxxyxz. By convention, let the set of (-1)-chains be exactly 1 and the set of 0-chains be exactly X. Anick establishes a result in <cit.> in the form of Lemma 1.3 stating that for an n-chain, the n-tuples (a_1a_n) and (b_1b_n) are uniquely determined. In particular, this means that: split-chains For any n ∈, any n-chain w = x_1 ⋯ x_ℓ (defined with (a_1a_n) and (b_1b_n)) can be uniquely expressed as w = vu, where v = x_1 ⋯ x_b_n-1 is an (n-1)-chain and u = x_b_n-1 + 1⋯ x_ℓ is a normal word. Following the examples given in Example <ref>: * for the 2-chain w = xxxyx, v = xxx, u = yx. * for the 2-chain w = xxxx, v = xxx, u = x. * for the 3-chain w = xxyxxyxz, v = xxyxxyx, u = z. The top-down Definition <ref> is not particularly easy to grasp, as such, conceptually and even less algorithmically. We will prefer the bottom-up definition given in most other sources and present it here. First, let us warn that we will be using the numbering proposed in <cit.> rather the one proposed in <cit.>: what Anick calls 0-chains in the top-down Definition <ref> will be 1-chains for us, 1-chains will be 2-chains, and so on. That way, the numbering will match the homology degrees conveniently. Chains (bottom-up) due to Ufnarovski <cit.>chains-bottom-up With previous notations and remarks, construct a simple directed graph Q whose nodes are: Q_0 = 1∪ X ∪s ∈Xs is a proper suffix of an obstruction. The directed edges are defined as follows: Q_1 = (1, x)x ∈ X∪(s, t) ∈ (Q_0 ∖1)^2st contains only one obstruction and it is a suffix. For any non-negative integer n ∈, we define the set of n-chains as: C_n := ∏_i = 0^n w_i(1 = w_0, w_1, ⋯, w_n) are nodes in a path of length n in Q starting at 1. In other words, an n-chain is the product of nodes travelling through a path of length n starting at the node 1. 
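The bottom-up description is directly algorithmic. The short Python sketch below (our own illustration, not taken from the paper; the encoding of words as strings and all names are ours) constructs the graph Q for a given alphabet and anti-chain of obstructions, and enumerates C_n by following the paths of length n issued from the node 1. On the running example it returns C_2 = V_M and recovers the five 3-chains listed in the worked example further below.

```python
def proper_suffixes(w):
    return [w[i:] for i in range(1, len(w))]

def occurrences(o, w):
    """Number of (possibly overlapping) contiguous occurrences of o in w."""
    return sum(1 for i in range(len(w) - len(o) + 1) if w[i:i + len(o)] == o)

def chain_graph(alphabet, obstructions):
    """The graph Q of the definition above, with the node 1 encoded as the string '1'."""
    nodes = set(alphabet) | {s for o in obstructions for s in proper_suffixes(o)}
    edges = {"1": sorted(alphabet)}
    for s in sorted(nodes):
        edges[s] = [t for t in sorted(nodes)
                    if sum(occurrences(o, s + t) for o in obstructions) == 1
                    and any((s + t).endswith(o) for o in obstructions)]
    return edges

def n_chains(n, alphabet, obstructions):
    """C_n: products of the nodes along the paths of length n starting at the node 1."""
    edges = chain_graph(alphabet, obstructions)
    paths = [("1", "")]
    for _ in range(n):
        paths = [(t, w + t) for (s, w) in paths for t in edges.get(s, [])]
    return sorted(w for _, w in paths)

if __name__ == "__main__":
    X, V_M = ["x", "y", "z"], ["xxx", "xxyx", "yxz"]
    print(n_chains(2, X, V_M))   # ['xxx', 'xxyx', 'yxz'] -- the obstructions
    print(n_chains(3, X, V_M))   # the five 3-chains of the worked example below
```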
Note that the nodes that are not in the connected component of the node 1 have no use for our purpose and can therefore be omitted. Note also that we have C_0 = 1, C_1 = X, and C_2 = V_M. This definition can be rephrased with ease in terms of a recursive definition of n-chains with tails as done in <cit.>. Top-down and bottom-up definitions matchchains-definitions-match Let us denote by Ĉ_n the set of n-chains defined in Definition <ref> for n ≥ -1. We have: ∀ n ∈Ĉ_n-1 = C_n. We see easily that this is true for n ∈0, 1, 2. By induction, suppose this is true for a certain n ≥ 2. Let w ∈Ĉ_n defined by w = x_1 ⋯ x_ℓ and the n-tuples (a_1a_n) and (b_1b_n) for Definition <ref>. By Proposition <ref>, we have: v := x_1 ⋯ x_b_n-1∈Ĉ_n-1 and u := x_b_n-1 + 1⋯ x_ℓ∈ M. Since u is a proper suffix of s = x_a_n⋯ x_b_n (an obstruction), then u ∈ Q_0. By induction hypothesis, v ∈ C_n. Moreover, the last node in the path for v is t := x_b_n-2 + 1⋯ x_b_n-1. Since w is a n-prechain, it follows that b_n-2 < a_n, thus tu evidently contains s as a suffix. Furthermore, the last axiom of n-chains (top-down) ensures that s is the only obstruction that tu contains. This means exactly that there is an edge between t and u such that the path v ∈ C_n can be extended with u and gives w = vu ∈ C_n+1. Let w ∈ C_n+1. It is thus defined as a path of n+1 length. Let u be the last node of that path and t the node before that. Denote by v the path of length n when we omit u. We have v ∈ C_n. By induction hypothesis, it follows that v ∈Ĉ_n-1. Let (a_1a_n-1) and (b_1b_n-1) be the (n-1)-tuples defining v in Definition <ref>. Denote by s the obstruction linking t and u. We know that tu contains s as a suffix and that ℓ(s) > ℓ(u)[ℓ(w') is the length of the word w' ∈X, the number of letters from X that constitutes the word.] because u is either a letter or a proper suffix of an obstruction. Define a_n := b_n-1 + ℓ(u) - ℓ(s) + 1 ≤ b_n-1 and b_n := b_n-1 + ℓ(u) > b_n-1. Then the tuples (a_1a_n) and (b_1b_n) make w = vu into a n-chain (top-down) since x_a_n⋯ x_b_n = s ∈ V_M and no other obstructions is contained in x_b_n-2 + 1⋯ x_b_n. Therefore, w ∈Ĉ_n. Consider again the example X = x, y, z and V_M = xxx, xxyx, yxz. We have: Q_0 = 1, x, y, z, xx, xyx, yx, xz. The graph is then given by Figure <ref>. Each arrow that does not start from 1 is to be understood as indexed by an obstruction, the obstruction satisfying the condition for the directed edge. Let us now introduce some useful notations. bracket-notation Let n ∈, m ∈0n and c^(n)∈ C_n be a n-chain according to Definition <ref>. Let us explicitly fix (a_1a_n-1) and (b_1b_n-1) the uniquely determined tuples of integers defining c^(n) = x_1 ⋯ x_ℓ as in Definition <ref>. Write: c^(n)^m := 1 if m = 0 x_1 if m = 1 x_1 ⋯ x_b_m - 1 if 1 < m ≤ n ∈ C_m. This designates the m-chain that is a prefix of c^(n). c^(n)_m := w if m = 0 x_2 ⋯ x_ℓ if m = 1 x_b_m - 1 + 1⋯ x_ℓ if 1 < m < n 1 if m = n∈X. That corresponds to the left-over part of the n-chain c^(n) after removing the m-chain prefix. In particular, this means that for all n ∈ and c^(n)∈ C_n then: ∀ m ∈0n c^(n) = c^(n)^m c^(n)_m ∈X. As a remark, notice the link with algebraic rewriting. Let us use some terminology from that field. The relations from R define rewriting rules: λ w_1 g w_2 + h R→ λ/g w_1 r(g) w_2 + h where w_1, w_2 ∈X, λ∈∖0, g ∈ R, h ∈[]X such that w_1gw_2 does not belong to its support and r(g) := gg - g with g the coefficient of g in g. 
In that context, a word is said to give rise to a critical pair if two (or the same one twice) of those rewriting rules can be applied on parts of the word that overlap, while overall going from beginning to end of the word, giving possibly different results. If three rules can be applied, we talk about critical triples, if four, critical quadruples and so on. In general, these are called critical branchings. In the common case where R is a minimal non-commutative Gröbner basis (and thus V_M = R), let us now note that the set C_n of n-chains for n ≥ 3 is a subset of the words that give rise to a critical branching, 3-chains give rise to critical pairs. Indeed, a rewriting rule is applied on a word if it contains a leading monomial of R an obstruction in our case. Therefore, two rules will be simulaneoulsy applied on a 3-chain because it contains exactly two obstructions, and so on and so forth for higher degrees. For more details on algebraic rewriting and its connections with non-commutative Gröbner bases and Anick resolution, see <cit.>. § ANICK RESOLUTION We can now introduce the Anick resolution. It will be a resolution of the field made out of free right A-modules. Indeed, once A is augmented by []A, we can equip with a structure of right A-module using the external law of composition × A ∋ (λ, a) ↦λ(a) ∈. Similarly, we could define a structure of left A-module (or even of A-bimodule). Hence, even if in this paper we present a resolution of by right A-modules, with a few minor adaptations, one by left A-modules would work just as well (see <cit.>). The free modules in the resolution are defined from the linear hull of (the vector space generated by) the sets of chains in each degree. The differentials are defined inductively at the same time as the contracting homotopy proving that the complex is a resolution. It is helpful to define an order on the bases of the free modules. In order to do so, we will make use of the monomial order ≺ at hand. order-basis Let n ∈. Let C_n be the set of n-chains on an anti-chain V, as defined in Definition <ref>. We define an order < on the basis of the free right A-module C_n ⊗_ A as: ∀ c_1, c_2 ∈ C_n ∀ s_1, s_2 ∈ O(I(R)) c_1 ⊗s_1 < c_2 ⊗s_2def c_1s_1 ≺ c_2s_2. The order < is well-defined and total because V is an anti-chain (since it entails that c_1 ⊗s_1≠ c_2 ⊗s_2 implies that c_1 s_1 ≠ c_2 s_2). Moreover, it is a well-order (induced by the properties of ≺). It follows that, for every element in C_n ⊗_ A, there is a greatest term according to the order <, since such an element is a finite sum of terms. An alternative way to define this greatest term would be to use the monomial order on a polynomial that we associate to the element of C_n ⊗_ A, as we do in the next definition. LM The leading monomial (or high-term, as called by Anick <cit.>) of an element P := ∑_iλ_i c^(n)_i ⊗r_i∈ C_n ⊗_ A is defined as P := ∑_iλ_ic^(n)_ir_i∈X, where r_i is the unique normal form of r_i. We can now formulate the Anick resolution and prove its exactness. Anick resolution Let be a field. Let A be a -algebra augmented by with the section defined by η(1_) = 1_A. Let X|R be a presentation of A such that R is a minimal non-commutative Gröbner basis according to the monomial order ≺. Let O(I(R)) := X∖I(R) be the set of normal words. Let V := R be the set of leading monomials in R, called obstructions. For any n ∈, let C_n denote the set of n-chains on V as defined in Definition <ref>. 
The following is a free resolution of in the category of right A-modules: ⋯→ C_n+1⊗_ A d_n+1→ C_n ⊗_ A →⋯→ C_2 ⊗_ A d_2→ C_1 ⊗_ A d_1→ C_0 ⊗_ A →→ 0, where for n ≥ 1, the map of right A-modules d_n satisfies: ∀ c^(n)∈ C_n [d_n]c^(n)⊗ 1_A := c^(n)^n-1⊗c^(n)_n-1 + ω_c^(n), with either ω_c^(n)= 0 or its high-term verifies ω_c^(n)≺ c^(n). The proof is done by induction by constructing the differentials and contracting homotopy at the same time. Note that to prove exacteness at E in ⋯→ F δ_1→ E δ_0→⋯, it suffices to prove that δ_0 δ_1 = 0 and that there exists a -linear map [ι_0](δ_0)F such that δ_1 ι_0 = 𝕀_( δ_0). Since C_0 = 1, we can identify C_0 ⊗_ A with A for the initialisation, as a matter of simplifying notations. Then, define [d_1] C_1 ⊗_ AA as the map of right A-modules with: ∀ x ∈ C_1 = X d_1(x ⊗ 1_A) = x - η(x). Firstly, it is evident that d_1 = 0 since is -linear map and η is a section of it. The kernel of is spanned by the elements from A of the form s - η(s) where s ∈ O(I(R)). Indeed, every element of a ∈ A is written η(a) + (a - η(a)) and we have the decomposition from the augmentation: A = 1_A ⊕(). Defining i_0 on those elements: ∀ s = x_1 ⋯ x_ℓ∈ O(I(R)) i_0(s - η(s)) := ∑_j = 1^ℓ[]x_1⋯ x_j-1x_j ⊗x_j+1⋯ x_ℓ, (with the convention that x_1 ⋯ x_j-1 = 1 if j ≤ 1 and x_j+1⋯ x_ℓ = 1 if j ≥ℓ) and extending by -linearity on () gives us a map satisfying d_1 i_0 = 𝕀_(). Indeed for s = x_1 ⋯ x_ℓ∈ O(I(R)), we have: d_1 i_0(s - η (s)) = [d_1]∑_j = 1^ℓ[]x_1⋯ x_j-1x_j ⊗x_j+1⋯ x_ℓ = ∑_j = 1^ℓ[]x_1⋯ x_j-1[d_1]x_j ⊗x_j+1⋯ x_ℓ d_1-linear, = ∑_j = 1^ℓ[]x_1⋯ x_j-1x_j ⋯ x_ℓ - η(x_j) x_j+1⋯ x_ℓ definition of d_1, = ∑_j = 1^ℓ[]x_1⋯ x_j-1x_j ⋯ x_ℓ - η(x_1⋯ x_j) x_j+1⋯ x_ℓ η-linear, algebra morphism. Since η(1_) = 1_A[This is a requirement, trivially verified when η is considered as a morphism of unitary algebras and not simply -linear. Otherwise, in that latter case, η, being a section of , could satisfy η(1_) = 1_A + ω, where ω∈()], then η(x_1⋯ x_j) x_j+1⋯ x_ℓ = (x_1⋯ x_j) x_j+1⋯ x_ℓ, therefore the right-most and left-most of two consecutive terms in the sum cancel out. Remain only the left- and right-most terms of the entire sum : d_1 i_0(s - η (s)) = x_1 ⋯ x_ℓ - η((x_1 ⋯ x_ℓ)) = s - η(s). This proves exacteness of the sequence at C_0 ⊗_ A. Now, suppose that, for n ∈, the sequence: ⋯→ C_n+1⊗_ A d_n+1i_n⇄ C_n⊗_ A d_ni_n-1⇄ C_n-1⊗_ A ⇄⋯⇄ C_1 ⊗_ A d_1i_0⇄ C_0 ⊗_ A η⇄→ 0 has been proven exact up to C_n-1⊗_ A by defining the differentials d_1, ..., d_n and the contracting homotopy maps i_0, ..., i_n-1 that verify for all m ∈1n: (letting d_0 :=) * d_m-1 d_m = 0. * ∀ c^(m)∈ C_m [d_m]c^(m) = c^(m)^m-1⊗c^(m)_m-1 + ω_c^(m) where either ω_c^(m) = 0 or ω_c^(m)≺ c^(m). * d_m i_m-1 = 𝕀_(d_m-1). * ∀ v ∈(d_m - 1) ∖0i_m-1(v) = v (equality in X as in Definition <ref>). We have proven Properties <ref>, <ref>, and <ref> are satisfied at initialisation. Moreover, it is quite evident from the definition of i_0 that Property <ref> is verified. Let us define d_n+1 and i_n such that they prove exactness at C_n ⊗_ A. Define: [d_n+1] C_n+1⊗_ A C_n ⊗_ A [c^(n+1)⊗ 1_A] [c^(n)⊗t - [i_n-1 d_n]c^(n)⊗t] where c^(n) := c^(n+1)^n and t := c^(n+1)_n. By Property <ref> for m = n, it is evident that d_n d_n+1 = 0, proving Property <ref> for d_n+1. Letting ω_c^(n+1) := -[i_n-1 d_n]c^(n)⊗t we obtain the desired expression for d_n+1. Indeed, by Property <ref> for m = n, ω_c^(n+1) = d_n(c^(n)⊗t). But by Property <ref>, d_n(c^(n)⊗t) matches with c^(n-1)⊗r where c^(n-1) := c^(n+1)^n-1 and r := c^(n+1)_n-1. 
But since there are no overlaps between the last obstructions of c^(n+1) and of c^(n-1), r contains the last obstruction of c^(n+1) and is therefore not normal. Reducing it to its normal form r̂ implies that c^(n-1)r̂ is smaller than c^(n+1). Thus ω_c^(n+1)≺ c^(n+1), proving Property <ref> for d_n+1. Let us define recursively the -linear map i_n on (d_n). Let v = ∑_iλ_i c^(n)_i ⊗s_i∈(d_n). Assume that i_n has been defined for all v' ∈(d_n) with v'≺v such that it satisfies Properties <ref> and <ref> on those elements. Without loss of generality, assume that v coincides with c^(n)_0 ⊗s_0 where s_0 is normal. Then, since d_n(v) = 0, it follows that c^(n-1)_0 ⊗r_0 = - ω_v where [d_n]c^(n)_0 ⊗s_0 = c^(n-1)_0 ⊗r_0 + ω_c^(n)_0 in such a way that c^(n-1)_0 = c^(n)_0^n-1 and r_0 = c^(n)_0_n-1 s_0, as well as, ω_v = 1/λ_0ω_c^(n)_0 + [d_n]∑_i ≠ 0λ_i c^(n)_i ⊗s_i and thus by Property <ref>, c^(n-1)_0 ⊗r_0≺ c^(n)_0 s_0. This implies, since c^(n-1)_0 r_0 = c^(n)_0 s_0, that r_0 is not normal and thus contains an obstruction. Consider the obstruction in r_0 starting the furthest to the left. It will overlap with the last obstruction in c^(n)_0 since s_0 is normal. Therefore, we obtain an (n+1)-chain c^(n+1) and t a proper suffix of s_0 such that c^(n-1)_0 r_0 = c^(n)_0 s_0 = c^(n+1)t in X. Define: i_n(v) := v c^(n+1)⊗t + [i_n]v - v[d_n+1]c^(n+1)⊗t. This works because v - v[d_n+1]c^(n+1)⊗t < v since d_n+1 verifies the Property <ref> and thus cancellation on the leading term occurs. It follows that i_n verifies Property <ref> since we assumed i_n satisfies Property <ref> for elements with a smaller leading monomial. Finally, by recursive hypothesis, we have that Property <ref> is verified on elements with a smaller leading monomial. Hence: d_n+1 i_n (v) = v[d_n+1]c^(n+1)⊗t + v - v[d_n+1]c^(n+1)⊗t = v and thus, d_n+1 and i_n will ultimately verify Property <ref> on (d_n). This concludes the inductive proof. Let us consider, in the continuity of the examples throughout this paper, the algebra presented by X|R, where X = x, y, z and R = xxyx, xxx - xx, yxz - yx with the deglex monomial order induced by x ≻ y ≻ z augmented with the evalutation of polynomials at zero. We have V := R = xxyx, xxx, yxz and the graph of n-chains is given in Figure <ref>. We have, for all ζ∈ X and x_1 ⋯ x_ℓ∈ O(I(R)): d_1(ζ⊗1) = ζ i_0(x_1 ⋯ x_ℓ) = x_1 ⊗x_2 ⋯ x_ℓ Then: d_2(xxx ⊗1) = x ⊗xx - i_0 d_1 (x ⊗xx) definition of d_2 = x ⊗xx - i_0 (xxx) definition of d_1 = x ⊗xx - i_0 (xx) reduction = x ⊗xx - x ⊗x definition of i_0 Similarly, we compute: d_2(xxyx ⊗1) = x ⊗xyx d_2(yxz ⊗1) = y ⊗xz - y ⊗x The 3-chains are xxyxxyx, xxyxxx, xxyxz, xxxyx, xxxx. 
Then:
d_3(xxxyx ⊗1) = xxx ⊗yx - i_1 d_2 (xxx ⊗yx)   [definition of d_3]
= xxx ⊗yx - i_1 (x ⊗xxyx - x ⊗xyx)   [definition of d_2]
= xxx ⊗yx - i_1 (x ⊗ 0 - x ⊗xyx)   [reduction]
= xxx ⊗yx + xxyx ⊗1   [definition of i_1 (see (<ref>))]
In an analogous manner, we compute:
d_3(xxyxxyx ⊗1) = xxyx ⊗xyx
d_3(xxyxxx ⊗1) = xxyx ⊗xx - xxyx ⊗x
d_3(xxyxz ⊗1) = xxyx ⊗z - xxyx ⊗1
d_3(xxxx ⊗1) = xxx ⊗x
The 4-chains are:
{xxyxxyxxyx, xxyxxyxxx, xxyxxyxz, xxyxxxyx, xxyxxxx, xxxyxxyx, xxxyxxx, xxxyxz, xxxxxyx, xxxxxx}
We thus have:
d_4 (xxxyxxx ⊗1) = xxxyx ⊗xx - i_2 d_3 (xxxyx ⊗xx)   [definition of d_4]
= xxxyx ⊗xx - i_2 (xxx ⊗yxxx + xxyx ⊗xx)   [definition of d_3]
= xxxyx ⊗xx - i_2 (xxx ⊗yxx + xxyx ⊗xx)   [reduction]
= xxxyx ⊗xx - xxxyx ⊗x - i_2 (xxyx ⊗xx - xxyx ⊗x)   [definition of i_2 (see (<ref>))]
= xxxyx ⊗xx - xxxyx ⊗x - xxyxxx ⊗1   [definition of i_2 (see (<ref>))]
We compute in the same way:
d_4 (xxyxxyxxyx ⊗1) = xxyxxyx ⊗xyx
d_4 (xxyxxyxxx ⊗1) = xxyxxyx ⊗xx - xxyxxyx ⊗x
d_4 (xxyxxyxz ⊗1) = xxyxxyx ⊗z - xxyxxyx ⊗1
d_4 (xxyxxxyx ⊗1) = xxyxxx ⊗yx + xxyxxyx ⊗1
d_4 (xxyxxxx ⊗1) = xxyxxx ⊗x
d_4 (xxxyxxyx ⊗1) = xxxyx ⊗xyx - xxyxxyx ⊗1
d_4 (xxxyxz ⊗1) = xxxyx ⊗z - xxxyx ⊗1 - xxyxz ⊗1
d_4 (xxxxxyx ⊗1) = xxxx ⊗xyx
d_4 (xxxxxx ⊗1) = xxxx ⊗xx - xxxx ⊗x
We can compute in that fashion any differential, by computing all the previous ones that are needed.

§ ACKNOWLEDGEMENTS

I would like to thank Cyrille Chenavier for his continuous guidance and support all along the process of writing this paper, providing many relevant remarks and insights on the subject. I would also like to thank Thomas Cluzeau for proofreading this note.
http://arxiv.org/abs/2307.04253v1
20230709193405
The equality case in the substatic Heintze-Karcher inequality
[ "Stefano Borghini", "Mattia Fogagnolo", "Andrea Pinamonti" ]
math.DG
[ "math.DG", "math.AP", "49Q10 (Primary) 53C24, 58J32, 53E10, 53C21 (Secondary)" ]
We provide a rigidity statement for the equality case of the Heintze-Karcher inequality in substatic manifolds. We apply this result in the warped product setting to fully remove assumption (H4) in Brendle's celebrated characterization of constant mean curvature hypersurfaces in warped products.

MSC (2020): 49Q10, 53C24, 58J32, 53E10, 53C21.
Keywords: Heintze-Karcher inequality, substatic manifolds, constant mean curvature, Alexandrov theorem.

§ INTRODUCTION AND MAIN STATEMENTS

The Heintze-Karcher inequality usually denotes the geometric inequality that, in its simplest form for domains Ω sitting in ℝ^n with smooth and strictly mean-convex boundary Σ, reads as (n-1)/n ∫_Σ 1/H dσ ≥ |Ω|, where H denotes the mean curvature of Σ. This inequality, which was essentially contained in the earlier seminal paper by Heintze-Karcher <cit.>, was first pointed out in this very form by Ros <cit.>, where it was also observed that it holds in a general manifold with nonnegative Ricci curvature. Moreover, he showed that equality in <ref> is in force only if Ω is a flat Euclidean ball. Li-Xia <cit.> vastly generalized Ros' Heintze-Karcher inequality to the setting of substatic Riemannian manifolds with horizon boundary. We recall that with this locution we mean a Riemannian manifold (M, g) endowed with a nonnegative smooth function f (the substatic potential) satisfying f Ric - ∇∇ f + Δ f g ≥ 0, and where ∂ M = {f = 0} is a compact, minimal, regular level set of f (i.e. with ∇ f ≠ 0 on ∂ M). In particular, the boundary of such a manifold is empty if and only if f is strictly positive. We will occasionally refer to the tensor on the left-hand side of <ref> as the substatic Ricci tensor. Very interestingly, it can in fact be checked to arise as the Ricci tensor of a suitable affine connection on (M, g), see <cit.>. The condition <ref> stems naturally from the Einstein field equations of General Relativity, and it is easily observed to hold for initial data sets of static spacetimes. We refer the interested reader to <cit.> and <cit.> for the explicit computations. Letting Σ be a smooth strictly mean-convex hypersurface homologous to ∂ M, and Ω the bounded set enclosed by Σ and ∂ M, the substatic Heintze-Karcher inequality <cit.> has been sharpened in <cit.> as (n-1)/n ∫_Σ f/H dσ ≥ ∫_Ω f dμ + c_∂ M ∫_∂ M |∇ f| dσ, where c_∂ M = (n-1)/n ∫_∂ M |∇ f| dσ / ∫_∂ M |∇ f| [Δ f/f - (∇∇ f/f)(∇ f/|∇ f|, ∇ f/|∇ f|)] dσ. The above constant has been shown to be well defined and strictly positive in <cit.>, given the existence of a strictly mean-convex Σ homologous to ∂ M as in our case. Our first result shows that a strong rigidity is triggered when <ref> holds with the equality sign. Let (M, g) be a substatic Riemannian manifold with connected horizon boundary ∂ M, such that the substatic potential f satisfies ∇∇ f/f∈ C^0, α(M ∪∂ M) for α∈ (0, 1). Let Σ be a connected, smooth, strictly mean-convex hypersurface homologous to ∂ M. Then, the Heintze-Karcher inequality <ref> holds with equality if and only if the domain Ω such that ∂Ω = Σ⊔∂ M is isometric to ([s_0, s] ×∂ M, ds ⊗ ds/f(s)^2 + s^2 g_∂ M). A version of inequality <ref> was originally obtained by Brendle <cit.> as the crucial step for obtaining an Alexandrov-type Theorem in warped product manifolds.
For proving <ref>, such warped products were assumed to satisfy a set of assumptions (H1)-(H3), recalled and discussed in <ref> below. As we are going to recall, if they are satisfied, then in particular the warped product is substatic with horizon boundary. Thus, <ref> directly yields a rigidity statement for Brendle's Heintze-Karcher inequality if the additional, technical <ref> is satisfied. Such assumption was fundamental in the elliptic proof of <ref> conceived by Li-Xia <cit.> and reworked by the second and third named authors <cit.>. Exploiting a new synergy between <ref> and the geodesic flow proof worked out in <cit.> to provide <ref> in the warped product setting, we are able to remove assumption <ref> in the special warped product geometry, endowing <cit.> of an optimal rigidity statement. Let (M, g) be a substatic warped product with cross section ∂ M = N, of the form ([s_0, s) × N, ds ⊗ ds/f(s)^2 + s^2 g_N). Let Σ be a connected, smooth, strictly mean-convex hypersurface homologous to N satisfying <ref> with the equality sign. Then, Σ = {s = c} for some c ∈ (s_0, s). We address the reader to the beginning of <ref> for a more detailed presentation of the very peculiar proof of the above result. It was already observed in <cit.> that a constant mean curvature hypersurface must fulfil the identity in <ref>, as a consequence of a straightforward Minkowski identity <cit.>. Thus, <ref> directly provides the following characterization of hypersurfaces of constant mean curvature in substatic warped product, improving on <cit.>. Let (M, g) be a substatic warped product with a connected horizon boundary, of the form ([s_0, s) × N, ds ⊗ ds/f(s)^2 + s^2 g_N). Let Σ be a connected smooth hypersurface homologous to ∂ M = N of constant mean curvature. Then, Σ = {s = c} for some c ∈ (s_0, s). As clarified with additional details in <ref>, the substatic warped products of the form <ref> correspond precisely to the family of warped products considered in the Alexandrov-type Theorem <cit.>. On the other hand, such result was proved under an additional assumption, (H4), substantially prescribing the Ricci curvature being smallest in the radial direction. While (H4) is verified on a number of known model solutions, such as the Schwarz­schild–de Sitter and Reissner–Nordström manifolds mentioned as applications in <cit.>, there are important examples where (H4) does not hold. Indeed, the Schwarz­schild–Anti de Sitter warped product M=[s_0,+∞)× N , g=ds⊗ ds/f^2+s^2 g_N , f=√(-1+s^2-2ms^2-n) , with cross-section N satisfying _g_N≥ -(n-2)g_N and such that -√((n-2)^n-2/n^n)<m≤ 0 is a substatic manifolds with horizon boundary that does not satisfy (H4). In the special case where _g_N= -(n-2)g_N, the warped product (<ref>) is a well known vacuum static solution of the Einstein Fields Equations that has been investigated to some extent in the literature (see e.g. <cit.> and references therein) and constituted the model for the Lee-Neves Riemannian Penrose Inequality <cit.>. <ref>, stemming from our novel proof, allows to fully drop the extra hypothesis (H4), hence in particular also applies, for example, to the metric <ref>. §.§ Further directions and remarks We conclude mentioning, without any attempt to be complete, a couple of papers where the extra assumption <ref> or (H4) is added in connection with <cit.> and <cit.>. 
Namely, in <cit.>, the authors provided a quantitative version of Brendle's Alexandrov Theorem by exploiting the alternative proof through elliptic techniques devised in <cit.>; consequently they assume <ref>. In <cit.>, a far-reaching nonsmooth version of Brendle's geodesic flow technique is worked out, leading to a characterization of sets with finite perimeter with a distributional notion of constant mean curvature; as in <cit.>, (H4) is assumed, or some suitable weaker variant (see <cit.> and <cit.>). It may then be fruitful to elaborate on our arguments leading to <ref>, based on the combination of the elliptic and geodesic flow techniques, in order to go beyond <ref> and (H4) also in these kinds of more technical results. §.§ Structure of the paper In <ref> we provide the results of an elementary but, to our knowledge, not yet available analysis of warped product manifolds, and furnish a comparison with Brendle's set of assumptions (H1)-(H4). In <ref> we prove a generalized version of <ref>, where hypersurfaces Σ with several connected components as well as disconnected horizons are taken into account. In <ref> we prove <ref>, and deduce <ref>. We conclude the work with an Appendix containing the proofs of the computational results gathered in <ref>. §.§ Acknowledgements This work was initiated during a visit of A. P. at the Centro De Giorgi in Pisa. He warmly thanks the institution for the excellent working conditions offered. Part of this work has been carried out while S. B. and M. F. were attending the Thematic Program on Nonsmooth Riemannian and Lorentzian Geometry that took place at the Fields Institute in Toronto. They warmly thank the staff, the organizers and the colleagues for the wonderful atmosphere and the excellent working conditions set up there. During the preparation of the work, M. F. was supported by the European Union – NextGenerationEU and by the University of Padova under the 2021 STARS Grants@Unipd programme “QuASAR". The authors are members of Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA), which is part of the Istituto Nazionale di Alta Matematica (INdAM), and are partially funded by the GNAMPA project “Problemi al bordo e applicazioni geometriche". A.P. and S.B. are also supported by MIUR and the University of Trento, Italy. The authors are grateful to Luciano Mari, Lorenzo Mazzieri, Mario Santilli, Alessandro Savo and Mingxuan Yang for the many useful conversations had during the preparation of the work, that substantially helped to improve the quality. § SUBSTATIC WARPED PRODUCTS We consider warped product manifolds (I × N, dr ⊗ dr + h^2(r) g_N), for (N,g_N) a closed (n-1)-dimensional Riemannian manifold, and I = [0, r), with r > 0. We will always assume that {r=0} is either a horizon boundary or a single point representing the origin of the polar coordinates. Of course, in the latter case, in order for the metric to be smooth at the point with r=0, the cross section (N,g_N) must be homothetic to a round sphere. When ḣ≠ 0, we will find convenient to write warped products also in the equivalent form (ds⊗ ds)/ḣ(r)^2+s^2 g_N, for s=h(r). Since we will see in <ref> and <ref> below that the models we are interested in satisfy f=ḣ, an advantage of this form is that the metric now depends directly on the substatic potential f, without the need of the auxiliary warping function h. Furthermore, more importantly, the new coordinate s allows to write the function f and the metric g in the explicit form (<ref>). 
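As a quick symbolic sanity check (ours, not contained in the paper), one can verify that the choice f = ḣ always makes the radial component of the substatic tensor vanish, which is the radial identity assumed in the propositions below. The sketch uses the standard warped-product formulas Ric(∂_r, ∂_r) = -(n-1)ḧ/h, ∇∇f(∂_r, ∂_r) = f̈ and Δf = f̈ + (n-1)ḣḟ/h for a metric dr⊗dr + h(r)^2 g_N and a radial function f(r); these formulas are taken for granted here.

```python
import sympy as sp

r, n = sp.symbols('r n')
h = sp.Function('h')(r)
f = sp.diff(h, r)                                   # the potential of the models: f = h'(r)

# Standard warped-product formulas for g = dr⊗dr + h(r)^2 g_N and a radial function f(r)
Ric_rr  = -(n - 1) * sp.diff(h, r, 2) / h           # Ric(∂_r, ∂_r)
Hess_rr = sp.diff(f, r, 2)                          # ∇∇f(∂_r, ∂_r)
Lap_f   = sp.diff(f, r, 2) + (n - 1) * sp.diff(h, r) * sp.diff(f, r) / h

Q_rr = f * Ric_rr - Hess_rr + Lap_f * 1             # g(∂_r, ∂_r) = 1
print(sp.simplify(Q_rr))                            # prints 0: the radial substatic identity holds
```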
Let (M, g) be a substatic warped product of the form <ref> with positive nondecreasing h, with either a empty boundary or a horizon boundary. If the substatic potential f is a function of the coordinate r only and [f - ∇∇ f + Δ f g](∇ r, ∇ r) = 0 then, up to multiplying f and/or g_N by a positive constant, the manifold (M,g) and the substatic potential f satisfy one of the following: (i) there exists c∈ such that f=f(r) satisfies f̈+(n-2)c f≥ 0 and g=dr⊗ dr+ g_N , _g_N≥ (n-2)c g_N , (ii) there exist c∈ and a function η:[h(r̅)^-n,h(0)^-n]→ with η”≥ 0 such that g=ds⊗ ds/f^2(s)+s^2 g_N , _g_N≥ (n-2)c g_N , f = √(c+s^2η(s^-n)) . In particular, s=h(r) and f(s)=ḣ(r). We point out that the family of warped products considered by Brendle <cit.> correspond to the family in (ii) above, see <ref> for more details on this. In the proof of <ref> we are going to exploit the following strengthening of Proposition <ref>, in force when the substatic Ricci tensor vanishes in an additional direction. Under the assumptions of <ref>, if we also assume that h is not constant and that for every t∈[a,b], a,b∈ there exist x∈ N and a nontrivial X∈ T_xN⊂ T_(t,x)M such that [f - ∇∇ f + Δ f g](X, X) = 0 , then, up to multiplying f by a positive constant, in the domain [a,b]× N the metric g and the function f have the form g=ds⊗ ds/f^2+s^2 g_N , _g_N≥ (n-2)c g_N , f = √(c-λ s^2-2m s^2-n), where f = ḣ and λ, m ∈. The proofs of <ref> and <ref> involve elementary but lengthy calculations and have been included in the Appendix. The potential f given in <ref> coincides with that of the de Sitter/Anti de Sitter–Schwarzschild manifold. When the cross-section is Einstein, these are known to be, together with cylinders, the only static warped product manifolds with compact horizon boundary, that is, with vanishing substatic Ricci tensor (see <cit.> or <cit.>). <ref> constitutes thus a more general warped product classification result. It is worth discussing in some detail the regularity of η at the horizon boundary. Let s_0 be the value of s corresponding to the horizon. We can write η in terms of f as η(s^-n)=s^-2(f^2-c), hence in particular the value η(s_0^-n)=-cs_0^-2 at the boundary is finite. We can also show that η'(s_0^-n) is well defined. In fact, we can easily compute | f|=f(s)f'(s)=sη(s^-n)-n/2s^1-nη'(s^-n) , hence the regularity of η' up to the boundary follows from the smoothness of f up to the boundary. On the other hand, there does not seem to be an easy way to show the regularity of the second and higher derivatives of η up to the boundary. This is the very issue that in the end does not allow us to infer that <ref> holds in the warped product case. We are able to show that it holds under the assumption that η is C^2, α up to the boundary. Let (M, g) be a substatic warped product of the form (<ref>). If the function η appearing in (<ref>) is C^2,α up to the boundary then ∇∇ f/f ∈ C^0,α(M ∪∂ M). We compute ∇∇ f/f = (f'(s)^2/f(s)^2 + f”(s)/f(s)) ds ⊗ ds + s f(s) f'(s) g_N = [f'(s)^2 + f”(s) f(s)] dr ⊗ dr + s f(s) f'(s) g_N. Moreover, for any function f having the form (<ref>), it holds sf(s)f'(s) = s^2η(s^-n)-n/2s^2-nη'(s^-n) , f'(s)^2 + f”(s) f(s) = η(s^-n)+n(n-3)/2s^-nη'(s^-n)+n^2/2s^-2nη”(s^-n) . Since s=s_0>0 at the horizon, from these computations we immediately see how the assumed regularity of η implies that of ∇∇ f / f up to the boundary ∂ M. The warped products in <ref> satisfy η(t) = -λ -2mt, hence η” = 0. 
In particular, if such splitting takes place up to the horizon boundary, then <ref> implies that <ref> holds. §.§ Comparison with Brendle's assumptions. In the case of nonempty boundary, in <cit.>, warped products of the form <ref> with _g_N≥ (n-2) c g_N for some c∈ are assumed to satisfy the following set of assumptions. (H1) ḣ(0) = 0, ḧ(0) > 0, (H2) ḣ(r) > 0 for any r ∈ (0, r) (H3) The function F(r) = 2 ḧ(r)/h(r) -(n-2) c -ḣ(r)^2/h^2(r) is nondecreasing, (H4) It holds ḧ(r)/h(r)+c-ḣ(r)^2/h(r)^2 > 0 . Assumptions (H1)-(H3) correspond precisely to case (ii) in <ref>. Indeed, since f = ḣ(r), assumption (H1) implies that {f= 0} coincides with the boundary ∂ M, that its mean curvature =(n-1)ḣ(0)/h(0) vanishes and that | f|=ḧ(0)≠ 0 at the boundary. In other words, (H1) is equivalent to the request that the boundary is of horizon type. (H2) entails the request that f is positive in the interior of M, while (H3) is instead equivalent to the substaticity of g, with substatic potential f = ḣ, as shown in <cit.>. It is finally worth pointing out that, remarkably, for a warped product <ref>, equation <ref> is always satisfied for f = ḣ. This is due to the following formula, contained in the proof of <cit.> f - ∇∇ f + Δ f g = f(_g_N - (n-2) c g_N) + 1/2ḣ(r)^2 Ḟ(r) g_N. As pointed out in the introduction, we will never need (H4) in our analysis. For warped products having the form (<ref>), (H4) is equivalent to sf(s)f'(s)>-c+f(s)^2. Substituting the formula for f into (<ref>) this is in turn equivalent to η'<0. In particular, the model solutions described in (<ref>) satisfy (H4) if and only if m>0. § WARPED PRODUCT SPLITTING OF SUBSTATIC MANIFOLDS In this section, we are going to prove the following result, more general than <ref>, since it deals with possibly disconnected hypersurfaces and explicitly treats the case of components of Σ that are homologous to a point. In particular, it fully encompasses the case of an ambient M with empty boundary. Let (M, g) be a substatic Riemannian manifold with possibly empty horizon boundary ∂ M such that the substatic potential f satisfies ∇∇ f/f∈ C^0, α(M ∪∂ M) for α∈ (0, 1). Let Σ be a smooth strictly mean-convex hypersurface homologous to a possibly empty union N = N_1 ⊔…⊔ N_l of connected components of ∂ M. Let Σ_1, …, Σ_k, k ≥ l, the connected components of Σ. Assume that, for 1 ≤ j ≤ l, each Σ_j is homologous to the component N_j of ∂ M, while for j > l Σ_j is null-homologous. Let Ω_j be the connected region enclosed by Σ_j and N_j if 1 ≤ j ≤ l, and the connected region enclosed by Σ_j if l < j ≤ k. Let also f_j the restriction of f on Ω_j. Then, the Heintze-Karcher inequality <ref> holds with equality if and only if the following hold. (i) For 1≤ j≤ l, (Ω_j, g) is isometric to ([s_0^j, s_1^j] × N_j, ds ⊗ ds/f_j(s)^2 + s^2 g_N_j). (ii) For l < j ≤ k , (Ω_j, g) is isometric to ([0, s_1^j] × N_j, ds ⊗ ds/f_j(s)^2 + (s/f_j(0))^2 g_^n-1). (iii) We have f_1(s_0^1)f_1'(s_0^1)/s_0^1=… =f_l(s_0^l)f_l'(s_0^l)/s_0^l. To prove <ref> we start from the full statement of the Heintze-Karcher inequality <ref>, given in <cit.>. It involves the solution u to the boundary value problem Δ u = - 1 + Δ f/f u Ω u=c_N N u = 0 Σ, where ∂Ω = Σ⊔ N, with N union of connected hypersurfaces N_1 ⊔…⊔ N_l, l ∈ and c_N is the constant given by <ref>. It reads n-1/n∫_Σf/ dσ -∫_Ω f dμ - ∑_j ∈ J c_j ∫_N_j∇ f dσ ≥ n/n-1∫_Ω |∇∇ u - Δ u/n g - u(∇∇ f/f - Δ f/nf g)|^2 + Q(∇ u - u/f∇ f, ∇ u - u/f∇ f) dμ, where Q = f - ∇∇ f + Δ f g. It yields in particular the following. 
Let (M, g) be a substatic Riemannian manifold with horizon boundary satisfying <ref>, and let Σ be a smooth hypersurface homologous to N = N_1 ⊔…⊔… N_l for l ∈. Let c_N be given by <ref>, and let u be the solution to <ref>. Then, if equality holds in <ref>, then ∇∇ u - Δ u/ng - u (∇∇ f/f - Δ f/nf g) = 0 and [f - ∇∇ f + Δ f g](∇ u - u/f∇ f, ∇ u - u/f∇ f) = 0 in Ω∖ N. The following basic yet fundamental observation provides a conformal warped product splitting for the metric in Ω. It exploits (<ref>) only. In the remainder of this section, we agree to define N_j = ∅ when j > l. In the assumptions and notations of <ref>, let ϕ = u/f on Ω∖ N. Then, there exists a coordinate ρ on Ω_j∖ N_j such that ϕ depends on ρ alone and Ω_j∖ N_j splits as [0, ρ_j) ×Σ_j endowed with the metric g_j = f^2_j (dρ⊗ dρ + ϕ_j^2(ρ)/f_j^2(0, θ) g_Σ_j), where g_Σ_j is the metric induced on Σ_j by g, ϕ_j is the restriction of ϕ on Ω_j∖ N_j, and θ = (θ^1, …, θ^n-1) are coordinates on Σ_j. Moreover, ρ_j = +∞ for 1≤ j ≤ l, and finite for l< j ≤ k. We focus on a single Ω_j∖ N_j, and drop for notational convenience the subscript j. Consider the conformal metric g̃ = f^-2 g. Then, it is readily checked that (<ref>) is equivalent to ∇∇_g̃ ϕ - Δ_g̃ϕ/n g̃ = 0 in Ω∖ N. We recall moreover that a Hopf Lemma holds for u on Σ <cit.>, and consequently Σ is a regular level set for ϕ. Then, by classical results (<cit.>, see otherwise <cit.> or <cit.>), there exists a coordinate ρ such that ϕ is a function of ρ alone and such that (Ω, g̃) splits as [0, ρ) ×Σ endowed with g̃ = dρ⊗ dρ + ϕ^2(ρ)/ϕ^2(0)g̃_Σ, with ρ being infinite if (and only if) N is nonempty. In <ref>, g̃_Σ is the metric induced on Σ by g̃. This proves <ref>. Before providing the proof of <ref>, we point out, as another fundamental, although almost trivial, consequence of the assumption <ref>, that ∇ f is constant on each connected component of ∂ M. Let (M, g) be a substatic Riemannian manifold with potential f and nonempty horizon boundary. Assume that (∇∇ f)/f is continuous up to the boundary ∂ M. Then, ∇ f is constant on each connected component of ∂ M. Let N be a connected component of ∂ M, and X any vector field in TN. We have ⟨∇∇ f^2 , X⟩ = 2 ∇∇ f(∇ f, X). on N. On the other hand, since ∇∇ f /f is continuous up to the boundary {f=0}, we necessarily have ∇∇ f = 0 on N. By (<ref>), we conclude that ∇ f is constant on N. In order to complete the proof of <ref>, we are substantially left to prove that the substatic potential f depends on ρ alone. This information, plugged into <ref>, implies that (Ω, g) is in fact a warped product, and the conclusion follows from <ref>. To achieve this goal, we are going to exploit <ref>. Again, we drop the dependency on j. We consider again the conformal setting g̃ = f^-2 g. Observe that the substatic Ricci tensor just translates into f - ∇∇ f + Δ f g = f _g̃ - (n-1) ∇∇_g̃ f + 2(n-1)df ⊗ df/f. Let T be the tensor in the right-hand side above, and observe that substaticity amounts to T ≥ 0. Moreover, letting again ϕ = u/f, the condition <ref> reads T (∇ϕ, ∇ϕ) = (ϕ'(ρ))^2 T (∇ρ, ∇ρ) = 0, where ∇ϕ =ϕ'(ρ) ∇ρ is due to ϕ being a function of ρ alone, by <ref>. Inspired by the proof of <cit.>, consider now, for θ^i, i ∈{1, …, n-1} a local coordinate on Σ and λ∈, the vector field Y_i = ∇ρ + λ∂_i, where we denoted ∂_i = ∂ /∂θ^i. The condition T(Y_i, Y_i) ≥ 0, coupled with (<ref>), yields, at any fixed point p ∈Ω, T(Y_i, Y_i) = 2λ T_iρ + λ^2 T_ii≥ 0 for any λ∈. This can actually happen only if T_iρ = 0. 
Such condition reads (∇∇)^g̃_iρ f = 2 ∂_ρ f ∂_j f. Computing the g̃-Hessian of f from the expression <ref>, the above identity becomes ∂^2_iρ f - ϕ”(ρ)/ϕ'(ρ)∂_i f = 2 ∂_ρ f/f. One can now directly check that, as a consequence of the above relation, we have ∂_ρ(∂_i f/f^2 ϕ'(ρ)) = 0, so that, given 0 ≤ρ_1 < ρ_2 < ρ, we get ∂_i (1/f) (ρ_2, θ) = ϕ'(ρ_2)/ϕ'(ρ_1)∂_i (1/f) (ρ_1, θ). Integrating both sides along θ^i in (θ^i_0, θ^i_1), and omitting to explicate the dependencies on θ^m for m ≠ i, we finally get ϕ'(ρ_2) f(ρ_2, θ^i_0) - ϕ'(ρ_2)f(ρ_2, θ^i_1)/(ϕ'(ρ_2))^2f(ρ_2, θ^i_0)f(ρ_2, θ^i_1) = ϕ'(ρ_1) f(ρ_1, θ^i_0) - ϕ'(ρ_1)f(ρ_2, θ^i_1)/(ϕ'(ρ_1))^2f(ρ_1, θ^i_0)f(ρ_1, θ^i_1). We are now going to show that the left hand side converges to 0 as ρ_2 →ρ. Since we will need to tell between N_j and N = N_1 ⊔…⊔ N_l, we restore the dependencies on j when dealing with a specific connected component N_j of N. Recall also that we say that N_j is empty when j > l. If N_j is empty, then the limit ρ_2 →ρ corresponds to approaching a particular point p in each connected component of Ω, and so the numerator on the left hand side of <ref> converges to zero, while the denominator stays bounded away from zero. To understand the case of a nonempty N, recall first that ϕ = u/f, and observe that ∇ρ = 1/f. Then, we have (f ϕ')^2(ρ_2, θ) = f^4 (∇ u^2/f^2 + u^2/f^4∇ f^2 - 2u/f^3⟨∇ u, ∇ f⟩)(ρ_2, θ), where all the quantities are understood in terms of g. Since u attains smoothly the datum on N_j (see <cit.>), and f → 0 on N_j, that we are approaching as ρ_2 →∞, we deduce that the limit of the above quantity is given by c_N^2 ∇ f^2_| N_j > 0, that crucially is constant by <ref>. Thus, again, this implies that the left hand side of <ref> vanishes in the limit as ρ_2 →∞, since so does the numerator, while the denominator tends to c_N^2 ∇ f^2_| N_j > 0. We conclude, finally, that the numerator of the right hand side of <ref> vanishes for any ρ_1 ∈ [0, ρ), implying that f does not depend on θ^i. Being i ∈{1, …, n-1} arbitrary, we deduce that f depends on ρ only. This also implies that f depends on the g-distance from Σ only, and thus allows to write g on Ω as a warped product based at N_j if this is nonempty, and on a spherical cross-section otherwise. We can thus invoke our characterization of substatic warped products <ref>, since <ref> just amounts to <ref>. Moreover, since f is also constant on Σ, the cylindrical situation <ref> cannot arise either, since this would imply that the mean curvature of Σ is zero, against the assumption of strict mean-convexity. We are thus left with <ref>. Since the above argument works for any j, it follows that every connected component Ω_j must have the structure prescribed by points (i) and (ii) of the theorem. To prove point (iii), we start by observing that all pieces having the form (i) or (ii) also saturate the Heintze-Karcher inequality individually. Imposing equality in (<ref>) in all connected components Ω_j for 1≤ j≤ l as well as equality in the whole domain Ω, we deduce c_N ∫_N ∇ f dσ = ∑_j=1^l c_N_j∫_N_j∇ f dσ. Using the explicit expression of the metric in the domain Ω_j, provided by point (ii) of the theorem proved above, we also compute c_N = 1/n∑_j=1^l k_j|N_j|/∑_j=1^lk_j^2/s_0^j|N_j| , c_N_j = 1/ns_0^j/k_j , where k_j is the (constant) value of | f| on N_j={s=s_0^j}∩Ω_j. More explicitly k_j=f(s_0^j)f'(s_0^j). Equation (<ref>) can then be rewritten as (∑_j=1^l k_j|N_j|)^2 = (∑_j=1^lk_j^2/s_0^j|N_j|)(∑_j=1^l s_0^j|N_j|) . 
We now show that this equality forces k_1/s_0^1=… =k_l/s_0^l, concluding the proof of point (iii) of the Theorem. To this end, we actually prove the following more general statement: given positive numbers α_1,…,α_l,β_1,…,β_l, if it holds (∑_j=1^lα_j)^2 = (∑_j=1^lα_jβ_j)(∑_j=1^lα_j/β_j) , then β_1=…=β_l. A way to show this is via a direct computation: expanding the above terms we have ∑_j=1^lα_j^2+∑_i≠ jα_iα_j = ∑_j=1^lα_j^2+∑_i≠ jβ_i/β_jα_iα_j . Simplifying the equal terms on the left and right hand side and exploiting the symmetry in the indexes i and j, the above formula gives 2∑_i< jα_iα_j = ∑_i< j(β_i/β_j+β_j/β_i)α_iα_j = ∑_i< jβ_i^2+β_j^2/β_iβ_jα_iα_j . Since β_i^2+β_j^2≥ 2β_iβ_j, with equality if and only if β_i=β_j, the wished result follows at once. § HEINTZE-KARCHER RIGIDITY IN SUBSTATIC WARPED PRODUCTS We start by exploiting Brendle's monotonicity formula to deduce some useful geometric properties along an evolution of a hypersurface fulfilling the identity in <ref>. We focus on the case of a nonempty horizon boundary ∂ M = N and of Σ homologous to it; the case of Σ null-homologous and in particular the case of an ambient M with empty boundary is already fully encompassed by <ref>. Consider the conformal metric g̃ = f^-2 g, and let Ω_t = {x ∈Ω | ρ(Σ, x) ≥ t}, where ρ is the g̃-distance, and Ω as above is the region enclosed between Σ and ∂ M. Let Σ_t = ∂Ω_t ∖∂ M. Crucially, the mean curvature of Σ_t is easily seen to remain strictly positive if the initial Σ is strictly mean-convex (see <cit.>). Let Q(t) = ∫_Σ_tf/ dσ, where all the integrated quantities are expressed in terms of the original metric g. Then, utilizing, as in <cit.>, classical evolution equations (see e.g. <cit.>) for our normal flow of speed f, and plugging in the identity Δ_Σ_t f = Δ f - ∇∇ f (ν, ν) - (∇ f, ν) (again in terms of g), one gets Q'(t) = - n/n-1∫_Σ_t f^2 dσ - ∫_Σ_t(f/)^2 [^2 + ( - ∇∇ f/f +Δ f/f g)(ν, ν)] dσ. Let now t ∈ (0, ∞) be such that Σ_τ is smooth for any τ∈ [0, t]. One has, applying the coarea formula, Q(0) -Q(t) = -∫_0^t Q'(τ) dτ = n/n-1∫_Ω∖Ω_t f dμ + ∫_0^τ∫_Σ_τ(f/)^2 [^2 + ( - ∇∇ f/f +Δ f/f g)(ν, ν)] dσ. As long as Q(t) is smooth, Brendle's Heintze-Karcher inequality <cit.> states that Q(t) ≥n/n-1∫_Ω_t f dμ + c_N∫_∂ M∇ f dμ. Since equality holds in the Heintze-Karcher inequality for the initial Σ, that is Q(0) = n/n-1∫_Ω f dμ + c_N∫_∂ M∇ f dμ, we get, applying <ref> and <ref> to the left hand side of <ref>, that ∫_Σ_τ(f/)^2 [^2 + ( - ∇∇ f/f +Δ f/f g)(ν, ν)] dσ = 0 for any τ∈ [0, t]. We deduce the following information on the evolution of Σ, as long as it remains smooth. Let (M, g) be a substatic warped product of the form <ref>, with a nonempty connected horizon boundary ∂ M. Let Σ = ∂Ω∖∂ M be a smooth, embedded, connected hypersurface homologous to ∂ M, such that <ref> holds with equality sign. Let Σ_t = {x ∈Ω | ρ(Σ, x) = t}, where ρ is the distance in the conformal metric g̃ = f^-2 g. Then, Σ_t is a totally umbilic hypersurface such that [f - ∇∇ f + Δ f g] (ν, ν) = 0, as long as Σ_t evolves smoothly. We now illustrate how we are going to get <ref>. We first show that Σ_t remains smooth for all of its evolution, in <ref>. This is fundamentally due to the total umbilicity of the evolution coupled with the Heintze-Karcher inequality itself, preventing the second fundamental form to blow up, see <ref>. 
Then, we adapt to the substatic setting an argument of Montiel <cit.>, yielding in our case a very peculiar dichotomy: if a totally umbilic hypersurface satisfying <ref> is not a cross-section of the warped product, then a vector field X tangent to ∂ M is found on the region spanned by Σ such that the condition <ref> in <ref> is also satisfied (see <ref>). But then, having showed that Ω is foliated by such hypersurfaces, this region must split as prescribed by <ref>. However, as observed in <ref>, this metric satisfies <ref>, and we conclude that the only possibility in the dichotomy is that in fact the initial Σ was isometric to a cross-section. §.§ The g̃-flow remains smooth. In order to show that the second fundamental form does not blow up along a smooth evolution Σ_t starting at a hypersurface Σ fulfilling the equality in Heintze-Karcher, we first observe that the diameters remain bounded. In this subsection, we are always denoting with C_t some positive constant possibly depending on t ∈ (0, +∞). In the assumptions of <ref>, let t < +∞ be such that Σ_τ is smooth for any τ∈ [0, t). Then, the metric g_τ induced by g on Σ_τ satisfies |g_τ| ≤C_t , for any τ∈ [0, t) where the norm of g_τ is induced by the norm of the diffeomorphic surface g_Σ_0. Moreover, the intrinsic diameter of Σ_τ satisfies diam_g_τ(Σ_τ) ≤C_t for any τ∈ [0, t). Both <ref> and <ref> holds with g_τ replaced by g̃_τ, corresponding to the metric induced by the underlying conformal metric g̃ = f^-2 g. As long as the flow is smooth, each level Σ_τ is diffeomorphic to Σ=Σ_0. In particular, for all τ∈[0,t), there exists a metric g_τ on Σ such that (Σ,g_τ) is isometric to Σ_τ endowed with the metric induced on it by g. Obviously the same holds for the conformal metrics g̃_τ induced by g̃. This allows us to work on a fixed hypersurface Σ, letting the metrics g_τ, g̃_τ vary in time. Notice that =f-(n-1)⟨ f | ν⟩>-(n-1)| f|≥ -K, where K>0 is the (finite) maximum value of (n-1)| f| in Ω, and where we have used that is strictly positive along the flow by <cit.>. By the evolution _τ (g̃_τ)_ij=- (g̃_τ)_ij we have _τlog|(g̃_τ)_ij|=-< K, hence |(g̃_τ)_ij| < e^Kτ|(g̃_0)_ij|≤C_t . Since f is bounded in the compact domain Ω enclosed by Σ, the above bound implies a fully equivalent one in terms of the metric g_τ induced by g on Σ_τ. This proves (<ref>). For a fixed τ∈[0,t), let x_τ,y_τ∈Σ be two points realizing the diameter diam_g_τ(Σ) that we want to estimate, and let γ:[0,ℓ]→Σ be the g_0-unit length geodesic minimizing the distance between x_τ and y_τ, with respect to the starting conformal metric g_0. By <ref>, the length of γ is directly estimated as follows: |γ|_g_τ = ∫_0^ℓ |γ̇(s)|_g_τds = ∫_0^ℓ√((g_τ)_ijγ̇^iγ̇^j)(γ(s))ds ≤ℓ C_t By construction, the diameter diam_g_τ(Σ) coincides with the g_τ-distance between the endpoints of γ, so diam_g_τ(Σ) must be less than or equal to the g_τ-length of γ. Moreover, by construction, we have ℓ≤ diam_ g_0(Σ), and so we have shown diam_g_τ(Σ) ≤γ_g_τ≤C_t. This provides the desired uniform bound on the diameter diam_g_τ(Σ). The following is the main observation triggering the smooth long time existence along the g̃-distance flow. In the assumptions of <ref>, let t be such that Σ_τ is smooth for any τ∈ [0, t). Then, the second fundamental form _τ of Σ_τ satisfies _τ≤C_t for any τ∈ [0, t). Assume by contradiction that there exists a sequence τ_j → t < +∞ as j → +∞ and points x_τ_j∈Σ_τ_j such that _τ_j(x_τ_j) blows up as j → +∞. 
Then, since the Σ_τ's are totally umbilical, the mean curvature _τ_j(x_τ_j) blows up too. We first show that, then, inf_x ∈Σ_τ_j_τ_j (x) → +∞. Indeed, by the Gauss-Codazzi equations and exploiting the total umbilicity we immediately get ∇_i (τ_j) = -n-2/n-1_iν for any i ∈{1, …, n-1}. Since the right hand side is uniformly bounded in Ω, we deduce that ∇ is uniformly bounded along the evolution. Let then x ∈Σ_τ_j different from x_τ_j. We have (x) ≥(x_τ_j) - diam(Σ_τ_j)sup_y∈Σ_τ_j∇(y) ≥(x_τ_j) - C_t, where the bound on the diameter is <ref>; <ref> follows. On the other hand, recall that by the Heintze-Karcher inequality we have ∫_Σ_τ_jf/ dσ≥ c_∂ M∫_∂ M∇ f dσ. Now, the evolution equations for Σ_τ imply that Σ_τ≤Σ, while, since t<+∞ and ρ(Σ, ∂ M) = +∞, sup_Σ_τ_j f ≤C_t for some finite C_t > 0. Exploiting this information, we get at once from <ref> that inf_x ∈Σ_τ_j(x) ≤C_t Σ (c_∂ M∫_∂ M∇ f dσ)^-1, yielding a contradiction with <ref> that completes the proof. Concluding from the above that Σ_t remains smooth for any t ∈ (0, +∞) turns out to be slightly technical, but very classical in nature. The arguments employed were pioneered by Hamilton <cit.> in an intrinsic flow setting, and adapted to extrinsic flows by Huisken <cit.>. Let (M, g) be a substatic warped product of the form <ref>, with a nonempty connected horizon boundary ∂ M. Let Σ = ∂Ω∖∂ M be a smooth, embedded, connected hypersurface homologous to ∂ M, such that <ref> holds with equality sign. Let Σ_t = {x ∈Ω | ρ(Σ, x) = t}, where ρ is the distance in the conformal metric g̃ = f^-2 g. Then, Σ_t is a smooth, embedded, totally umbilic hypersurface such that [f - ∇∇ f + Δ f g] (ν, ν) = 0, for any t ∈ [0, ∞). We are going to show that the nonempty set T⊆ [0, +∞) defined by T = {t∈ [0, +∞) | Σ_τ is smooth and embedded for τ∈ [0, t]} is both open and closed in [0, +∞), inferring the eternal smoothness of the flow. The identity <ref> is then a direct consequence of <ref>. The openness of T is well-known in general; if a closed hypersurface Σ is smooth and embedded, then so are the equidistant hypersurfaces Σ_r = {x ∈ M | dist(Σ, x) = r} for any Riemannian metric-induced distance dist, see e.g. <cit.>. In our case, such result is applied to Σ_t with t ∈ T and with respect to the distance induced by g̃. The closedness of T constitutes the bulk of the proposition, and will be substantially ruled by <ref> only. We are repeatedly employing the evolution equations for Σ_τ along the g̃-distance flow, in the conformal background metric g̃. Indeed, in this setting such equations are simpler to handle, and, since we are staying away from ∂ M = {f = 0} the estimates we are inferring will automatically hold also in terms of g, and viceversa. In the remainder of this proof, all the quantities taken into account are thus understood as referred to g̃, even when not explicitly pointed out. Let T ∋ t_j → t^- as j → +∞. Then, Σ_τ is smooth and embedded for any τ∈ [0, t) We want to show that Σ_t is smooth and embedded. To accomplish this task, we are going to show that ∇^(k)_τ≤C_t for an arbitrary k ∈, where ∇^(k) is the k-th covariant derivative induced by g̃ on Σ_τ. Indeed, if this holds, then all the derivatives of the functions whose graphs describe Σ_τ would be uniformly bounded as τ→ t, implying that Σ_t would be actually smooth. We are going to prove <ref> by induction. The case k = 0 corresponds to the <ref>, and we assume ∇^(l)_τ≤C_t holds for any l ∈{0, …, k-1}. 
We employ the concise notation T * Q to indicate, at some fixed point, linear combinations of contractions of a tensor T with a tensor Q through the metric tensor. The uniform bound on the evolving metric tensors g_τ was observed in <ref>. We have ∂/∂τ∇^(k)_τ = ∇^(k)∂/∂τ_τ + ∇^(l_1)∂/∂τΓ*∇^(l_2)_τ, where l_1, l_2 ∈ satisfy l_1 + l_2 = k-1, and the components of Γ are the Christoffel symbols of the evolving metric that g̃ induces on Σ_τ. We recall that the variation of the components of Γ are in fact components of a tensor, and that it holds ∂/∂τΓ^i_jm = 1/2g^ir(∇_j∂/∂τg_mr + ∇_m ∂/∂τ g_jr - ∇_r ∂/∂τ g_jm), for i, j, l, r ∈{1, …, n-1} and where we meant with g the metric induced by g̃ on Σ_τ. Moreover, the second fundamental from _τ induced by g̃ on Σ_τ roughly evolves by (see e.g. <cit.>) ∂/∂τ_τ = _τ * _τ + Riem, where Riem denotes some component of the Riemann tensor of the ambient g̃. Observe that, since t is finite, Riem, as well as any of its g̃-covariant derivative, remains bounded on Σ_τ as τ→ t^-. Plugging <ref> and <ref> into <ref>, and directly estimating by means of <ref>, we get that ∂/∂τ∇^(k)_τ = ∇^(k)_τ * T + Q, where T and Q are tensors uniformly bounded on Σ_τ also as τ→ t^-. Then, taking into account once again that ∂_τ g_τ = -_τ g_τ, and that is an uniform bounded quantity as τ→ t^- thanks to <ref>, we deduce from <ref> ∂/∂τ∇^(k)_τ^2 ≤C_1∇^(k)_τ^2 + C_2, where both C_1 and C_2 are constants uniformly bounded as τ→ t^-. Integrating <ref> for τ∈ [0, t) provides the claimed <ref>, inferring the smoothness of Σ_t. We are left to discuss the embeddedness of Σ_t. Since Σ_t is compact, it is sufficient to show that Σ_t has no self intersections. Without loss of generality, suppose that t is the first time such that Σ_t has a self-intersection x. Then, in a neighborhood of x, by smoothness and compactness, Σ_t is a finite union of smooth embedded hypersurfaces S_1,…, S_k. First of all, if two of these hypersurfaces, say S_1 and S_2, have different tangent spaces, then it is easily seen by continuity of the flow that Σ_t- also has self-intersections for times t- close to t (see Figure <ref>), against our assumption that t is the first value having them. Thus, it only remains to analyze the case where the tangent space to S_1,…,S_k is the same. If the outward pointing normal is the same for S_1 and S_2, then it is easy to see that the level set Σ_t- must intersect Σ_t (this case is exemplified in Figure <ref>). This is impossible since Σ_t must be contained in the interior of the compact domain Ω_t- enclosed by Σ_t- for >0. The only case that is left to rule out is the case where there are exactly two hypersurfaces S_1 and S_2, with outward normals pointing in opposite directions. With similar reasonings as in the previous cases, we can conclude that the situation is as in Figure <ref>. Namely, we can find coordinates (x^1,…,x^n) centered at x such that S_1={x^1= 0} and its outward normal points towards {x^1≤ 0}, whereas S_2 is contained in {x^1≥ 0} and its outward normal points towards {x^1≥ 0}. Recalling that Σ_t is mean convex, this configuration is clearly ruled out by the maximum principle. §.§ Montiel-type argument and conclusion. Thanks to <ref> and <ref>, we have two directions along which the tensor f- f+Δ f g vanishes. A crucial observation is that, if Σ is not a cross-section, these two directions are distinct at almost all points of Σ. This follows from an argument of Montiel <cit.>. 
In our substatic setting, thanks to <ref>, this forces the region Σ lives in to be of the special form <ref>. Let (M, g) be a substatic warped product of the form <ref>, with a nonempty connected horizon boundary ∂ M. Let Σ = ∂Ω∖∂ M be a smooth, embedded, orientable, connected hypersurface homologous to ∂ M. Suppose that Σ is totally umbilical, that it holds [f - ∇∇ f + Δ f g](ν,ν) = 0 and that Σ is not a cross-section. where ν is a unit normal to Σ. Then, the function f has the form (<ref>) in the region [s_ min,s_ max]× N, where s_ min = min{s(x), x ∈Σ}, s_ max = max{s(x), x ∈Σ}. The function f depends on the coordinate s only, so it is well defined (up to a constant) the function ϕ=∫ (s/f) ds. We can compute rather easily ϕ=f g. In other words, the vector Y=ϕ=sf/ s satisfies Y =ϕ=fg. Let Y^⊤=Y-g(Y,ν)ν be the projection of Y on Σ. If ∇_Σ is the covariant derivative induced by ∇ on Σ, then Y^⊤=∇_Σϕ. We are assuming that Σ is not a cross section, that is, Y^⊤ does not vanish pointwise on Σ. Following the argument in <cit.>, for every vector field Z∈Γ(TΣ) we compute _Σ_Σϕ(Z,·) =∇_Σ Y^⊤(Z,·) =[∇_Z(Y-g(Y,ν)ν)]^⊤ =(∇_Z Y)^⊤-[∇_Z g(Y,ν)ν+g(Y,ν)∇_Zν]^⊤, where we are using the notation ^⊤ to denote the projection on Σ (namely, for a vector field X∈Γ(TM), we denote by X^⊤ the vector X-g(X,ν)ν∈Γ(TΣ)). Since ∇ Y=fg, ν^⊤=0, and (∇_Zν)^⊤=(Z,·), where is the second fundamental form of Σ, we deduce ∇_Σ∇_Σϕ=fg_Σ-g(Y,ν). In particular, if Σ is umbilical, then =/(n-1)g_Σ and we obtain ∇_Σ_Σϕ=[f-g(Y,ν)/n-1]g_Σ. Since Σ is compact and Y^⊤ does not vanish pointwise (meaning that ϕ is nontrivial), it is then well known <cit.> that Σ must be a warped product with spherical cross-sections. Namely Σ=[0,R]×𝕊^n-2, g_Σ=dρ⊗ dρ+λ^2g_𝕊^n-2, where ρ is the coordinate on [0,R] and λ=λ(ρ) is positive in (0,R), λ(0)=λ(R)=0, λ'(0)=-λ'(R)=1. Furthermore the following relations hold: λ'=f-g(Y,ν)/n-1, λ=/ρϕ_|_Σ, Y^⊤=∇_Σϕ=λ/ρ. In particular, Y^⊤ is different from zero on Σ∖{x, y}, with x, y being the two points corresponding to ρ=0 and ρ=R. Since we are assuming that the tensor f - ∇∇ f + Δ f g vanishes in the ν direction and we know this holds also in the Y direction (recall <ref>), we must also have [f - ∇∇ f + Δ f g](Y^⊤,Y^⊤)=0 at all points of Σ. We can then apply <ref> with X = Y^⊤ to conclude. The above proof intriguingly shows also that a hypersurface Σ as in the statement of <ref> is itself a warped product. We are now ready to conclude the proof of <ref>. We can restrict our attention to the case of nonempty boundary ∂ M = {f =0} with Σ homologous to ∂ M, since the empty boundary (or null-homologous) case is fully covered by <ref>. We consider the evolution of Σ given by the Σ_t ⊂Ω at g̃-distance t. By <ref>, Σ_t is smooth for any t ∈ [0, +∞). Suppose by contradiction that Σ is not a cross section. Then, Σ_t is not a cross-section for any t ∈ (0, +∞), for otherwise all of its g̃-equidistant hypersurfaces would be cross-sections, including Σ. Moreover, as recalled in <ref>, the Σ_t's are totally umbilical and satisfy [f - ∇∇ f + Δ f g](ν,ν)=0 . We can then apply <ref> to any Σ_t, and deduce that in the region foliated by such evolution f can be written as (<ref>). Since, as t → +∞, Σ_t by construction gets closer and closer to ∂ M, this holds in the whole of Ω. But then, as observed in <ref>, by <ref> the condition <ref> is satisfied. We can then apply <ref> to conclude that Σ is a cross-section, a contradiction that concludes the proof. 
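For completeness, here is a short coordinate sketch (with the constant k in the warped product metric set to 1, purely to lighten the notation) of the identity ∇∇ϕ = f g used in the argument above, for g = f^{-2} ds ⊗ ds + s² g_N and ϕ'(s) = s/f(s). The Christoffel symbols entering the computation are
\[
\Gamma^s_{ss} = -\frac{f'}{f}, \qquad \Gamma^s_{ij} = -s\,f^2\,(g_N)_{ij},
\]
so that
\[
(\nabla\nabla\phi)_{ss} = \phi'' - \Gamma^s_{ss}\,\phi' = \frac{1}{f} - \frac{s f'}{f^2} + \frac{s f'}{f^2} = \frac{1}{f} = f\,g_{ss},
\qquad
(\nabla\nabla\phi)_{ij} = -\Gamma^s_{ij}\,\phi' = s^2 f\,(g_N)_{ij} = f\,g_{ij},
\]
while the mixed components vanish because both f and ϕ depend on s alone. This recovers ∇ Y = f g for Y = ∇ϕ = s f ∂/∂s.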
§ APPENDIX: PROOF OF PROPOSITION <REF> AND LEMMA <REF> We consider warped products M=I× N , g=dr⊗ dr+h^2g_N , with h=h(r) positive, satisfying the substatic condition -∇∇ f/f+Δ f/fg≥ 0 for some function f=f(r) that is assumed to be nonnegative and zero exactly on the (possibly empty) boundary of M. Our aim will be that of proving Proposition <ref>. To this end, we start by writing down the components of the relevant quantities in the substatic condition. The Ricci tensor of a warped product is known to satisfy _rr = -(n-1)ḧ/h , _ir = 0 , _ij = ^N_ij-[hḧ+(n-2)ḣ^2]g^N_ij . Since both f and h are functions of the coordinate r only, the Hessian and Laplacian are given by the following formulas ^2_rrf = f̈ , ^2_irf = 0 , ^2_ijf = hḣḟ g^N_ij , Δ f = f̈+(n-1)ḣ/hḟ . Substituting in (<ref>), we find out that the substatic condition is equivalent to the following two inequalities: ḣḟ/f ≥ ḧ , _g_N ≥ h^2[ḧ/h-f̈/f+(n-2)ḣ^2/h^2-(n-2)ḣ/hḟ/f]g_N . We are now ready to provide the proof of <ref>, telling between the case of a constant h, that is the cylindrical splitting of, and of a nonconstant h. The product case. We impose ḣ=0. Up to a rescaling of g_N we can then just set h≡ 1. The first identity in (<ref>) is trivial when ḣ=0. The second inequality in (<ref>) instead reduces to _g_N ≥ -f̈/fg_N . In particular, if c is the minimum value such that there exists X∈ TN with _g_N(X,X)=(n-2)c |X|^2_g_N, we must have f̈+(n-2)cf ≥ 0 . For any f satisfying the above inequality and any (n-1)-dimensional Riemannian manifold (N,g_N) with _g_N≥ (n-2)cg_N the product manifold (I× N,dr⊗ dr+g_N) is substatic. The warped product case. We now consider the case where h is not constant. Since we are assuming [f - ∇∇ f + Δ f g] ( r, r) = 0, the first inequality in (<ref>) is saturated, forcing f=kḣ, for some constant k∈. Letting also c∈ be the minimum value such that there exists X∈ TN with _g_N(X,X)=(n-2)c |X|^2_g_N, we can rewrite the second inequality in (<ref>) as follows ⃛ h/ḣ+(n-3)ḧ/h-(n-2)ḣ^2-c/h^2 ≥ 0 . This inequality also appears in <cit.>. Since f is assumed to be positive in M, notice that in particular this forces ḣ to have a sign. Up to changing the sign of the coordinate r, we can assume ḣ>0. In particular, h is a monotonic function of r, which means that we can use h as coordinate in place of r. We use ' to denote derivative with respect to h. Considering then the function ψ = ḣ^2-c/h^2 , observing that ψ'=ψ̇/ḣ, we compute (h^n+1ψ')' = 2(h^n-1ḧ-h^n-2(ḣ^2-c))' = 2h^n-1(⃛ h/ḣ+(n-3)ḧ/h-(n-2)ḣ^2-c/h^2) . Therefore, inequality (<ref>) gives (h^n+1ψ')' ≥ 0 , which in turn tells us that ψ = ∫μ/h^n+1dh , where μ=μ(h) satisfies μ'≥ 0. This is equivalent to asking ψ=η(h^-n), with η”≥ 0. More explicitly, the substatic potential f and the warping function h must satisfy f = kḣ = k√(c+h^2η(h^-n)) , η”≥ 0 . Summing all up, we have found that all substatic warped products (I× N,g,f) with f radial and such that f - ∇∇ f + Δ f g vanishes in the radial direction are isometric to a solution having the following form M=[a,b]× N , g=k^2ds⊗ ds/f^2+s^2 g_N , _g_N≥ (n-2)c g_N , f = k√(c+s^2η(s^-n)) , where 0<a<b, c>0, k>0 are constants, s is a coordinate on [a,b], and η:[b^-n,a^-n]→ satisfies η”≥ 0. We conclude with the proof of <ref>. If we further assume, as in the statement, that at the point p with coordinates (s_0,x) there exists a vector X∈ T_pN such that [f - ∇∇ f + Δ f g](X,X)=0, then it easily follows that the second inequality in (<ref>) is saturated. Retracing the computations above one then deduces that η”=0 at s=s_0. 
If it is possible to find such a vector X for every s in an interval [s_0,s_1]⊂ [a,b], then we can integrate the identity η”=0 in [s_1^-n,s_0^-n], obtaining η(t)=-λ-2mt for constants m,λ. Substituting in the formula for f we then obtain f = k√(c-λ s^2-2m s^2-n) , as claimed.
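As a quick consistency check (an illustrative remark, in the notation above), substituting the affine profile η(t) = -λ - 2mt back into f = k√(c + s²η(s^{-n})) gives
\[
s^2\,\eta(s^{-n}) = -\lambda s^2 - 2m\,s^{2-n},
\qquad\text{hence}\qquad
f = k\sqrt{c - \lambda s^2 - 2m\,s^{2-n}},
\]
with η'' ≡ 0 identically, in agreement with the computation just carried out. For n = 3, k = 1 and c = 1 this is the familiar Schwarzschild–(anti-)de Sitter-type potential f = √(1 - 2m/s - λ s²), with λ playing the role of a normalized cosmological constant.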
http://arxiv.org/abs/2307.04034v1
20230708191132
Robust Universal Inference
[ "Beomjo Park", "Sivaraman Balakrishnan", "Larry Wasserman" ]
stat.ME
[ "stat.ME" ]
Robust Universal Inference Beomjo Park, Sivaraman Balakrishnan, and Larry Wasserman Department of Statistics & Data Science Machine Learning Department Carnegie Mellon University Pittsburgh, PA 15213. August 12, 2023 In statistical inference, it is rarely realistic that the hypothesized statistical model is well-specified, and consequently it is important to understand the effects of misspecification on inferential procedures. When the hypothesized statistical model is misspecified, the natural target of inference is a projection of the data generating distribution onto the model. We present a general method for constructing valid confidence sets for such projections, under weak regularity conditions, despite possible model misspecification. Our method builds upon the universal inference method of <cit.> and is based on inverting a family of split-sample tests of relative fit. We study settings in which our methods yield either exact or approximate, finite-sample valid confidence sets for various projection distributions. We study rates at which the resulting confidence sets shrink around the target of inference and complement these results with a simulation study. § INTRODUCTION One of the broad goals of statistical inference is to draw conclusions about a population from a sample of the population. This goal is typically facilitated by the use of a statistical model 𝒫, a collection of distributions, which the statistician hypothesizes will contain a useful approximation to the data generating distribution. The well-specified case is when ∈ and the misspecified case is when this does not necessarily hold. In the misspecified case, the target of inference is usually a projection distribution. Formally, given a divergence ρ which maps a pair of distributions to ℝ_+, we can define the (forward) projection[We tacitly assume that the projection exists and is unique. When the projection is not unique our inferential guarantees always hold for any (arbitrary) fixed choice of the projection . Characterizing the existence of a projection distribution (for f-divergences) has received some attention in past work <cit.>.] of the distribution onto the statistical model as: := _P ∈ρ( P). The general goal of our paper is to construct uniformly valid confidence sets for assuming only weak regularity conditions on the distribution and the statistical model . We let X_1,…, X_n be an i.i.d sample from a distribution ∈ defined on ℝ^d, where 𝒬⊇𝒫 is a class of distributions satisfying weak regularity conditions. We wish to construct (honest) confidence sets, C_α(X_1,…, X_n) such that, inf_∈ℙ_(∈ C_α(X_1,…, X_n)) ≥ 1 - α. In parametric statistical models, in the well-specified case, the likelihood-ratio test, and confidence sets obtained from asymptotically Gaussian estimators, are the main inferential tools for constructing hypothesis tests and confidence intervals. In the misspecified case, one can develop analogous tools for constructing tests and intervals for the Kullback-Leibler (KL) projection parameter, using sandwich estimates for the variance <cit.>. The validity of these methods, in both the well-specified and misspecified cases, relies on large sample asymptotic theory and requires that the statistical model 𝒫 and the sampling distribution satisfy strong regularity conditions. In recent work <cit.>, we introduced a procedure (described in more detail in Section <ref>) based on data-splitting to construct uniformly, finite-sample valid likelihood-ratio confidence sets under no regularity conditions. 
This work showed that, in the well-specified setting, sample-splitting can yield practical, finite-sample valid inference, even for irregular statistical models, often at a surprisingly small statistical price. The challenges of inference under weak regularity conditions are exacerbated in the misspecified setting. In contrast to the well-specified case where the target of inference is unambiguous, in the misspecified case there are many natural targets of inference. Each choice of the divergence ρ in (<ref>), yields a different target and in most cases these targets will have drastically different properties. This in turn poses significant challenges in constructing a unified framework for statistical inference in the misspecified setting. Under weak regularity conditions, the KL projection distribution can be an unstable inferential target, wherein small perturbations to the data-generating distribution P^* can lead to dramatic shifts in the target P. From a theoretical standpoint, this makes finite-sample valid inference for the KL projection distribution challenging, unless strong regularity conditions are imposed. From a practical standpoint, these instabilities can make the KL projection distribution an undesirable target, and in these cases it is essential to develop a flexible family of methods that can target other (more stable) projection distributions. To address these challenges, we develop a re-interpretation of the universal inference method <cit.> as inverting a particular family of pairwise likelihood-ratio tests. This interpretation brings into focus the key building block of universal inferential methods – pairwise hypothesis tests. Building on this insight we show that one can develop robust universal inference procedures by inverting appropriate families of robust pairwise tests. We then study the design and properties of robust pairwise tests, and relate them to the coverage and size properties of our proposed robust universal inference method. §.§ Related Work Asymptotic statistical inference, in both the well-specified and misspecified cases, is a topic of classical interest. Some entry points to the vast literature on this topic include the reference books <cit.>. Results in this literature <cit.> typically leverage strong regularity conditions to determine the asymptotic distribution of a point estimate (such as the Maximum Likelihood Estimator (MLE)), and use the asymptotic distribution of the estimate to construct (asymptotically valid) confidence sets. Our work is motivated in part by a recent line of work <cit.>, and more classical work <cit.>, where sample-splitting is used to avoid the strong regularity conditions typically needed for valid statistical inference. The focus on statistical inference under weaker regularity conditions, despite model misspecification, is the central theme of work in robust statistics <cit.>. One of the best understood methods for constructing robust estimators is to select, from a set of candidates, one which wins a carefully setup tournament – an idea which goes back to <cit.>, and others. At the heart of these tournament estimators are pairwise selectors, which attempt to robustly select one of a pair of candidates, which provide a better relative fit to the sampling distribution. These robust pairwise tests have been used to great effect in robust estimation, and our work highlights their usefulness in constructing assumption-light confidence sets. §.§ Outline The rest of this paper is organized as follows. 
Section <ref> provides some background. We briefly introduce the universal inference procedure, and develop a new perspective on it. Section <ref> motivates the methods we study in this paper by pinpointing some of the failures of universal inference. Section <ref> describes a general strategy to construct confidence sets for projection distributions, and highlights the importance of designing tests of relative fit. Section <ref> highlights some important examples where we are able to build on prior work in order to design exact and approximate tests of relative fit for different choices of the underlying divergence measure. Section <ref> studies the size of the resulting confidence sets. Section <ref> demonstrates some of the strengths of our proposed inference methods based on illustrative numerical examples. We conclude in Section <ref> with a brief discussion of future work. § BACKGROUND We let X_1,…, X_n be an i.i.d sample from a distribution ∈ defined on ℝ^d, and we let denote our working statistical model. Throughout the paper, the collection of distributions will be quite general, typically only satisfying some weak regularity conditions. §.§ Universal Inference Our starting point is our prior work <cit.> which introduced a procedure based on data-splitting to construct uniformly, finite-sample valid confidence sets under weak regularity conditions. Importantly, the validity guarantees of universal inference require the statistical model to be well-specified. The universal inference procedure is to: * Split the data := {X_1,…,X_n} into two sets _0 and _1. * On the set _1 calculate any estimate (e.g., could be the MLE in the model ). * Assume that the distributions in 𝒫 have densities (denoted with lower-case symbols) with respect to a dominating measure λ. We let ℒ_0(P) denote the likelihood of the distribution P evaluated on the samples in _0: ℒ_0(P) := ∏_i ∈_0 p(X_i), and define ℒ_0() analogously. Then construct the confidence set, C_α(X_1,…,X_n) = { P: ℒ_0(P)/ℒ_0()≥α}. In the well-specified case, <cit.> show (in their Theorem 1) that, under no additional regularity conditions, C_α is a finite-sample valid 1 - α confidence set for the distribution . §.§ A Re-Interpretation of Universal Inference To motivate the main proposal of this paper it is useful to re-visit (and generalize) the procedure described above, via the lens of inverting a family of hypothesis tests. The basic idea is classical, and is sometimes referred to as the duality between confidence sets and hypothesis tests. Formally, given samples X_1,…, X_n ∼, suppose we have a family of tests ϕ_P: {X_1,…,X_n}↦{0,1} for testing the null hypothesis H_0: = P. Here the test function ϕ_P takes the value 1 to indicate a rejection of the null hypothesis and takes the value 0 otherwise. If the family of tests is valid, i.e. they control the Type I error, _P [ϕ_P(X_1,…,X_n) ]≤α,    ∀ P ∈, then the following confidence set, C_α(X_1,…,X_n) := { P ∈: ϕ_P = 0 }, is uniformly valid when the statistical model is correctly specified, i.e. inf_∈ℙ_(∈ C_α(X_1,…, X_n)) ≥ 1 - α. Although this is a general recipe for constructing valid confidence sets, it does not provide the statistician much guidance in designing tests which might lead to small confidence sets. Universal inference is based on the idea that one can use a separate sample to construct an accurate estimate . We can then construct our family of tests, on the remaining samples, to have high power in distinguishing the sampling distribution from this pilot estimate. 
Formally, we could choose our family of tests to have high power to distinguish the hypotheses: H_0:   = P, versus H_1:   = . This use of a separate sample to construct a pilot estimate, simplifies the design of the tests to invert considerably since now we can focus on tests that have strong guarantees for distinguishing this simple null-versus-simple alternative. Indeed, universal inference uses the likelihood-ratio test for distinguishing these hypotheses, resulting in tests ϕ_P of the form: ϕ_P = 𝕀[ ℒ_0()/ℒ_0(P) > (P, ) ], for a choice of the threshold (P, ) which ensures that the condition in (<ref>) is satisfied. Although it is possible to determine optimal thresholds in the likelihood-ratio tests above, this can be practically cumbersome since these thresholds depend on both the pilot estimate and the null hypothesis P under consideration. The work of <cit.> further shows that a universal threshold = 1/α suffices to ensure the condition in (<ref>). To summarize, one can view the universal inference confidence set (<ref>) as arising by inverting a family of likelihood-ratio tests designed to distinguish each candidate distribution P from a pilot estimate . We emphasize that the universal inference procedure, and its reinterpretation described above rely crucially on correct model-specification to ensure validity. For instance, inverting a family of tests that satisfies (<ref>) is no longer meaningful when the model is misspecified. However, the testing interpretation suggests that one might develop novel variants of the universal inference procedure which are useful despite model-misspecification, by formulating appropriate robust hypotheses and designing robust tests for distinguishing them. We make these ideas precise in Section <ref>. §.§ Divergences Throughout this paper, we make frequent use of different divergences between pairs of probability distributions. We briefly introduce them here. We let P and Q be distributions defined on ℝ^d with densities p and q with respect to a common dominating measure λ. The Hellinger distance is defined as: (P,Q) = 1/√(2)( ∫ (√(p) - √(q))^2 dλ)^1/2, and the Kullback-Leibler (KL) divergence is defined as: (P Q) = ∫(logp/q) P̣, if P is dominated by Q, ∞, otherwise. The family of density power divergences <cit.>, are defined for a parameter β≥ 0 as, _β (P Q) = ∫{ q^1+β - ( 1+1/β) q^β p + 1/β p^1+β} d λ, β > 0 (P Q), β = 0 where _0 = is defined by taking the limit of β→ 0. Finally, the family of Integral Probability Metrics (IPMs) <cit.>, are defined as _ℱ(P, Q) = sup_f ∈ℱ| _P (f) - _Q (f) | where ℱ is a symmetric class (i.e., f ∈ℱ - f ∈ℱ) of real-valued bounded measurable functions on the domain of P and Q. Important special cases of IPMs include the Total Variation distance (TV, where ℱ is the collection of functions with sup-norm at most 1), the Wasserstein-1 distance (where ℱ is the collection of 1-Lipschitz functions) and the Maximum Mean Discrepancy (MMD, where ℱ is the unit ball of a Reproducing Kernel Hilbert Space with kernel k). § FAILURES OF UNIVERSAL INFERENCE To provide some motivation and intuition for the methods we propose in this paper, it is useful to understand some of the failures of the universal inference framework when the statistical model is misspecified, and the target of inference is the KL projection. §.§ Unbounded Likelihood-Ratios The behavior of likelihood-ratio based methods can be sensitive to the tail behavior of likelihood-ratios. 
The following simple example illustrates that under model misspecification, universal inference can fail to cover the KL projection parameter. These pathologies do not arise when the statistical model is correctly specified, and the challenges in this example arise due to an interplay between poorly behaved likelihood-ratios and model misspecification. This example also serves to highlight the fact that the KL projection parameter can in some cases be an undesirable inferential target. We let (p) denote the Bernoulli distribution with parameter p. Suppose we observe X_1,…,X_n ∼ := (ϵ_n) for some non-negative 0 < ϵ_n < (1-α)/n. We use the statistical model = {(p) : p ∈{0, 1/2 }}. Suppose we consider the pilot estimator to be the MLE, = _p ∈ℒ_1(p). Then, for all sufficiently large n ≥ n_0 where n_0 only depends on α the split LRT confidence set in (<ref>), with an equal sized split into _0 and _1, fails to cover the KL projection at the nominal level. The proof is in Appendix <ref>. The intuition is however clear. In this example the KL projection distribution is (1/2). For ϵ_n ≪ 1/n, with high probability the samples X_1,…,X_n are all 0. Consequently, the MLE with high-probability will be (0). Furthermore, the split sample likelihood ℒ_0 will be much higher for (0) than (1/2), and consequently (1/2) will not be included in the universal set. In this example likelihood-ratios are unbounded and as a consequence the KL divergence is an unstable function of the model parameters, i.e. when ϵ_n = 0, ((ϵ_n) (0)) is 0, but is ∞ for any ϵ_n > 0. In such cases, the finite-sample (log)-likelihood-ratio is a poor estimate of the population KL divergence, and this poses significant challenges for finite-sample valid inference. From a practical standpoint, a more reasonable inferential target could be a different, stabler projection distribution (e.g., the Hellinger or TV projection distribution) and we address this in Sections <ref> and <ref>. §.§ Failure Despite Bounded Likelihood-Ratios In the previous example it is clear that unbounded likelihood-ratios can result in pathologies which are challenging to address with finite-sample valid inference. However, even when all likelihood-ratios in the model are well-behaved, universal inference can fail to cover the KL projection parameter. It is important to note that except under the stringent condition that the underlying model is convex (see Section 6 of <cit.>), universal inference has no guaranteed coverage when the model is misspecified. Suppose we obtain X_1,…,X_n ∼ := (0.5 + ϵ_n) for some small, positive 0 <ϵ_n ≤ c/n, where c > 0 is a small positive universal constant. Our hypothesized model consists of two distributions, = {(p) : p ∈{1/4, 3/4 }}. Suppose we take the pilot estimator to be the MLE (<ref>). Then, for all sufficiently large n (depending only on α) the split LRT confidence set in (<ref>), with an equal sized split into _0 and _1, fails to cover the KL projection at the nominal level. We give a formal proof in Appendix <ref>. The KL projection distribution is (3/4). We show that the pilot estimate with probability near 1/2 will be the distribution (1/4), and further with probability near 1/2 the KL projection (3/4) will have a much smaller split sample likelihood than . As a direct consequence, universal inference will fail to cover the projection distribution (3/4). In contrast to the previous example, this example is much less pathological. 
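As a concrete illustration, this failure mode can be reproduced in a few lines of simulation. The sketch below is ours; the sample size, ε, α and the number of replications are illustrative choices and not values fixed by the example.

import numpy as np

rng = np.random.default_rng(0)
n, eps, alpha, reps = 2000, 0.25 / 2000, 0.05, 5000    # illustrative choices

def loglik(p, Y):                                       # Bernoulli(p) log-likelihood of a 0/1 sample Y
    return Y.sum() * np.log(p) + (len(Y) - Y.sum()) * np.log(1 - p)

covered = 0
for _ in range(reps):
    X = (rng.random(n) < 0.5 + eps).astype(float)       # data from Bern(0.5 + eps)
    D1, D0 = X[: n // 2], X[n // 2:]
    p_hat = max((0.25, 0.75), key=lambda p: loglik(p, D1))    # MLE pilot computed on D1
    # the split LRT set keeps Bern(3/4) iff L0(3/4) / L0(p_hat) >= alpha
    covered += loglik(0.75, D0) - loglik(p_hat, D0) >= np.log(alpha)

print(covered / reps)   # empirical coverage of the KL projection Bern(3/4); typically ~0.75-0.8, well below 0.95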
All the relevant likelihood-ratios are bounded, and the log-likelihood is a consistent estimate of the KL divergence. However, even in this relatively benign example universal inference fails. We show in Section <ref> that a simple modification to the universal inference procedure fixes this issue when the relevant likelihood-ratios are bounded, and ensures correct coverage. In order to focus on the main issues, we have illustrated the failure of universal inference when the pilot estimator is the MLE. Indeed, part of the appeal of universal inference is that its coverage guarantees hold, in the well-specified case for any pilot estimate (including the MLE). Though we do not pursue this here, it is straightforward to extend these examples to show that both failures persist irrespective of how the pilot is chosen, i.e. the failures of universal inference that we highlight are driven by the second stage (of constructing the confidence set) and not by the first stage (of constructing a reasonable pilot estimate). These examples set the stage for the methodological development of the rest of the paper. To address problems of the first type we recommend targeting a different projection parameter (for instance, the TV or Hellinger projection, in Sections <ref> and <ref>), and to address problems of the second type we develop methods which guarantee coverage of the KL projection parameter when the likelihood-ratios are uniformly upper bounded or more generally have finite 2 + ξ moments for some ξ > 0 (see Section <ref>). § ROBUST UNIVERSAL INFERENCE In this section, we present a simple but powerful pair of general results which yield exact and approximate universal confidence sets. The workhorse of these results are tests of relative fit which we first briefly introduce before showing how these tests can be inverted to derive robust confidence sets. §.§ Tests of Relative Fit Suppose that we are given samples X_1,…, X_n ∼, together with a pair of candidate distributions (P_0, P_1) ∈^2, and a divergence measure ρ. With this setup in place, we now consider a family of tests ϕ_P_0, P_1 to distinguish the hypotheses: H_0:  ρ( P_0) ≤ρ( P_1), versus H_1:  ρ( P_0) > ρ( P_1). We refer to the tests ϕ_P_0, P_1 as exact tests of relative fit. Notice that in contrast to the classical setting, where we hypothesize that one of the distributions (P_0, P_1) truly generated the samples, in the misspecified setup this assumption is no longer tenable. Instead, we hypothesize that one of the distributions (P_0, P_1) is closer to the data generating distribution. In general, the two hypotheses are no longer simple hypotheses and we need to take some care in designing the family of tests ϕ_P_0, P_1. The design of tests of relative fit (and closely related variants) have a rich history and form the basis for a class of tournament-based robust estimators <cit.>. For divergences like the Total Variation and the Hellinger distance, designing exact tests of relative fit can require strong regularity conditions akin to those that would be required to estimate these divergences. Surprisingly, in these cases, it is still possible to design approximate tests of relative fit under weak regularity conditions. More formally, suppose that for some ν≥ 1, we can design a test for the following null hypothesis: H_0:  νρ( P_0) ≤ρ( P_1). We refer to tests for this hypothesis as approximate tests of relative fit ϕ_P_0, P_1,ν. 
Under the null hypothesis, the distribution P_0 is closer than P_1 to by a factor ν≥ 1, which can ease the design of valid tests for this hypothesis. Robust tests for null hypotheses of the form in (<ref>) (for the Hellinger distance) were introduced by <cit.> and are discussed in detail in the work of <cit.>. In the context of estimation these approximate tests yield what are known as non-sharp oracle inequalities. In the context of inference, as we explore further in Section <ref>, inverting approximate relative fit tests will yield weaker guarantees. In Section <ref> we consider the design of tests of relative fit in concrete settings, but now proceed to study the implications of designing such tests for the construction of robust confidence sets. §.§ Exact and Approximate Robust Universal Confidence Sets We now propose to construct a confidence set by inverting a family of tests of relative fit. This is similar in spirit to the procedure described in Section <ref>. §.§.§ Exact Robust Universal Confidence Sets Suppose, for every ∈, the family of tests of relative fit ϕ_P_0, P_1 is valid, i.e. it controls the Type I error: _[ϕ_P_0, P_1(X_1,…,X_n) ]≤α,    ∀ (P_0, P_1) ∈_0 where _0 = { (P_0, P_1) ∈^2: ρ( P_0) ≤ρ( P_1)}. Then, for any fixed P_1 ∈, the confidence set we construct is the set of candidates P_0 which we fail to reject: C_α,n≡ C_α(X_1,…,X_n) := { P_0 ∈: ϕ_P_0, P_1 (X_1,…,X_n) = 0 }. The following result shows that irrespective of the choice of P_1 the above construction yields a valid confidence set for the projection distribution: For any fixed P_1 ∈, C_α,n is a uniformly valid (1-α) honest confidence set for the projection . For any ∈, _(∉ C_α,n ) = _( ϕ_, P_1 = 1 ) = _( ϕ_, P_1 ) ≤α using (<ref>) since (, P_1) ∈_0 for any choice of P_1 ∈. As in the well-specified case discussed earlier, this general result does not provide any guidance on how to choose P_1. We follow the idea of universal inference and first construct an accurate estimate of from a separate sample _1 and then construct the family of split tests of relative fit ϕ_P_0, from the remaining samples _0. We call the resulting confidence set the exact Robust Universal Confidence set: C_α,n≡ C_α (X_1,…, X_n) := {P_0∈: ϕ_P_0, (_0) = 0}. Let ∈ be any estimate of based on _1. Then, the exact robust universal confidence set C_α,n is a uniformly valid confidence set for , meaning that inf_∈_ (∈ C_α, n) ≥ 1 - α. The proof is straightforward noticing that conditional on _1, the claim reduces to the claim of Proposition <ref>. Concretely, for any ∈, _ (∉ C_α, n) =_ (ϕ_, ) = __1[ __0(ϕ_, (_0) | _1) ] ≤__1 (α) = α. The robust confidence set will often contain both the pilot estimate as well as the projection distribution (see Proposition <ref> in Appendix <ref> for a formal statement). This is similar to the classical universal inference procedure which in the well-specified case will often contain both the pilot estimate and the true sampling distribution. In universal inference this suggests that in order to obtain small confidence sets, we should aim to design to be a good estimate of the true sampling distribution . On the other hand in the misspecified case, this suggests that we should design to be a good estimate of the projection . Specifically, our pilot estimate should be tailored to the divergence measure ρ. We investigate the choice of and its effect on the size of the resulting confidence set further in Section <ref>. 
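To make the construction concrete, the following is a minimal schematic of the exact robust universal confidence set. The model is represented by a finite list of candidate distributions, and fit_pilot and relative_fit_test are user-supplied callables (the names are ours, not part of any established interface); the latter returns 1 when it rejects, at level alpha on the held-out split, the null hypothesis that the candidate P_0 fits at least as well as the pilot in the divergence rho.

import numpy as np

def robust_universal_set(X, model, fit_pilot, relative_fit_test, alpha=0.05, seed=0):
    # Split the data into D1 (used to fit the pilot) and D0 (used to run the relative-fit tests).
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    D1, D0 = X[idx[: len(X) // 2]], X[idx[len(X) // 2:]]
    P1_hat = fit_pilot(D1)      # ideally an estimate tailored to the divergence rho
    # Keep every candidate P0 that the split test of relative fit fails to reject.
    return [P0 for P0 in model if relative_fit_test(P0, P1_hat, D0, alpha) == 0]

Any of the divergence-specific tests constructed later in the paper can be plugged in as relative_fit_test; the validity argument above does not depend on how fit_pilot is chosen, only the size of the resulting set does.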
§.§.§ Approximate Robust Universal Confidence Sets In some cases, e.g., for the Hellinger distance and the TV distance, designing exact robust tests will require some (potentially strong) regularity conditions. However, in these cases one can design approximate tests of relative fit straightforwardly. Suppose, for any ∈, the family of approximate tests of relative fit ϕ_P_0, P_1, ν which controls the Type 1 error satisfies (<ref>) with _0 = { (P_0, P_1) ∈^2 : νρ( P_0) ≤ρ( P_1)} for some ν≥ 1. We will additionally make the mild assumption that our tests of relative fit do not reject (with probability at least 1-α) when comparing the relative fit of a distribution to itself, i.e.: sup_∈_ [ϕ_P,P,ν] ≤α for any fixed  P∈. This condition will be true for all the tests we introduce in Section <ref>. Let be any estimate of from _1. Then, the approximate robust universal confidence set, akin to (<ref>), is obtained by inverting the family of valid split tests ϕ_P_0, , ν constructed from the remaining samples _0: C_ν,α,n≡ C_ν,α(X_1,…,X_n) := { P_0 ∈: ϕ_P_0, , ν (_0) = 0 }. This confidence set may not cover the projection distribution . We will relax our goal to instead be to cover an approximate projection distribution. More formally, we relax the target of inference to be the ν-approximate projection set _ν defined as _ν = {P∈: ρ( P) ≤νρ() }. If a set C is a ν-approximate confidence set, we define its coverage by _(Q∈ C for some Q ∈_ν) = _(_ν∩ C ≠∅). Figure <ref> shows a schematic diagram to illustrate the notion of approximate coverage. When ν = 1, i.e. we invert an exact test, we guarantee that with probability at least 1 - α, the set C_ν,α,n contains . On the other hand, when ν > 1 we only guarantee that the intersection of C_ν,α,n with the collection of ν-approximate projections (in cyan) is non-empty. The set _ν is a collection of distributions that are as close to as (up to a factor ν). The approximate confidence set guarantee is most meaningful when ν is close to 1, or when the model misspecification is not too extreme, i.e. ρ() is small. Let ∈ be any estimate of based on _1. Suppose that our approximate relative fit tests are valid, and satisfy the condition in (<ref>). Then, the approximate robust universal confidence set C_ν,α,n is a uniformly valid ν-approximate confidence set for : inf_∈_ (_ν∩ C_ν,α, n∅) ≥ 1 - α. Fix any ∈. Let the event E = {∈_ν}. On the event E, (<ref>) implies _ (∉ C_ν,α,n |  E) = __1 (__0 (ϕ_, ,ν (_0)  | _1, E)  |  E) ≤α. On the complement of E, i.e., ∉_ν, _0 contains (, ). Thus, an analogous argument to that in the proof of Theorem <ref> can be used. Combining the two results, we obtain that, for all ∈, _ (_ν∩ C_ν,α, n = ∅) ≤_ (∉ C_ν,α, n |  E) (E) + _ (∉ C_ν,α, n |  E^∁) (E^∁) ≤α. As in the construction of the exact robust universal confidence set, one should aim to choose the pilot estimate as close as possible to . In the exact setting, the choice of the pilot estimate does not affect the validity of the resulting set and only affects its size. However, in constructing an approximate robust universal set, if we can ensure the pilot is accurate, then our approximate validity guarantees improve. Concretely, for some sequence κ_n we define: (κ_n) := {P∈ : ρ( P) ≤ρ() + κ_n}. If we can ensure that the pilot estimate is contained in (κ_n) with probability at least 1 - β for some sequence κ_n, then we can show that the constructed confidence set C_ν, α,n will intersect (κ_n) with high probability. 
For instance, if κ_n → 0 as n grows, then rather than simply intersecting the set of approximate projections _ν, we can now show that C_ν,α,n intersects a shrinking neighborhood around . More formally we have the following result (we omit its proof since it follows the same arguments as in Theorem <ref>): Let (κ_n_1) be defined as in (<ref>), and suppose that our pilot is accurate, i.e. we can ensure that with probability at least 1 - β, ∈(κ_n_1). Suppose further that our approximate relative fit tests are valid, and satisfy the condition in (<ref>). Then: inf_∈_((κ_n_1) ∩ C_ν,α, n∅) ≥ 1 - α - β. In this section, we have shown that inverting exact or approximate tests of relative fit yield robust exact or approximate confidence sets despite model-misspecification. We now turn our attention to the design and analysis of these tests. § DESIGNING TESTS OF RELATIVE FIT Our proposed method relies on designing valid tests of relative fit. In this section, we design exact tests of relative fit in KL and the density power divergences, and design approximate tests for the Hellinger, TV and IPM-based divergences. §.§ Kullback-Leibler Divergence To design an exact test of relative fit for the KL divergence we make a simple observation that there is a natural plug-in estimator of the difference in KL divergences. We can rewrite the difference in KL divergences as: ( P) - () = ∫logp_1/p where p and p_1 are the density of P and with respect to a common dominating measure. When we obtain samples from this suggests the following log split likelihood ratio test: ϕ_P = [ 1/n_0∑_i∈_0 T_i (P,) > t_α (P, ) ], T_i(P, ) ≡ T(X_i; P, ) = logp_1 (X_i)/p (X_i), where _0 is an index set of _0 and t_α (P, ) is chosen to ensure validity. This test was called the relative information fit test (RIFT) and studied in the work of <cit.> to study the relative goodness-of-fit of two candidate estimates. In our paper, we invert the same test in order to construct a robust universal confidence set. When the variance of T_i(P, ) (conditional on 𝒟_1) is finite, we can derive the asymptotic distribution (conditional on 𝒟_1) of the log split likelihood ratio via the CLT. Let T_n_0 (P,) = ∑_i∈_0 T_i(P, ) / n_0. Conditional on _1 and assuming that the variance _ [T(P_0, P_1)] < ∞, for any (P_0,P_1) ∈^2, √(n_0)( T_n_0 (P,) - _ T (P, ) ) ⇝(0, s_P^2 ) as n_0 →∞ where s_P^2 ≡ s_P^2 (_1) = _ [T_1^2] - _ T_1^2 can be estimated by ŝ_P^2 = 1/n_0∑_i∈_0 (T_i(P, ) - T_n_0)^2, and ⇝ denotes convergence in distribution (conditional on 𝒟_1). When assessing distributions P that are very similar to the pilot , it might be the case that s_P^2 is vanishingly small. Consequently, it is possible that s_P/s_P does not converge in probability to 1, and the CLT with estimated variance s_P^2 need not hold. Following <cit.> we modify each T_i(P,) by adding a small amount of independent Gaussian noise, i.e. we replace each T_i(P, ) above by T_i(P, ) + δ Z_i where Z_1,…,Z_n_0∼ N(0,1), for some small positive constant δ > 0 (we use δ = 0.01 but note that this has no practical effect and this modification simply eases the theoretical analysis). We denote the resulting statistic by T_n_0,δ(P, ) and the corresponding empirical standard deviation by s_P,δ. Then, we define the KL Relative Divergence Fit () set as _, n≡_α, n () = {P∈ : T_n_0, δ(P, ) ≤z_αŝ_P,δ/√(n_0)} where z_α is a 1-α quantile of standard normal distribution. The following result provides asymptotic and non-asymptotic guarantees for the set _, n. 
Suppose that 𝒬 is such that for some 0 < ξ≤ 1 the 2+ξ moments M_P_0,P_1 := _ |T(X; P_0, P_1) - _T(X; P_0, P_1)|^2+ξ are finite, for any (P_0,P_1) ∈^2, then inf_∈_ (∈_, n) ≥ 1 - α - C n^-ξ/2, where C < C' (1 + sup_(P_0,P_1) ∈𝒫^2 M_P_0,P_1) /δ^(2+ξ) for a universal constant C'. We give a formal proof in Appendix <ref>. The claim follows as a consequence of the Berry-Esseen bound for the studentized statistic <cit.>. Some care is required to handle the degeneracy (discussed above) when the variance of the summands can be small and to handle the randomness in the pilot estimate . We can now revisit the failures of universal inference discussed in Section <ref>. Recall that Example <ref> illustrates the instability of the KL projection because likelihood ratios may not be bounded. The KL set does not resolve this weakness since the KL set uses the same split likelihood ratio statistic as for the universal confidence set <cit.> and its 2 + ξ moment is not uniformly bounded in Example <ref>. However, the KL set does resolve the failure highlighted in Example <ref>. Assume the same model as in Example <ref>. Suppose we take the pilot estimator to be the MLE. The KL set (<ref>), with an equal sized split into _0 and _1, covers the KL projection at the nominal level asymptotically. This result follows directly from Theorem <ref>, since in this example all of the relevant log likelihood ratios are uniformly upper bounded. It is worth noting that both the standard universal set, and the set _, n are based on essentially the same split likelihood ratio statistic, and it is perhaps surprising that the standard universal set fails but _, n succeeds in guaranteeing coverage. Despite being based on the same statistic, the two sets use very different thresholds. It is easy to see that one can rewrite the split LRT confidence set in universal inference <cit.> as: _sLRT= {P∈ : T_n_0 (P,) ≤log (1/α)/n_0}. The threshold used in (non-robust) universal inference decays at the fast rate of order O(1/n_0) compared to that of the robust universal confidence set _, n whose threshold decays at the rate O(1/√(n_0)). When the model is misspecified the (non-robust) universal set shrinks too rapidly leading to the failure highlighted in Example <ref>. The confidence set _, n is constructed by approximating the distribution of the test statistic in (<ref>). When likelihood ratios are uniformly upper bounded it is straightforward to construct finite-sample valid sets via an exponential tail bound. For example, the finite-sample exact robust universal confidence set based on the Hoeffding bound is: _HF,B,n = {P∈ : T_n_0 (P,) ≤ B√(log(1 / α)/2n_0)}, where B is such that |T_i (P_0, P_1) - 𝔼_ T(P_0,P_1)| ≤ B for all (P_0,P_1)∈^2. In this case we assume that the upper bound B is known to the statistician. One can generalize this construction in various ways. When the statistic is assumed to only have finite variance one can use Chebyshev's inequality to construct a finite-sample valid set. When in addition to boundedness the statistic might have small variance one can use empirical Bernstein-type inequalities to construct finite-sample valid confidence sets. We explore these further in Appendix <ref>. We compare the empirical performance of _, n and these finite-sample valid sets in Section <ref>. §.§ Density Power (DP) Divergences We can construct an exact test of relative fit for the family of DP divergences following the same strategy as in KL case. 
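Before writing down the DP statistic, it is worth recording how the membership check shared by these sets can be implemented. The sketch below is ours; the input T is the array of per-sample summands evaluated on D_0 for a candidate P against the pilot (the log-ratio log(p_1/p) in the KL case, or the DP summand defined next), and B is the known bound used by the Hoeffding variant.

import numpy as np
from scipy.stats import norm

def in_rdf_set(T, alpha=0.05, delta=0.01, rng=None):
    # studentized check: retain P if the delta-noised mean summand is below z_alpha * s_hat / sqrt(n0)
    if rng is None:
        rng = np.random.default_rng(0)
    T = np.asarray(T, dtype=float) + delta * rng.standard_normal(len(T))   # delta-noise, as in the text
    s_hat = np.sqrt(np.mean((T - T.mean()) ** 2))                          # empirical s.d. of the noised summands
    return T.mean() <= norm.ppf(1 - alpha) * s_hat / np.sqrt(len(T))

def in_hoeffding_set(T, B, alpha=0.05):
    # finite-sample check, assuming the centered summands are bounded in absolute value by B
    T = np.asarray(T, dtype=float)
    return T.mean() <= B * np.sqrt(np.log(1 / alpha) / (2 * len(T)))

For the DP set the same checks are applied, with the log-ratios replaced by the DP summands that we now define.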
Let T_n_0(P, ) = _β (_n_0 P) - _β (_n_0) = ∫{ p^1+β - p_1^1+β}λ̣- ( 1+1/β) 1/n_0∑_i∈_0[ p^β - p_1^β] (X_i) := 1/n_0∑_i∈_0 T_i(P, ), where _n_0 is the empirical measure constructed from _0. The split statistics T_i(P, ) encode the difference in average β-powered densities (penalized with L_1+β norm) rather than the log-likelihood ratio evaluated on the sample _0 when β > 0. Then, conditional on 𝒟_1, _ T(P,) = _β ( P) - _β (). We define the DP set _,n exactly as in (<ref>), and observe that the analogue of Theorem <ref> holds (with an identical proof) for _,n. Recall that KL set was unable to resolve the instability problem in Example <ref>. This is because the likelihood ratios in this model can blow up. On the other hand the DP set relies on the statistics in (<ref>), which are bounded for any β > 0, provided the relevant densities are well-defined. Formally, we have the following result: Suppose we have the same model as in Example <ref>. For sufficiently large n, for any pilot estimator , the DP set _,B,n defined as in (<ref>) with B=1 + 1/β, with an equal sized split into _0 and _1, covers the DP projection at the nominal level. A formal proof can be found in Appendix <ref>. The key observation is that the DP projection is (0) for a sufficiently large sample size for any fixed β > 0. The DP projection in this example is more stable than the KL projection (1/2), considering that ϵ_n is much closer to 0 than 1/2. Consequently, we show that the DP set will cover the target of inference (0) with high probability. We emphasize that the MLE is also (0) with high probability, yet both universal split LRT and KL set based on the MLE fail to cover the KL projection due to the instability of the population projection distribution. §.§ Hellinger Distance The Hellinger distance (or the difference in Hellinger distances) does not lend itself to a natural plug-in estimator. The usual method of estimating the Hellinger distance proceeds instead via some type of non-parametric density estimation, which in turn requires additional smoothness assumptions. Since our goal in this paper is to design assumption-light methods, we instead relax the target of inference. This in turn opens the door for designing approximate tests of relative fit. Our strategy will be to modify the ρ-estimator[The name “ρ-estimator” comes from the standard symbol used for the Hellinger affinity.] <cit.> which is a density estimator tailored to the Hellinger loss. Define the split ρ-test statistic T_n_0 (P, ) := Δ (P, ) + 1/n_0∑_i∈_0ψ( √(p_1/p) (X_i) ),    Δ (P_0, ) = 1/√(2)[^2(P_0, P) - ^2(, P) ], where P = (P + ) / 2 and ψ: [0,∞] ↦ [-1,1] is a non-decreasing Lipschitz function satisfying ψ (x) = - ψ (1/x). The choice of ψ we adopt throughout this paper, is to take ψ(u) = (u-1)/√(1+u^2) which comes from work on the ρ-estimator <cit.>. The function ψ is a bounded transformation of the likelihood ratio, and due to this boundedness the split ρ-test statistic is tightly concentrated around its expectation. The following proposition, which follows directly from Proposition 11 of <cit.>, characterizes the expectation of the split ρ-statistic. For any P^*, P_0, P_1, (2 + √(2)) _T_n_0 (P_0,P_1) ≤(3 + 2√(2)) ^2 (, P_0) - ^2 (, P_1). This proposition ensures that _T_n_0(P_0, P_1) is negative for any ∈ when the null hypothesis H_0 : (3+2√(2)) ^2 (, P_0) ≤^2 (, P_1) is true. 
This proposition in turn suggests that T_n_0(P_0, ) could be a useful statistic for designing an approximate test of relative fit in the Hellinger distance with ν = √(3+2√(2)). We define the Hellinger Relative Distance fit () set _,n exactly analogous to the KL set (<ref>) (obtained from a δ-corrupted version of the statistics T_n_0(P, )). The following result follows by combining Theorems <ref> and <ref>, and noticing that the split statistic is uniformly upper bounded. Let ν = √(3 + 2√(2)). For any 𝒬, inf_∈_ (_ν∩_, n∅) ≥ 1 - α - C/√(n), where C < C'/δ^3 (for a universal constant C'). We are now in a position to revisit Example <ref>. In Proposition <ref>, we showed that changing the target of inference to DP projection could address the failure of universal inference. In a similar vein, targeting the Hellinger projection resolves the failure, but interpreting the resulting guarantee requires some nuance as set may not cover the exact Hellinger projection, and is only guaranteed to cover a ν-approximate projection. In the case of Example <ref>, it will turn out for sufficiently small values ϵ the ν-approximate Hellinger projection set is a singleton (and equal to the exact Hellinger projection). As highlighted earlier, when the amount of model-misspecification is not too large the distinction between the ν-approximate projection set and the exact projection can be small. Assume the same model as in Example <ref>. Suppose we take the pilot estimator to be the Minimum Hellinger Distance estimator <cit.>, = _P ∈ (_n_1 P). For sufficiently large n (> 20), the Hellinger set _,n, with an equal sized split into _0 and _1, covers the Hellinger projection ≡(0) at the nominal level asymptotically. A formal proof is provided in Appendix <ref>. It will turn out that in this example the ν-approximate Hellinger projection is exactly the Hellinger projection when ϵ≤ 0.05, and is the entire model , otherwise. This means that for larger values of ϵ, approximate validity is trivial, yet vacuous, as the target of inference can be any distribution in . This highlights the downside of targeting the ν-approximate projection set: when the model-misspecification is severe the resulting guarantees might be vacuous. §.§ Integral Probability Metrics (IPMs) Our proposal for a ν-approximate test of relative fit for IPMs is inspired by the work of <cit.> and <cit.>, where a similar idea was used to design robust density estimates. Recall the definition of the IPM, _(P_0, P_1) = sup_f ∈( _P_0 (f) - _P_1 (f) ). Associated with any pair of distributions is a so-called witness function f^*_(P,Q) = sup_f ∈ ( _P (f) - _Q (f) ), which witnesses the largest mean discrepancy between the two distributions. The split test statistic is then defined by: T_n_0 (P, ) = ∫ f^*_(P, )P̣ + /2 - 1/n_0∑_i ∈_0 f^*_(P, ) (X_i). The usefulness of this statistic is highlighted by the following characterization of the expectation of the statistic. For any P^*, P_0, P_1, 2 _ T (P_0,P_1) ≤ 3 (, P_0) - (, P_1). See Appendix <ref> for a formal proof. For the TV IPM this result appears in the work of <cit.> and <cit.>, and our result generalizes their argument to other IPMs. Proposition <ref> ensures that _ T(P,Q) is negative for all ∈ under the null hypothesis in (<ref>) with ν=3. We can construct _ by inverting the IPM approximate relative fit test, to obtain an identical guarantee to the one in Corollary <ref> (now with ν = 3). 
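Before specializing to particular witness functions, the split statistic itself can be sketched generically: given any routine that evaluates the witness function of the pair (P, pilot), the rest is a quadrature against the mixture density and an average over D_0. The grid-based integration below is our own simplification for univariate densities; the function names are illustrative.

import numpy as np

def ipm_split_statistic(witness, pdf_P, pdf_pilot, X0, grid):
    """witness(x) evaluates f* for the pair (P, pilot); uniform grid assumed."""
    dx = grid[1] - grid[0]
    mix = 0.5 * (pdf_P(grid) + pdf_pilot(grid))        # density of (P + pilot)/2
    integral = np.sum(witness(grid) * mix) * dx        # expectation of f* under the mixture
    return integral - np.mean(witness(X0))             # minus the empirical average on D_0
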
To further illustrate the construction of IPM approximate relative fit tests we consider three widely used IPMs—total variation distance, Wasserstein distance, and maximum mean discrepancy—where the witness functions are more explicit. Total Variation Distance. Suppose ρ(P_0 P_1) = (P_0, P_1) where is the total variation distance. This is an IPM over the function class = {f : f≤ 1}. An equivalent definition is (P_0, P_1) = sup_A | P_0 (A) - P_1(A) | = P_0 () - P_1 () where = {p_0 > p_1} is the Yatracos set with maximal discrepancy between P_0 and P_1. The witness function is f^*_(P_0, P_1) (x) = (x ∈) - 1/2. An immediate advantage of targeting the TV projection comes from that f^* is uniformly bounded. Given samples , consider the following test statistic which referred to as the split Scheffé statistic: T_n_0 (P,) = P () + ()/2 - _n_0(), _n_0 () = 1/n_0∑_i∈_0 (X_i ∈) where is redefined to be = {p > p_1}. The split Scheffé statistic, as the name suggests, is a sample-split analogue of the Scheffé estimate that was originally proposed in <cit.> building upon the work of <cit.>. Wasserstein Distance. Suppose ρ(P_0 P_1) = _1 (P_0, P_1) is the 1-Wasserstein distance (or Kantorovich metric). The associated function class is = {f: Lf≤ 1 } where Lf := sup{ |f(x) - f(y) | / x - y : x y } is the Lipschitz semi-norm. Although the ideas are much more general, we limit our discussion to univariate distributions on a compact support, i.e, = [0,b]. In this case, the witness function is explicit and easy to describe <cit.>. Define t; P_0, P_1 = ( F_P_1(t) > F_P_0 (t) ) - ( F_P_0 (t) > F_P_1 (t) ) ∈{0, ± 1 }, where F_P denotes the CDF of P. The witness function is f^*_(P_0, P_1) (x) = ∫_0^xt; P_0, P_1ṭ <cit.>. A direct application of the split statistic (<ref>) yields T_n_0 (P,) = 1/2∫t; P, ( _n_0 (t) - F_P (t) + F_ (t)/2) ṭ, where _n_0 (t) = 1/n_0∑_i∈_0(X_i ≤ t) is the empirical distribution. This particular split statistic is a sample-split analogue of the ℓ-estimator <cit.>. Maximum Mean Discrepancy. Suppose that is a unit ball of the reproducing kernel Hilbert space (RKHS) , with kernel k(x,y), and RKHS norm ·_ℋ, i.e., = {f: f≤ 1}. Then the corresponding IPM (<ref>) is called the Maximum Mean Discrepancy <cit.>. It was shown by <cit.> that the analytic witness function f^*_(P, ) = μ_P - μ_/μ_P - μ_ where μ_P(·) := 𝔼_P [k(X,·)] is the mean embedding of P. The split statistic T_n_0 (P, ) in this case reduces to an average of the (negative) witness function - _n_0 (f^*_(P, ) ) if the kernel k(·,·) is symmetric. In this case, the sign of the split statistic captures, in expectation, whether the population is closer to P or based on mean embeddings. §.§ Unified Sufficient Conditions for any Divergence Measure In this section we unify some of the treatment of the previous sections by giving conditions on split test statistics which ensure the exact and approximate validity of the resulting confidence sets. Given data , we consider tests of the form: ϕ_P_0, P_1, ν = ( T_n(P_0,P_1) > t_α(P_0,P_1)). We assume that the test statistic satisfies the following two additional conditions: T is anti-symmetric, i.e., T(X; P_0, P_1) = - T(X; P_1, P_0) for all P_0, P_1 ∈. There exists some fixed, positive numbers ν, c_1 ≥ 1 such that for all ∈, and any fixed P_0, P_1 ∈, c_1 _ T (; P_0, P_1) ≤νρ ( P_0) - ρ ( P_1). Assumption <ref> ensures that _ T (; P_0, P_1) is always negative for all ∈ when the null hypothesis (<ref>) is true. 
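As a concrete instance satisfying these conditions, the split Scheffé statistic introduced above is bounded by construction and is particularly easy to compute for univariate densities; the discretization in the sketch below is our own choice.

import numpy as np

def split_scheffe_statistic(pdf_P, pdf_pilot, X0, grid):
    p, p1 = pdf_P(grid), pdf_pilot(grid)
    dx = grid[1] - grid[0]                         # uniform grid assumed
    A = p > p1                                     # Yatracos set {p > p1}
    P_A, P1_A = np.sum(p[A]) * dx, np.sum(p1[A]) * dx
    emp_A = np.mean(pdf_P(X0) > pdf_pilot(X0))     # empirical measure of A on D_0
    return 0.5 * (P_A + P1_A) - emp_A
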
For instance, Propositions <ref> and <ref> establish the analogue of Assumption <ref> for Hellinger and IPM projection, respectively. Now, we may define ρ-set _ρ,n as in KL set (<ref>) by inverting the test based on (a δ corrupted version of) the statistic T: _ρ, n := {P∈ : T_n_0, δ(P, ) ≤z_αŝ_P,δ/√(n_0)} If the test statistic is bounded, i.e. T(X;P_0,P_1) ≤ B for any pair of distributions P_0,P_1 ∈𝒫^2 then we can define the finite-sample ρ-set as in (<ref>): _ρ,B,n = {P∈ : T_n_0 (P, ) ≤ B√(log(1 / α)/2n_0)} The following general result holds: Suppose that the test statistic satisfies Assumptions <ref> and <ref>. * Suppose that 𝒬 is such that for some 0 < ξ≤ 1 the 2+ξ moments M_P,Q := _ |T(X; P, Q) - _T(X; P, Q)|^2+ξ are finite, for any (P,Q) ∈^2, then inf_∈_ (∈_ρ, n) ≥ 1 - α - C n^-ξ/2, where C < C' (1 + sup_P,Q M_P,Q) /δ^(2+ξ) for a universal constant C'. * Suppose that T(X; P,Q) ≤ B, then: inf_∈_ (_ν∩_ρ,B, n∅) ≥ 1 - α. The proof of the validity claims follow the same structure as the proof of Theorem <ref>. The crucial Assumption <ref> distills out the key property of the test statistics that is useful in ensuring asymptotic or finite-sample validity. With these general validity results in place, we now turn our attention to studying the size of the resulting robust universal sets. § SIZE OF ROBUST UNIVERSAL CONFIDENCE SETS In the well-specified setting, for statistical models which satisfy classical regularity conditions, <cit.> showed that the Hellinger diameter of the split LRT confidence set depends on two factors: the size of determined by its (local) Hellinger bracketing entropy, and the closeness of to in the Hellinger distance. In a similar vein, in this section we show that the size of the universal sets, under certain regularity conditions, can be upper bounded by two factors: roughly, measuring the quality of the pilot estimate, and the size of statistical model. In the misspecified setting, we would like the robust universal set to shrink around its target at a fast rate. To measure the (directed) divergence between two sets measured in a divergence ρ and with respect to outside of , we define the ρ_^-divergence motivated by the directed Hausdorff distance. For a given divergence ρ and a collection of distributions S_1 ⊂, we define an ϵ-fattening of S_1 by: S_1 ⊕ϵ := ∪_Q ∈ S_1{P ∈ : ρ ( P) ≤ρ ( Q) + ϵ}. Now given two collections of distributions S_0, S_1 ⊂, we define the ρ_^-divergence by ρ^_ (S_0, S_1) = inf{ϵ≥ 0 : S_0 ⊆ S_1 ⊕ϵ}. ρ^_ (S_0, S_1) is the minimum ϵ-fattening of S_1 with reference to which contains S_0. To express the rate at which the robust universal sets shrink, we use the Rademacher complexity of ℱ_T, 𝒫, a function class which depends on the test statistic of choice, and the statistical model 𝒫. Concretely, we define, ℱ_T, 𝒫 := {f: f(x) := T(x; P,Q),  P,Q ∈𝒫}. We denote the Rademacher complexity of this class by ℜ_n(ℱ_T, 𝒫): ℜ_n(ℱ_T, 𝒫) := 𝔼[ sup_f ∈ℱ_T, 𝒫1/n∑_i=1^n R_i f(X_i)], where R_i are i.i.d. Rademacher random variables. In some of the cases we have considered in this paper, under additional regularity conditions the complexity measure ℜ_n(ℱ_T, 𝒫), can be related to a complexity measure of the underlying model 𝒫 using a standard contraction argument <cit.>: Suppose that , and the pilot estimate are distributions supported on some compact set 𝒞, with density with respect to the Lebesgue measure which are upper and lower bounded by constants. Then, for the test statistics introduced in Sections <ref>,<ref> and <ref>, ℜ_n(ℱ_T, 𝒫) ≲ℜ_n(𝒫). 
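In practice, the unified construction described above amounts to a single loop over candidate distributions with a user-supplied split statistic. The sketch below is ours and uses the asymptotic studentized threshold; statistic_fn is an assumed callable returning the vector of split statistics for a candidate parameter against the pilot.

import numpy as np
from scipy.stats import norm

def rho_set(statistic_fn, theta_grid, X0, alpha=0.05, delta=0.01, seed=0):
    """Invert the relative-fit test over a grid of candidate parameters."""
    rng = np.random.default_rng(seed)
    n0, kept = len(X0), []
    for theta in theta_grid:
        T = statistic_fn(theta, X0) + delta * rng.standard_normal(n0)
        if T.mean() <= norm.ppf(1 - alpha) * T.std(ddof=0) / np.sqrt(n0):
            kept.append(theta)
    return np.array(kept)
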
Finally, to characterize the quality of the pilot estimator , we say that the is an η_n-consistent estimator if ρ () - ρ() = O_ (η_n), where we use the standard big O in probability notation to indicate stochastic boundedness. With these preliminaries in place, we have the following result for the size of the ρ-set obtained by inverting a finite-sample valid relative fit test. The proof will be given in Appendix <ref>. Suppose that (<ref>) holds and sup_(P, Q)∈^2 |T(P, Q) - 𝔼 T(P,Q)| ≤ B. Fix any projection distribution , and recall the collection _ν in (<ref>). Then the robust universal confidence set _ρ,B,n in (<ref>), for an equal sized split into 𝒟_0 and 𝒟_1, satisfies for any ∈, ρ_^( _ρ,B,n, _ν) ≤ O_( η_n + ℜ_n(ℱ_T, 𝒫) + B√(log(1/α)/n)). Theorem <ref> states that the directed ρ_^-divergence between the exact robust universal confidence set and its target shrinks to zero at the prescribed rate, since _ν is a singleton {} when ν = 1. One can no longer show such a result for the ν-approximate robust universal confidence set even with an infinite number of observations. This is because, conditional on _1, the split test ϕ_P, , ν is guaranteed to achieve (exponentially) small Type 2 error uniformly over ∈ only for distributions P which are at least νρ() away from . Nevertheless, Theorem <ref> characterizes the rate at which _ρ,B,n shrinks to _ν. Theorem <ref> also shows how the size of the set depends on the choice of . When possible we should choose a pilot estimate which converges to the target at a fast rate to ensure that the term η_n is sufficiently small. A sensible choice is often a minimum distance estimator <cit.> which is not only a consistent estimator of under some regularity conditions but is also robust to some misspecification in its corresponding distance <cit.>. § SIMULATIONS In this section, we evaluate our proposed exact and approximate robust universal confidence sets in two particular setups—Overdispersion and Contamination—and demonstrate the advantages of the methods we propose. §.§ Overdispersion Overdispersion is a classic example of model misspecification where the true distribution has larger variance than what can be represented by the hypothesized model. Specifically, consider a case of count data generated from the negative binomial distribution with mean 𝔼_ (X):= θ^* and variance 𝕍_ (X) = κθ^* where the positive constant κ represents the dispersion ratio. Suppose a statistician hypothesized a Poisson model 𝒫_Θ = {Poi(θ) : θ∈ℝ_+} to best describe . Since the mean and the variance are the same for the Poisson distribution (implicitly assuming κ=1), the dispersion ratio κ captures the severity of the model misspecification. Figure <ref> shows ρ (Poi(θ)) with ρ = , , across the dispersion ratio. Notice that KL projection is the true mean θ^* (= 10) regardless of the dispersion ratio whereas Hellinger and TV projection gets smaller as the true variance is more inflated. The split LRT is sensitive to the misspecification. As highlighted in Section <ref>, the split LRT confidence set (_sLRT) may fail to cover the KL projection unlike the KL set (_) even with the same choice of θ_1 and the same log split likelihood-ratio statistic. Figure <ref> contrasts the performance of _sLRT and _ based on 1000 replicates of 200 simulated observations. In computing the confidence sets, the observations are equally split in half and we choose θ_1 to be the sample mean (which is the MLE) of the first half samples. 
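That comparison is straightforward to reproduce in outline. The sketch below is ours, omits the δ-correction for brevity, and checks in each replication whether Poi(10), the KL projection, survives the split-LRT cut and the KL-set cut.

import numpy as np
from scipy.stats import nbinom, poisson, norm

def covers(theta_proj=10.0, kappa=6.0, n=200, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    r, p = 10.0 / (kappa - 1.0), 1.0 / kappa           # negative binomial: mean 10, variance kappa*10
    X = nbinom.rvs(r, p, size=n, random_state=rng)
    X1, X0 = X[: n // 2], X[n // 2 :]
    theta1 = X1.mean()                                 # pilot = MLE on D_1
    T = poisson.logpmf(X0, theta1) - poisson.logpmf(X0, theta_proj)
    in_slrt = T.mean() <= np.log(1 / alpha) / len(X0)
    in_kl = T.mean() <= norm.ppf(1 - alpha) * T.std(ddof=0) / np.sqrt(len(X0))
    return in_slrt, in_kl

res = np.array([covers(seed=s) for s in range(1000)])
print("split-LRT coverage:", res[:, 0].mean(), "  KL-set coverage:", res[:, 1].mean())
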
As the misspecification gets more severe (larger κ), the empirical coverage of KL projection parameter (θ̃) decreases for _sLRT. When the dispersion ratio becomes larger than 3, _sLRT fails to achieve the nominal 95% coverage whereas _ maintains the validity regardless of how severe the misspecification is. Both the center and the right panel depict the size of the estimated confidence set varying over the dispersion ratio but from a different perspective. The former is based on the maximal excess KL divergence from the KL projection (which can be at most twice the KL-diameter of the set) whereas the latter is based on the L_2 distance over the parameter space. It is not surprising that compared to _, _sLRT is smaller in the L_2 sense and is closer to in an excess divergence sense. Beyond KL projection Unlike the KL projection, the Hellinger and TV projections are different for different degrees of overdispersion. Our target of inference regarding Hellinger and TV distance is ν-approximate projection rather than the projection as seen in the left panel of Figure <ref>. When the factor κ≥ 6 the ν-approximate target for both Hellinger and TV distance includes any θ∈ℝ_+. For values of dispersion ratio κ≥ 6, the ν-approximate projection for both the Hellinger and TV distances becomes and thus the approximate coverages are trivially 100%. Once again this highlights that the approximate projection is a meaningful target only when the model misspecification is not too severe. Figure <ref> summarizes the performance of approximate sets regarding Hellinger (_) and TV distance (_) based on 1000 replicates of 200 simulated observations. We choose the minimum distance estimator for θ_1 for both _ and _. Both _ and _ yield 100% empirical coverage—defined as a proportion of the confidence set that intersects _ν—across all dispersion ratios except almost well-specified case (0.01% dispersion) with 97.4% and 99.1% coverage, respectively. This conservatism is expected because for these divergences we have relaxed our target of inference to be the set of ν-approximate projections. Nevertheless, this does not mean that the Hellinger and TV sets are vacuously large. The center and right panel of Figure <ref> show the diameter of the set in Hellinger or TV distance sense, or Euclidean sense. The size of the set increases as the misspecification exacerbates regardless of distance measure. In general, _ is larger than _. _ behaves closer to _ under slight to moderate overdispersion and to _ as the overdispersion becomes severe. Comparison between asymptotic and finite sample valid sets Figure <ref> compares the various TV set when the is a 32% variance inflated negative binomial—Berry-Esseen (_), Hoeffding bound (_HF), empirical Bernstein bound <cit.>, and empirical Bentkus bound <cit.>. See Appendix <ref> for explicit forms of each confidence set. In all cases, we choose the same minimum TV distance estimator θ_1. The KL set dominates all finite sample valid confidence sets considered in this section, despite its validity relying on asymptotics. The finite sample valid sets are too conservative (and yield a meaningless set =) when only a few observations are available (n ≤ 50). Although our paper does not primarily focus on obtaining the tightest finite-sample valid confidence set, leveraging the variance _(X) can often be beneficial when constructing the confidence set. In this example, _EBS and _EBK outperform _HF since the Bernstein and Bentkus bounds are more sensitive to the variance. 
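The projection curves underlying this subsection reduce to one-dimensional grid searches and can be recomputed along the following lines; the truncation point, the grids and the value of κ are our own choices.

import numpy as np
from scipy.stats import nbinom, poisson

def projections(kappa, theta_grid=np.linspace(5, 15, 201), xmax=400):
    xs = np.arange(xmax + 1)
    q = nbinom.pmf(xs, 10.0 / (kappa - 1.0), 1.0 / kappa)      # overdispersed truth (kappa > 1)
    divs = {
        "KL": lambda p: np.sum(np.where(q > 0, q * np.log(q / np.maximum(p, 1e-300)), 0.0)),
        "Hellinger": lambda p: np.sqrt(0.5 * np.sum((np.sqrt(q) - np.sqrt(p)) ** 2)),
        "TV": lambda p: 0.5 * np.sum(np.abs(q - p)),
    }
    return {name: theta_grid[np.argmin([d(poisson.pmf(xs, th)) for th in theta_grid])]
            for name, d in divs.items()}

print(projections(kappa=4.0))   # KL projection stays near 10; Hellinger and TV drift lower
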
§.§ Contamination Consider the following contaminated data generating distributions which are mixtures of Gaussians. This simulation setup is used in the work of <cit.>. _1 = 0.99 N(0, 1) + 0.01 N(0, 30^2) (Symmetric) _2 = 0.94 N(0, 1) + 0.01 N(20, 20^2) + 0.05 N(-30, 20^2) (Asymmetric) _3 = 0.7 N(2, 1) + 0.2 N(-2, 1) + 0.1 N(0, 30^2) (Heavily Asymmetric) For each case, we denote _ to be an uncontaminated distribution that does not include the outlying noise distributions. Consider a location-scale family of Gaussian distribution 𝒫_Θ = {N(μ, σ^2 ) : (μ, σ)∈Θ} as a working model. (See Appendix <ref> for additional simulations for a location family with fixed scale.) Our goal is to evaluate the empirical performance—coverage and size—of robust universal confidence sets for the (approximate) projection of the various contaminated distributions onto 𝒫. Figure <ref> shows the mean and standard deviation of the projection distribution with respect to the KL, DP, Hellinger and TV distances along with the mean and standard deviation of the contaminated and uncontaminated distributions. The KL projection parameter is the same as the parameters of contaminated distribution in all cases. The DP projection parameters, get closer to uncontaminated parameters as the β parameter increases. The Hellinger projection is the closest to the uncontaminated parameters among all projections we considered, however, the size of _ν is much larger than that of approximate TV projection. The set _ν for both Hellinger and TV distance is quite large for the heavily misspecified case (Case 3). Practically, we recommend targeting DP projection with a reasonable choice of β (> 0.05) for this heavily misspecified case. Figure <ref> illustrates the empirical coverage and size of split LRT and ( and ) sets based on 1000 replications. For split LRT and KL sets, we choose θ̂_1 to be the quasi-MLE, whereas, for the DP set, we use the minimum DP divergence estimator. The split LRT fails to cover KL projection in all cases whereas sets achieve the nominal coverage with large enough sample sizes. The DP sets show superior coverage than KL set across all sample sizes. Such a target coverage improvement is more evident in the smaller sample sizes below 200, and as β gets larger, i.e., the DP set targets a more stable projection. Regardless of what divergence measure ρ is of interest, the size of the confidence set with reference to ρ shrinks to zero as the sample size increases. Again, the extremal values of _^ (_, ) for sample sizes below 500 highlight the instability of KL projection. Figure <ref> shows the maximal ρ-distance of and set from based on 1000 replications along with the ρ(_ν), a set of ρ-distance from to approximate projection _ν. ρ(_ν) illustrates the same phenomena as in Figure <ref> but with respect to each distance. Theoretically, we can only claim the shrinkage of set up to _ν. This can be seen in Figure <ref> for both Hellinger and TV set as the maximum excess distance from reaches νρ(_ν) with large enough samples. sets shrink beyond _ν in this example: the Hellinger set converges to a set quite close to with large enough sample size, while the TV set converges to a set around which does not appear to shrink with sample size. § DISCUSSION In this paper, we presented a general method for constructing uniformly valid exact and approximate confidence sets for various projection distributions under weak regularity conditions in the presence of possible model misspecification. 
We demonstrated that the universal inference procedure <cit.> can fail catastrophically even in simple examples, under fairly benign model-misspecification. We then showed that the robust universal inference framework can address these failures, providing methods which are robust and can meaningfully target different projection distributions. Despite data splitting playing an essential role in constructing an assumption-light universal confidence set, it also poses inefficiency and algorithmic randomness since only a random subset of observation is used in constructing the split statistics. This can be partially addressed with crossfitting where we average the split statistic with that after swapping the role of _0 and _1. In contrast to the well-specified setting where the validity of the crossfit set is immediate, more care is needed under model-misspecification. We investigate the validity of the crossfit set in Appendix <ref>. The papers <cit.> study many variants of universal inference (including constructing confidence sequences instead of confidence sets, to combining multiple sample-splits) and investigating these variants in the context of the robust universal inference framework of this paper would be interesting. Finally, our paper brings to the forefront the role of pairwise tests of fit (and relative fit) together with sample-splitting, in designing broadly applicable inference methods. We expect this basic insight to have further implications in other contexts, for instance, in designing universal inference procedures in other settings where likelihood-based methods are inapplicable. § ACKNOWLEDGEMENTS This work was partially supported by funding from the NSF grants DMS-1713003, DMS-2113684 and CIF-1763734, as well as an Amazon AI and a Google Research Scholar Award to SB. The authors are grateful to Arun Kuchibhotla, Aaditya Ramdas and Ian Waudby-Smith for helpful discussions regarding finite-sample valid confidence sets. plainnat § PROOFS FROM SECTION <REF> §.§ Example <ref> Note that the KL projection = ( 1/2 ). Consider the event E where all of the observed samples X_1,…,X_n are 0. We can see that, _(E) = 1 - _( ∑_i=1^n X_i > 0 ) ≥ 1 - _[ ∑_i=1^n X_i ] = 1 - n ϵ_n. Now, on the event E, it is clear that the MLE = (0). Let us denote the split-sample universal set by C_α(X_1,…,X_n), where we assume for simplicity that 𝒟_0 and 𝒟_1 each have n/2 samples. We then have, _(∉ C_α(X_1,…,X_n)| E) = _(ℒ_0()/ℒ_0() ≤α | E) = _(1/2^n/2≤α | E) = 1, for n ≥ 2 log_2(1/α). As a consequence, we can upper bound the coverage of the universal set by, _(∉ C_α(X_1,…,X_n)) ≥_(E) _(∉ C_α | E) ≥ 1 - n ϵ_n. Thus, we see that if 0 < ϵ_n ≤β/n for some β > 0, and n ≥ 2 log_2(1/α) then the universal set has coverage at most β. Choosing β < (1 - α) we see that the universal set fails to have its advertised coverage. §.§ Example <ref> The KL projection is ( 3/4 ). For simplicity we suppose that n is even, and that 𝒟_0 consists of the first n/2 samples and 𝒟_1 consists of the remaining samples. For a constant β > 0, let us consider the events E_0, E_1 defined as, E_0 = ( ∑_i=1^n/2 X_i < n/4 - β√(n)) E_1 =( ∑_i=n/2^n X_i < n/4 ). When events E_0 and E_1 hold we can see that the universal set C_α(X_1,…,X_n) fails to cover . In more detail, on the event E_1 the MLE, is (1/4) and thus, _(∉ C_α(X_1,…,X_n) | E_0, E_1) = _(ℒ_0()/ℒ_0() ≤α | E_0, E_1) ≤_(1/3^2β√(n)≤α) = 1, provided that n ≥ (log_3(1/α))^2/(4β^2). Thus, it suffices to show that E_0 and E_1 happen with sufficiently large probability. 
Using the fact that the Total Variation distance between the n-fold product measures, ((1/2)^n, (1/2 + ϵ_n)^n) ≤ n ϵ_n, we can reason instead about the probability of the events E_0 and E_1 when drawing samples from (1/2), and account for the difference using the Total Variation. Combining this fact with the standard Berry-Esseen bound applied to Bernoulli sums, together with some simple algebraic manipulations, we obtain that for some universal constant C > 0, P(E_0 ∪ E_1) ≥ P(Z < 0) × P(Z < -2√(2)β) - 2 C/√(n) - n ϵ_n. Thus, choosing ϵ_n ≪ 1/n, and β to be a sufficiently small constant, when n is sufficiently large, we obtain that, P(E_0 ∪ E_1) ≥1/8, and thus that, P(∉ C_α(X_1,…,X_n)) ≥ 1/8. § PROOFS FROM SECTION <REF> In this section, we formally verify the claim that the universal set typically includes both the pilot and the projection distribution. We first define the ρ-diameter of the set C as _ρ (C) = sup_P_a, P_b ∈ Cρ(P_a P_b). Let ∈ be any estimate of based on _1. Then, the exact robust universal confidence set C_α,n defined in (<ref>) has diameter at least ρ() with -probability at least 1 - 2α: inf_∈_(_ρ (C_α,n) ≥ρ() ) ≥ 1 - 2 α. Following a similar argument to that in the proof of Theorem <ref>, notice that for any ∈, _ (∉ C_α,n | _1) ≤α. Together with a union bound, we obtain that both and are included in the set C_α,n with -probability at least 1- 2α (conditionally on _1), and on this event, the diameter of the set is at least ρ() in expectation. § PROOFS FROM SECTION <REF> §.§ Proof of Theorem <ref> We first work conditional on the sample 𝒟_1 used to construct the pilot estimate . Let us define M_P,δ,ξ := 𝔼_ [|T_i(P,) + Z_i δ - 𝔼_ T(P,)|^2+ξ | 𝒟_1]. Due to the added Gaussian noise, the variance M_P,δ,0 is always strictly positive (i.e., larger than δ^2). By Minkowski's inequality, conditional on _1, we have M_P,δ,ξ≤[ (_| T_i (P, ) - _ T_i (P, )|^2+ξ | _1)^1/2+ξ + δ( |Z|^2+ξ)^1/2+ξ]^2+ξ. This means that for assumed , there exists a universal constant C_M such that (conditionally on _1) the 2+ξ moment of corrupted statistic T_i (P, ) + δ Z_i is uniformly bounded by C_M for all P∈. Conditionally on _1, the generalized Berry-Esseen bound for the studentized statistic <cit.> yields that, for a universal constant C', sup_t| ℙ_(√(n_0)( T_n_0,δ (P,) - _ T (P, ) )/s_P,δ≥ t  | 𝒟_1) - P(Z ≥ t)| ≤C' M_P,δ,ξ/n_0^ξ/2δ^2+ξ≤ C n_0^-ξ/2, where C = C' C_M δ^-(2+ξ). This holds in particular for ∈𝒫. Consequently, we see that, inf_∈_ (∈_, n) = inf_∈𝔼_ [ _ (∈_, n | 𝒟_1)] ≥ 1 - sup_∈𝔼_ [ _ (∉_, n | 𝒟_1)] ≥ 1 - [α - C n^-ξ /2], as claimed. §.§ Proof of Proposition <ref> Recall that X_i iid∼(ϵ_n) for ϵ_n ≤ (1-α)/n and our hypothesized model is ={(p): p∈{0, 1/2}}. For a fixed β >0, _β((ϵ_n) (p) ) = C + (p^1+β + (1-p)^1+β) - (1 + 1/β) [ϵ_n p^β + (1-ϵ_n) (1-p)^β] where C = ∑_x∈0,1ϵ_n^(1+β)x (1-ϵ_n)^(1+β)(1-x). The DP divergences from to the elements of the working model are _β((ϵ_n) (0) ) ∝ 1 - (1 + 1/β) (1-ϵ_n) = (1 + 1/β) ϵ_n - 1/β _β((ϵ_n) (1/2) ) ∝ - (1/2)^β / β. Therefore, the DP projection is = (0), if ϵ_n ≤ (1 -(1/2)^β) / (1 + β), (1/2), otherwise. Since ϵ_n < (1-α)/n, the projection will be (0) for any β >0, provided that n ≥ (1-α) (1 + β) / (1- (1/2)^β). Now we turn our attention to constructing the DP set. For any fixed (P,Q) ∈^2, the split statistic is uniformly bounded, i.e., |T_i (P,Q) - _ T (P,Q)| ≤ 1 + 1/β since T_i (P, Q) = ∑_x∈{0,1}[ (p^x (1-p)^1-x)^1+β - (q^x (1-q)^x)^1+β] - ( 1+1/β) [ (p^X_i (1-p)^1-X_i)^β - q^X_i (1-q)^1-X_i] (X_i). 
By Hoeffding's inequality, _,1+1/β,n ensures nominal coverage for any estimator , since we have that: _(∉_,1+1/β,n) = _(_( T_n_0 (,) > β + 1/β√(log(1/α)/2 n_0) | _1 )) ≤α. §.§ Proof of Proposition <ref> Note that Hellinger projection is (0) for n>6 (as long as ϵ_n < 0.146) since ^2((ϵ_n), (0)) = 1 - √(1 - ϵ_n), ^2((ϵ_n), (1/2)) = 1 - √(ϵ_n / 2) - √((1-ϵ_n) / 2). Similarly, ν-approximate Hellinger projection is (0) if ϵ_n < 0.051 or otherwise. Hereafter we only consider n > 20 where _ν =. The minimum Hellinger Distance Estimator (MHDE) is _p∈{0, 1/2}^2 (_n_1, (p)) = _p∈{0, 1/2}√(p X_n_1) + √((1-p) (1 - X_n_1)) where X_n_1 = ∑_i∈_1 X_i / n_1. Thus, = (0), X_n_1 < 0.5-1/(2√(2)) ≈ 0.146 (1/2), Otherwise. This implies that the advertised coverage is guaranteed when X_n_1 < 0.146. Otherwise, Corollary <ref> ensures the asymptotic (approximate) validity. §.§ Proof of Proposition <ref> The proof follows directly by the triangle inequality. 2 _ T (P_0, P_1) = _P_0 f^*_(P_0, P_1) + _P_1 f^*_(P_0, P_1) - 2 _ f^*_(P_0, P_1) = 2 [ _ f^*_(P_0, P_1) - _ f^*_(P_0, P_1)] - _P f^*_(P_0, P_1) - _P_1 f^*_(P_0, P_1) = 2 [ _ f^*_(P_1, P_0) - _P f^*_(P_1, P_0)] - _ (P_0, P_1) ≤ 2 _ (, P_0) - _ (P_0, P_1) ≤ 2 _ (, P_0) - [_(, P_1) - _(, P_0)] (by the triangle inequality) = 3 _(, P_0) - _(, P_1) § PROOFS FROM SECTION <REF> §.§ Proof of Theorem <ref> Recall that the exact robust universal confidence set based on the Hoeffding bound is _ρ,σ,n = { P∈ : T_n_0 (P,) ≤ B √(log (1/α)/2 n_0)}. We denote t_α,n := B √(log (1/α)/2 n_0) throughout the proof, and use C to denote _ρ,B,n. Throughout the proof, we fix a projection distribution and assume an equal split between _0 and _1. Denote δ_ν (P, Q) = ρ( P) - νρ( Q) for any P, Q ∈. We want to show that, for fixed κ > 0, for some finite M > 0, _( sup_P ∈δ_ν(P, ) > M (η_n + ϵ̃_n) ) ≤κ, where ϵ̃_n = ℜ_n(_T,𝒫) ∨ t_α,n and for all n large enough. Let the event E be δ_1 (, ) ≤ (M/ν) η_n which happens with probability at least 1-κ/2. Then, _( sup_P ∈δ_ν(P, ) > M (η_n + ϵ̃_n) ) ≤_( sup_P ∈δ_ν(P, ) > M (η_n + ϵ̃_n)  |  E ) + κ/2 = _( sup_P ∈δ_ν(P, ) > M (η_n + ϵ̃_n) - νδ_1(, )  |  E ) + κ/2 ≤_( sup_P ∈δ_ν(P, ) > M ϵ̃_n  |  E ) + κ/2. Thus, it suffices to show that conditional on E, with -probability at most κ/2, all P∈ such that δ_ν(P, ) > M ϵ̃_n are not included in . Hereafter we condition on event E. Let _ϵ := {P ∈ : δ_ν(P, ) > ϵ}. From Assumption <ref>, we have that _( ∀ P∈_ϵ, P ∈_ρ,B,n | _1 ) = _( sup_P∈_ϵT_n_0(, P) ≥ - t_α,n | _1 ), ≤_( sup_P∈_ϵ[T_n_0(, P) - _ T(, P)] ≥ϵ - t_α,n | _1 ). where the inequality is from noticing that conditional on _1, sup_P∈_ϵ [- _ T(, P)] ≥sup_P∈_ϵδ_ν(P, ) ≥ϵ by Assumption <ref>. To ease the notation, denote the centered statistic as T_P := T_n_0(, P) - _ T(, P). Since |T(,P)| ≤ B, any change in X_i can change sup_P∈_ϵT_P at most 2B/ n_0. By McDiarmid's inequality, we have that _(sup_P∈_ϵT_P ≥ϵ - t_α,n | _1) ≤exp( - n(ϵ - t_α,n - _[sup_P∈_ϵT_P] )^2/2 B^2). Now we focus on bounding _ [sup_P∈_ϵ |T_P|] (which is greater than _ [sup_P∈_ϵT_P ]). Let _T,𝒫 = {T(·; , P) : P ∈}. The symmetrization lemma <cit.> states that _Xsup_f∈_T,𝒫1/n_0| ∑_i=1^n_0[f (X_i) - _ f(X_i)] | ≤ 2 _X,εsup_f∈_T,𝒫|1/n_0∑_i=1^n_0 R_i f(X_i)| := 2 _n_0 (_T,𝒫) where R_i are iid Rademacher random variables. § FINITE-SAMPLE VALID CONFIDENCE SET FOR BOUNDED TEST STATISTIC Suppose the split statistics are uniformly bounded, i.e., |T_i (P)| ≤ B for all i. Classic Cramér-Chernoff bounds yield finite-sample valid exact (ν = 1) or approximate (ν > 1) confidence sets. 
_HF is a uniformly valid 1-α exact (ν = 1) or approximate (ν > 1) confidence set for where _HF = {P ∈ : T_n_0 (P) ≤√(B^2/2n_0log(1/α))}. Typically, Hoeffding's bound does not scale with the variance which results in a conservative confidence set. Confidence set based on Bernstein's inequality is given as follows. _BS is a uniformly valid 1-α exact (ν = 1) or approximate (ν > 1) confidence set for where _BS = { P∈ : T_n_0(P) ≤√(2 S^2 log (1/α)/n_0 + B^2/9( log (1/α)/n_0)^2) + B log (1/α)/3 n_0} where S^2 = S^2(P) = (c_1 ν)^2 [ρ ( P) + ρ ()]. However, _BS above requires knowledge of to compute S. Empirical Bernstein bounds <cit.> address this issue. Denote T̃_i (P,Q) = (T (X_i; P, Q)+ B) / (2B). _EBS is a valid 1-α confidence set for exact (ν = 1) or approximate (ν > 1) projection where _EBS = { P∈ : ∑_i=1^n_0λ_i T̃_i (P, )≤log(1/α) + ∑_i=1^n_0 v_i ψ_E (λ_i) }, v_i = (T̃_i (P, ) - T̃_i - 1 (P, ))^2, ψ_E(λ) = - (log(1-λ) - λ), and λ_i = √(2log(1/α)/n_0 Ŝ_i - 1^2)∧ c, Ŝ_i^2 = 1/4 + ∑_l=1^i (T̃_l - T̃_l)^2/i + 1, T̃_i = 1/i+1∑_l=1^i T̃_l, for some c ∈ (0,1). When the variance or an upper bound of the variance is known, Bentkus's bound <cit.> is sharper than any Cramér-Chernoff type bounds. See <cit.> for details. Define a Bernoulli random variable G = G(S^2, B) as ( G = B ) = S^2/S^2 + B^2 := p_SB, ( G = - S^2/B) = 1 - p_SB _BK is a valid 1-α confidence set for is a valid 1-α confidence set for exact (ν = 1) or approximate (ν > 1) projection where _BK = { P∈ : T_n_0(P,) ≤ q(α) } where q(α) is the solution to P_2 ( u; ∑_i∈_0 G_i ) := inf_t ≤ u_( ∑_i∈_0 G_i - t )_+^2/(u -t )_+^2 = α, and S^2 = S^2(P) = (c_1 ν)^2 [ρ ( P) + ρ ()]. As in the case of Bernstein's bound (<ref>), Bentkus's bound (<ref>) requires prior knowledge of to compute the variance S. The empirical Bentkus's bound <cit.> addresses this by taking the union bound on variance over-estimation and the Bentkus's inequality. Following <cit.> define the over-estimator of S as, for δ∈[0,1], S_n (δ) = √(S_n_0^2 + g_2,n_0 (δ)) + g_2,n_0(δ), S_n^2 = 1/⌊ n / 2 ⌋∑_i=1^⌊ n / 2 ⌋(T_2i - T_2i-1)^2/2, where g_2,n(δ) := B (√(2) n)^-1√(⌊ n / 2 ⌋)Φ^-1 (1- 2 δ / e^2) and Φ is the cdf of a standard Gaussian. _EBK is a valid 1-α confidence set for is a valid 1-α confidence set for exact (ν = 1) or approximate (ν > 1) projection where for some δ∈[0,1], _EBK = { P∈ : T_n_0(P,) ≤ q(α - δ) } where q(α - δ) is the solution to P_2 ( u; ∑_i∈_0 G_i ( S^2_*(δ), B ) ) = α - δ. with S_* (δ) := min_1 ≤ i ≤ n_0S_i (δ). In Section <ref>, we choose δ = α/3 to construct the empirical Bentkus's bound-based TV set. § CROSSFIT SET Despite universal inference holds for any , let us assume we choose such that sup_P ∈T(·; P, P_1) - T(·; P, )_L_2() = o(1). For any fixed P ∈, consider the following decomposition: _n_0 T(·; P, P_1) - _ T(·; P, ) = (_n_0 - _) [T(·; P, P_1) - T(·; P, )] + _[T(·; P, P_1) - T(·; P, )] + (_n_0 - _) T(·; P, ). The first term is the empirical process which is o_ (1/√(n_0)) applying Lemma 2 of <cit.>. The second part is the bias which is o(1) from our choice of . The last term yields the CLT. Now let T_n_1 (P; P_0) := ∑_i∈_1 T(X_i; P, P_0) /n_1 where we change the role of _0 and _1. Define a cross-fitting estimator as T_n^× (P) = n_1 T_n_1 (P; P_0) + n_0 T_n_0 (P; P_1)/n. The n (T^× (P) - _ T(·; P, )) has the following decomposition: n_0 (_n_0 - _) [T(·; P, P_1) - T(·; P, )] + n_1 (_n_1 - _) [T(·; P, P_0) - T(·; P, )] + n_0 _ [T(·; P, P_1) - T(·; P, )] + n_1 _ [T(·; P, P_0)- T(·; P, )] + n (_n - _) T(·; P, ). 
Similarly, both empirical process terms in the first line are o_ (1/√(n)), and bias terms in the second line are o (1). Thus, we left with the same CLT term. The decomposition implies that as long as one chooses a “good” candidate estimator, cross-fit estimator also provides an asymptotically (uniformly) valid inference on . Construct a cross-fit ρ-set as follows: C^×_ρ,α, n = {P∈ : T^× (P) ≤ z_αŝ_P^×/√(n)}. where ŝ_P^× 2 = [_ (T (X; P, P_1)) + _ (T (X; P, P_0) )] / 2 is a consistent estimator of _ (T(X; P, )). § ADDITIONAL RESULTS AND TECHNICAL DETAILS ON NUMERICAL STUDIES §.§ Computational detail We adopt a heuristic search method for finding a confidence set in multivariate parameter space. For brevity, we explain the procedure in 2 dimensions, but the procedure can be straightforwardly extended to higher dimensions. From the observation that when _1 is close to θ̃, i.e., ρ( P__1) ≤νρ() as seen in the proof of Theorem <ref>, T_n_0 (_1) = 0 for split statistics that satisfies Assumption <ref> and <ref>. Therefore, we construct a star-convex confidence set that always includes _1. We construct the rays originate from _1, i.e., R_ω = {θ∈Θ : r_ω^⊤ (θ - _1) = 0, r ≥ 0 } where r_ω = (r sinω, - r cosω) for angle ω∈ [- π, π]. For each ω, we find a root of an evidence function (θ) = T_n_0 (θ) - t_α (θ) using Brent's method <cit.> on R_ω constrained with radius r varying from 0 (corresponding θ=_1) to some r_0 > 0 such that the corresponding θ_0 satisfies (θ_0) > 0. §.§ Gaussian contamination - Location family Consider a Gaussian location family = {(θ, 1) : θ∈} where the variance is fixed to that of uncontaminated distributions. Figure <ref> shows the projection parameters along with those of contaminated and uncontaminated distributions. The mean of contaminated distribution and that of uncontaminated distributions are the same for Cases 1 and 3 but not for Case 2. This leads to the interesting observation that forward KL projection is the closest to the uncontaminated distribution in Case 3 unlike location-scale family in Figure <ref>, Section <ref>. Figure <ref> summarizes the performance of confidence sets targeting the forward KL or DP projection over 1000 replications. Clearly, split LRT fails to attain the nominal coverage even for a large enough sample size. All other sets achieve the nominal coverage for moderate to large sample size. _ are shorter than _ and even than the invalid split LRT set for Cases 2 and 3.
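For completeness, the star-convex boundary search used for the two-dimensional sets in this appendix can be sketched as follows. The ray parameterization, the bracketing loop and the tolerances are our own choices; evidence stands for the evidence function, the split statistic minus its threshold, which is non-positive at the pilot estimate.

import numpy as np
from scipy.optimize import brentq

def boundary_points(evidence, theta1, n_rays=64, r_max=10.0):
    pts = []
    for w in np.linspace(-np.pi, np.pi, n_rays, endpoint=False):
        d = np.array([np.cos(w), np.sin(w)])
        f = lambda r, d=d: evidence(theta1 + r * d)
        r_hi = 1e-3
        while f(r_hi) <= 0 and r_hi < r_max:   # expand the bracket until outside the set
            r_hi *= 2.0
        if f(r_hi) > 0:                        # sign change bracketed in (0, r_hi)
            pts.append(theta1 + brentq(f, 0.0, r_hi) * d)
    return np.array(pts)
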
http://arxiv.org/abs/2307.06019v1
20230712090150
Higgs amplitude mode in ballistic superconducting hybrid junctions
[ "Pierre Vallet", "Jérôme Cayssol" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
[email protected] Université de Bordeaux, Laboratoire Ondes et Matière d'Aquitaine, 351 cours de la Libération, 33405 Talence, France [email protected] Université de Bordeaux, Laboratoire Ondes et Matière d'Aquitaine, 351 cours de la Libération, 33405 Talence, France In superconductors (SC), the Higgs amplitude mode is a coherent oscillation of the order parameter typically generated by THz laser irradiation. In this paper we propose to probe the Higgs mode using electronic transport in ballistic superconducting hybrid devices. We first confirm the existence of a non-zero amplitude mode in the clean case using the Keldysh-Eilenberger formalism. We then investigate two different device geometries, respectively a normal-insulating-superconductor (NIS) tunnel junction and a NSN junction with two transparent interfaces, the superconductor being irradiated in both situations. In the NIS case, the Higgs manifests itself in the second-order AC current response which is resonant at the Higgs frequency. In the NSN case, the DC differential conductance allows to probe the gaps dynamically generated by the Higgs mode in the Floquet spectrum. Higgs amplitude mode in ballistic superconducting hybrid junctions J. Cayssol August 12, 2023 =================================================================== § INTRODUCTION Superconductivity is characterized by a spontaneous gauge symmetry breaking from the U(1) group to its ℤ_2 subgroup <cit.>. This leads to the appearance of a massive collective mode, corresponding to the coherent oscillation of the order parameter, the superconducting (SC) gap Δ(t) <cit.>. In SCs, the (Higgs) amplitude mode lies at energy 2 Δ which corresponds to few meV, but surprisingly it was experimentally observed only in 2013 <cit.>. The reason for this late experimental evidence is that the amplitude Higgs mode is a scalar mode with no charge, and therefore no direct linear coupling to electromagnetic probes. Detecting the Higgs mode in SCs requires nonlinear coupling between light and matter only available with strong laser fields. It is the development of THz lasers during the last decade that allowed the detection of the Higgs mode. Today, the Higgs mode has been detected in high-T_c SCs in pump-probe experiments through the measurement of third harmonic generation (THG) <cit.>. Note that the presence of the Higgs mode was reported earlier through Raman spectroscopy, but in SCs showing coexistence between charge density wave order and superconductivity <cit.>. A great deal of effort has gone into understanding the role of impurities in Higgs mode excitation. It is commonly believed that, in a clean system, the Higgs mode has a negligible effect on the optical response compared to the quasiparticles (QPs) excitation (Charge Density Fluctuation) <cit.>. Using path integral formalism, Cea et al. <cit.> found that THz light cannot excite the Higgs mode due to particle-hole symmetry and that THG originates only from charge density fluctuations (CDF). However, the measurement of the THG in a NbN superconducting crystal <cit.> exhibited a strongly isotropic response (as expected for the Higgs mode), contradicting the CDF hypothesis, the latter being anisotropic. To explain this experiment, several scenarios have been put forward. Phonon mediated interactions have been proposed to explain the strong response due to the Higgs mode <cit.>. It has been shown that impurities can drastically modify the excitation of the Higgs mode <cit.>. 
Silaev, using the Eilenberger formalism, found that impurities are necessary to excite the Higgs mode with light <cit.>. Nevertheless, Yang et al., using a gauge-invariant formalism, came to the opposite conclusion, namely that a finite Higgs mode could be generated even in the ideally clean case, in accordance with the Ginzburg-Landau equations <cit.>. Vanishing of the CDF has also been demonstrated <cit.>, in agreement with the experiment <cit.>. Most of the experimental and theoretical studies were related to the all-optical way to detect the Higgs amplitude mode in SC, typically as a THG <cit.>. Recently, a completely different route has been proposed to detect the Higgs amplitude mode, which consists in using electronic transport measurements in hybrid superconducting devices. For instance, a tunnel interface (NIS junction) between a normal metal and a dirty SC has been studied using Usadel quasiclassical equations <cit.>. In such a DC-biased NIS junction, the presence of the Higgs mode is revealed as a second harmonic in the AC current flowing through the tunnel interface. Due to progress of nanofabrication processes, NIS devices can also be build with ballistic normal parts and clean superconductors separated by interfaces ranging from tunnel to transparent ones. In this paper, we study clean normal-superconducting hybrid junctions. Two geometries are considered, a NIS tunnel interface and a NSN junction with highly transparent interfaces. For the DC-biased NIS junction, the signature of the Higgs mode is seen in the second harmonic in the AC current, as in the dirty case. For the NSN transparent 1D junction, the DC differential conductance provides a spectroscopy of the Floquet gaps which are dynamically induced by the Higgs amplitude mode. The paper is organised as follows. In Sec. <ref> we study the conductance of a DC-biased tunnel NIS junction when the SC amplitude mode is pumped by THz light. We first solve the Eilenberger equations for an irradiated clean SC (<ref>) and demonstrate that the Higgs mode can be excited even in the absence of disorder (<ref>). Then we compute the second harmonic of the current flowing through the clean NIS junction (<ref>). In Sec. <ref> we investigate the NSN ballistic junction with transparent interfaces. We solve the transport equations for this junction and obtain a DC differential conductance revealing the presence of Floquet gaps. § NIS JUNCTION So far the Higgs mode has been mainly studied in bulk superconductors using optical probes <cit.>. Here we consider a NIS junction between a ballistic normal metal (N) and a clean superconductor (SC) connected by a thin insulating (I) tunnel junction. The amplitude mode of the SC is coupled to the electronic current passing through the interface and could be detected in transport experiments. §.§ Model The superconducting region is coupled to THz light with vector potential (t) = _0 e^-iω t. The normal metal part is not irradiated but is connected to the SC by a tunnel interface. In the N region, the electrons are assumed to have a parabolic dispersion ξ()=^2/2m -μ, where μ is the chemical potential. The relevant momenta are close to the Fermi momentum _F and the dispersion can be linearized as ξ()= _F · ( - _F) where _F = _F/m. A static bias potential V is applied to the N region with respect to the grounded SC. Our model is closely related to the NIS junction studied in the dirty case by <cit.>, the main difference being that we treat the clean limit, for both N and SC. 
To describe the dynamics within the whole NIS structure, we use the quasi-classical (QC) limit of the Eliashberg equations <cit.>, the so-called Eilenberger equations, which are valid when Δ / μ≪ 1. The Keldysh formalism with closed time contour addresses the out-of-equilibrium dynamics of the problem. We therefore introduce the Green functions in Nambu-Keldysh space <cit.> ǧ = [ ĝ^r ĝ^k; 0 ĝ^a ] , each ĝ^i being a 2× 2 matrix in electron/hole Nambu space and where the superscript i=r, a, k stands respectively for retarded, advanced and Keldysh (or kinetic) component. In the SC region, the Eilenberger equation reads <cit.> i {τ̌_3 ∂_t , ǧ} + i Δ(t)τ̌ _2,ǧ + e·_F τ̌_3 , ǧ = 0 , where the Pauli matrices τ_i are embedded in the following 4x4 matrices τ̌_̌ǐ = [ τ_i 0; 0 τ_i ] , the first-order time-derivative operator acts as {τ̌_3 ∂_t,ǧ} = τ̌_3 ∂_t ǧ (t,t') + ∂_t'ǧ (t,t')τ̌_3 , and finally the commutators have to be understood as [𝒪, ǧ] = 𝒪(t) ǧ (t,t') - ǧ (t,t') 𝒪(t') regarding the time arguments. The Higgs mode is non-linearly coupled to the vector potential . The leading nonlinear coupling is a second order one <cit.> with amplitude Δ_2 and pulsation 2 ω, so that the total time-dependent order parameter reads Δ(t) = Δ_0 + Δ_2 e^-2iω t . In the normal metal, the Eilenberger equation reduces to i {τ̌_3 ∂_t , ǧ} = 0 , whose solution in Fourier space simply reads for the retarded and advanced components g^r_n(ϵ) = - g^a_n(ϵ) = τ_3 . The Keldysh component in the N region, g^k_n = tanhβϵ_- /2 - tanhβϵ_+ /21 + tanhβϵ_- /2 + tanhβϵ_+ /2τ_3 , is related to the quasiparticle populations and contains the electrical potential V via the shifted energies ϵ_± = ϵ± eV. The quasi-classical approximation neglects the physics at distances smaller than the superconducting coherence length, and therefore the Eilenberger equation cannot be used directly to describe the interface. Nonetheless, using microscopic Gorkov Green functions, proper boundary conditions have been established by Zaitsev for the Eilenberger Green functions <cit.>. For a tunnel junction between ballistic normal and superconducting electrodes, the Zaitsev boundary conditions <cit.> can be expressed in the following simple form <cit.> ǧ_+ - ǧ_-/2 = ǧ_n,ǧ_s , where ǧ_± is the GF for the right (left) movers, ǧ_s (resp. ǧ_n) being the GF in the SC region (resp. N region). The electric current can be obtain from the kinetic function <cit.> as I = G_t/16e∫ d ϵ⟨τ_3 ǧ_n,ǧ_s^k⟩__F , where G_t is the tunnel conductance of the junction when the SC lead is in normal state. We denote ⟨…⟩ __F = ∫ dΩ_F / 4π (…) the angular average over the Fermi surface. We can write the current up to the second order as the real part of I(t) = I_0 + I_2 e^-2iω t. The second-order current can be written as a sum of two contributions I_2 = I_V + I_H where I_V is the current due to the second-order coupling to the vector potential only while I_H is the current directly associated to the excited Higgs mode (see Appendix <ref>). §.§ Second order perturbative solution Solving (<ref>) for of an arbitrary shape of the time-dependent potential (t) is difficult. Hence, we perform a perturbative analysis with respect to the THz field amplitude, the small parameter being A_F = e_0 ·_F. Note that |A_F| is a typical energy scale and the electromagnetic driven strength is given by the parameter |A_F|/ω. Within a quasi-classical interpretation, the coupling energy |A_F| corresponds to the energy gained by an electron at velocity v_F in a electric field ω_0 during a time 1/ω. 
The GF can be expressed as a sum of functions scaling as different powers of A_F as ǧ(t,t') = ǧ_0(t,t') + ǧ_1(t,t') + ǧ_2(t,t') , ǧ_i(t,t') being proportionnal to A^i_F. In order to solve the Eilenberger equation (<ref>) in the ϵ-space, we define the Fourier transforms ǧ_0(t,t') = ∫dϵ/2πǧ_0 (ϵ) e^-i(t'-t)ϵ , ǧ_1(t,t') = ∫dϵ/2πǧ_1 (ϵ) e^-it'ϵe^itϵ_1 , ǧ_2(t,t') = ∫dϵ/2πǧ_2 (ϵ) e^-it'ϵe^itϵ_2 , with ϵ_n = ϵ + nω. In the absence of irradiation, the zero-th order retarded (advanced) GF is found to be equal to <cit.> g^α_0(ϵ) = ϵτ_3 + iΔ_0 τ_2/s^α(ϵ), with α = r,a, where s^r(ϵ) = i √(Δ_0^2 - ϵ + iγ^2) and s^a(ϵ) = i √(Δ_0^2 - ϵ - iγ^2) with a branch-cut in the negative real line for the square root. The parameter γ is a small positive energy necessary to impose the proper boundary condition in the ξ-integration. It can also be interpreted as a Dynes parameter <cit.>, i.e. a small phenomenological constant which describes depairing effects in the SC, induces a broadening in the optical response functions, thereby preventing an infinite resonance of the Higgs mode. The first-order contribution to the retarded and advanced Green function reads (see Appendix <ref>): ĝ_1^α(ϵ) = A_F τ_3 - ĝ_0^α(ϵ_1) τ_3 ĝ_0^α(ϵ) /s^α(ϵ_1) + s^α(ϵ). The second-order contribution to the Green function is g^α_2 = g^α_V + g^α_H where ĝ_V^α(ϵ) = A_F^2/s^α_3(ϵ)[Σ^α(ϵ)ĝ^α_0(ϵ_2)ĝ̅^α_0(ϵ_1) ĝ^α_0(ϵ) . . - ξ_2 - ξ_1 - ξ] , ĝ_H^α(ϵ) = i Δ_2/s^α(ϵ_2) + s^α(ϵ)τ_2 - ĝ^α_0(ϵ_2) τ_2 ĝ^α_0(ϵ), with 𝒪 = τ_3 𝒪τ_3, ξ_i = ϵ_i τ_3 + iΔ_0 τ_2 and s^α_3(ϵ) =s^α(ϵ_2) + s^α(ϵ_1)s^α(ϵ_2) + s^α(ϵ)s^α(ϵ_1) + s^α(ϵ), Σ^α(ϵ) = s^α(ϵ) + s^α(ϵ_1) + s^α(ϵ_2). Now let us consider the Keldysh components of the Green functions which describe the non-equilibrium quasiparticle populations. In the absence of irradiation, namely at the zeroth-order in A_F, the stationary Keldysh GF is simply the equilibrium one <cit.> ĝ^k_0(ϵ) = ĝ^r_0 - ĝ^a_0tanhβϵ/2, with β = 1/k_B T. For all orders we define ĝ^k_i = ĝ^reg_i + ĝ^an_i, where ĝ_i^reg = g^r_i (ϵ) tanhβϵ/2- tanhβϵ_i/2 g^a_i (ϵ). The derivation for the other orders can be found in Appendix <ref>. The first-order contribution reads ĝ_1^an(ϵ) = A_Ftanhβϵ_1 / 2 - tanhβϵ / 2/s^r(ϵ_1) + s^a(ϵ) ×τ_3 - ĝ_0^r(ϵ_1)τ_3ĝ_0^a(ϵ) . The second-order Keldysh component is the sum of two terms, ĝ_2^an(ϵ) = ĝ_V^an(ϵ) + ĝ_H^an(ϵ), respectively given by : ĝ_V^an(ϵ) = A_F^2 tanhβϵ_2 / 2 - tanhβϵ_1 / 2/s^r(ϵ_2) + s^a(ϵ_1)s^r(ϵ_2) + s^a(ϵ)s^a(ϵ_1) + s^a(ϵ) ×[s^a(ϵ)+s^a(ϵ_1)+s^r(ϵ_2)ĝ^r_0(ϵ_2)ĝ̅^a_0(ϵ_1) ĝ^a_0(ϵ) - ξ_2 - ξ_1 - ξ] + A_F^2 tanhβϵ_1 / 2 - tanhβϵ / 2/s^r(ϵ_2) + s^r(ϵ_1)s^r(ϵ_2) + s^a(ϵ)s^r(ϵ_1) + s^a(ϵ) ×[s^a(ϵ)+s^r(ϵ_1)+s^r(ϵ_2)ĝ^r_0(ϵ_2)ĝ̅^r_0(ϵ_1) ĝ^a_0(ϵ) - ξ_2 - ξ_1 - ξ] ĝ_H^an(ϵ) = i Δ_2 tanhβϵ_2 / 2 - tanhβϵ / 2/s^r(ϵ_2) + s^a(ϵ)τ_2 - ĝ^r_0(ϵ_2) τ_2 ĝ^a_0(ϵ). Finally, the Higgs mode amplitude Δ_2 is calculated self-consistently from the relation Δ(t) = -iλπ/4∫_-ω_D^ω_Ddϵ/2π⟨τ_2 g^k(ϵ) ⟩__F, ω_D being the Debye cut-off. The equilibrium gap Δ_0 depends on the temperature and we use the well-known BCS interpolation formula Δ_0 (T) = Δ_0,0tanh1.74 √(T/T_c - 1) , where T_c is the critical temperature, Δ_0,0≡Δ_0 (T=0). §.§ Higgs mode A theoretical discussion is currently addressing the possibility to excite the Higgs mode in an ideally clean BCS superconductor. Using different formalisms in the clean regime, some works claimed that the Higgs mode can not be excited using optical techniques <cit.> while others obtained a finite Higgs mode response <cit.>. 
Using Keldysh real-time formalism to solve the corresponding Eilenberger equations (see previous section), we obtain a non-zero Higgs mode for all frequencies ω and observe a resonance at ω = Δ (Fig. <ref>), as expected from previous theoretical <cit.> results. Nevertheless, we also obtain that the Higgs mode amplitude is in principle smaller (but non zero) in clean SC than in dirty SC. We discuss now the differences between the clean and dirty cases, emphazising the crucial role of the anomalous contributions. First, there is a major difference between the typical energy involved in the excitations induced by irradiation : A_F^c = e |_0| |_F| in the clean case and A_F^d = D ħe_0/ħ^2 in the dirty case, with D the Usadel diffusion constant measuring the amount of disorder. We can write the amplitude in term of a regular and anomalous function B^reg = B^r - B^a and B^an, see Eq. (<ref>). An interesting difference in behavior appears at this level. In the dirty case, the regular and anomalous terms share the same sign and both constructively contribute to the Higgs mode amplitude. On the contrary, in a clean SC, those two terms have different signs and, being of same order, almost compensate each others. This sign difference explains qualitatively why the dirty case can in principle induce a stronger Higgs response from optical excitation. Still, a non-zero Higgs mode is excited in clean SC. Typically the diffusion coefficient D∼ 1 m^2.s^-1. For a Higgs mode of same amplitude in the clean and dirty case, we find that the vectors potential amplitude ratio |^d_0|/|^c_0| ∼ 0.1, such that a less intense pulse in the dirty case can create a response of the same intensity as the clean case with a stronger pulse. Note that in a recent preprint, Yang and Wu <cit.> solved partially the Eilenberger equations in the clean case. Yet, they neglected the anomalous GFs contributions to the Higgs mode (see Appendix <ref>). §.§ Second-harmonic Current We now discuss the transport properties of the NIS junction and focus on the AC current at pulsation 2ω. The amplitude I_2 of this current is computed from the Green functions Eqs. (<ref>), (<ref>) using Eq. (<ref>). We have splitted the second-order Green functions in contributions proportional to A_F^2 and Δ_2 respectively. This results in two contributions in the current : i) a current I_V directly induced by the nonlinear coupling with the electromagnetic field and ii) a Higgs current I_H directly proportional to the Higgs amplitude Δ_2, the formula being given in the Annex as Eq. (<ref>). Since the Higgs amplitude Δ_2 is resonant at ω=Δ_0(T), the Higgs current inherits this resonant behavior, while I_V is not resonant. Qualitatively the results are quite similar to the dirty system of <cit.>. Nonetheless interesting differences between the clean and dirty can still be seen studying the second order ac current (<ref>). In the dirty case, the current I_V is much stronger than the Higgs current I_H at the resonance. We see from Fig. <ref> that in the clean case the two are of the same order, I_H being still higher than I_V. Outside the resonance I_V rapidly takes over the Higgs contribution. Qualitatively the two cases are still very similar. As in the dirty case, the current is resonant at ω = Δ_0 and it is a clear signature of the Higgs mode. More quantitatively, in the dirty case, the Higgs mode current is of the order of ∼ 30 G_t A_F^d /e at resonance, against ∼ 0.08 G_t A^c_F /e in the clean case. 
Knowing that, for the same Higgs amplitude, A_F^d is around ten times smaller than A_F^c, we estimate that for an equal Higgs mode amplitude the current in the dirty case is I_H^d ∼ 10 I_H^c, with I_H^c the current in the clean system. At resonance, the current grows quasi-linearly until the bias eV = Δ_0. At this point the current starts to decrease with increasing V. This is due to the fact that the Higgs mode is a coherent pairing/depairing of Cooper pairs at frequency 2Δ_0, such that the ac current is maximum for a DC bias at the SC band edge, i.e. for eV = Δ_0 <cit.>. As in the dirty case, I_V presents several peaks at frequencies Δ_0 + nω, which are signatures of photon-assisted transport. § NSN JUNCTION In this section we study a ballistic transport problem in a clean Normal-metal–Superconductor–Normal-metal (NSN) junction, where the central grounded SC part is irradiated. A signature of the Higgs mode is found in the DC differential conductance of the system. §.§ Model Here we propose a model consisting of a Normal-metal–Irradiated-SC–Normal-metal junction. The SC has a finite length L. The THz light is characterized by the real vector potential A(t) = A_0 (e^iω t + e^-iω t). The junction is purely ballistic and we use the Bogoliubov-de Gennes (BdG) equation i d/dt[ u; v ] = ℋ(t) [ u; v ], where u and v are respectively the electron and hole amplitudes. The BdG Hamiltonian reads ℋ = [ H_0 - μ Δ(t); Δ^*(t) μ - 𝕋H_0𝕋^-1 ], where Δ(t) = Δ_0 + Δ_2 e^-i2ω t inside the SC and Δ(t) = 0 in the normal electrodes, H_0 = ( p + e A(t))^2/2m, and 𝕋 is the time-reversal operator. Since Δ_0/E_F ≪ 1, we can use the quasi-classical limit of this equation <cit.> i du/dt = v_F · (-i∂ + e A(t)) u - μ u + Δ(t) v, i dv/dt = - v_F · (-i∂ - e A(t)) v + μ v + Δ^*(t) u. In this approximation we completely neglect the effects of reflected electrons and crossed Andreev reflections, which is expected to be accurate in the case of fully transparent junctions <cit.>. As the BdG Hamiltonian is periodic in time, ℋ(t) = ℋ(t+T) with period T = 2π/ω, we use the Floquet formalism <cit.>. In the same way that Bloch's theorem allows the eigenstates of a spatially periodic Hamiltonian to be labeled by a quasi-momentum, each state of a time-periodic Hamiltonian carries an associated quasi-energy, i.e. Ψ_ϵ(k,t) = e^-iϵ tΦ(k,t), where Φ(k,t+T) = Φ(k,t). Defining the Floquet-BdG Hamiltonian as ℋ_F(t) = ℋ(t) - i d/dt, we find a pseudo-stationary Schrödinger equation for Φ(k,t), ℋ_F Φ(k,t) = ϵ Φ(k,t). At this point it is useful to introduce the following Fourier expansions ℋ(t) = ∑_n ∈ℤ H_n e^-inω t, Φ(k,t) = ∑_n ∈ℤ Φ_n(k) e^-inω t. We then have to solve an infinite number of time-independent equations for the Fourier coefficients, ∑_m ∈ℤ (H_n-m - mω δ_m,n) Φ_m = ε Φ_n, with H_n = [(v_F k - μ) τ_3 + Δ_0 τ_1] δ_n,0 + A_F 1 (δ_n,1 + δ_n,-1) + Δ_2 τ_+ δ_n,2 + Δ_2 τ_- δ_n,-2, where τ_± = τ_1 ± i τ_2. In practice, to solve the equations, we choose a cut-off N_c in the number of Floquet replicas. §.§ Floquet spectrum The Floquet energy spectrum of the SC region presents various intercrossing bands (Fig. <ref>). The bands exhibit gaps at different energies: the superconducting gap at ϵ = 0 and another one at ϵ = Δ_0. This type of gap induced by THz excitation has already been discussed in the context of irradiated graphene <cit.>. The gaps present a rich structure depending on the electromagnetic field strength. Here we find similar results for the quasi-classical limit of the BdG equation. An important observation is that a gap opens at ϵ = Δ_0 only in the presence of the Higgs mode.
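To make the truncation procedure explicit, here is a minimal self-contained sketch (ours, not from the paper; the parameter values are arbitrary) that assembles the truncated Floquet-BdG matrix ∑_m (H_n-m - mω δ_m,n) Φ_m = ε Φ_n from the harmonics H_n quoted above, with a cut-off N_c, and diagonalises it; comparing the spectra obtained with Δ_2 ≠ 0 and Δ_2 = 0 lets one inspect the gap near ϵ = Δ_0 discussed in the text:

import numpy as np

# Pauli matrices in Nambu (electron-hole) space
tau1 = np.array([[0, 1], [1, 0]], dtype=complex)
tau2 = np.array([[0, -1j], [1j, 0]])
tau3 = np.array([[1, 0], [0, -1]], dtype=complex)
tau_p, tau_m = tau1 + 1j*tau2, tau1 - 1j*tau2   # tau_+- = tau_1 +- i tau_2

def floquet_bdg(k, omega, Delta0, Delta2, A_F, vF=1.0, Nc=8):
    """Truncated Floquet-BdG matrix; k stands for k - k_F (mu already subtracted).
    Harmonics as quoted in the text: H_0 = vF*k*tau3 + Delta0*tau1, H_{+-1} = A_F*1,
    H_{+2} = Delta2*tau_+, H_{-2} = Delta2*tau_-."""
    dim = 2*Nc + 1
    H = np.zeros((2*dim, 2*dim), dtype=complex)
    def block(n):
        if n == 0:      return vF*k*tau3 + Delta0*tau1
        if abs(n) == 1: return A_F*np.eye(2, dtype=complex)
        if n == 2:      return Delta2*tau_p
        if n == -2:     return Delta2*tau_m
        return np.zeros((2, 2), dtype=complex)
    for i, n in enumerate(range(-Nc, Nc + 1)):
        for j, m in enumerate(range(-Nc, Nc + 1)):
            H[2*i:2*i+2, 2*j:2*j+2] = block(n - m)
            if n == m:
                H[2*i:2*i+2, 2*j:2*j+2] -= m*omega*np.eye(2)
    return H

# illustrative numbers, energies in units of Delta_0
Delta0, omega, A_F, Delta2 = 1.0, 1.0, 0.36, 0.12
ks = np.linspace(-3, 3, 601)   # vF*(k - k_F)
bands          = np.array([np.linalg.eigvalsh(floquet_bdg(k, omega, Delta0, Delta2, A_F)) for k in ks])
bands_no_higgs = np.array([np.linalg.eigvalsh(floquet_bdg(k, omega, Delta0, 0.0,    A_F)) for k in ks])
# plotting 'bands' vs 'bands_no_higgs' near eps ~ Delta0 around k = k_F shows the
# avoided crossing that is present only when Delta2 is non-zero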
We find that the gap size Γ at k_F is, to a very good approximation, given by the Higgs mode amplitude, Γ ≃ Δ_2 [There are also higher-order gaps at the same energy, but they are much smaller (e.g. there is another higher-order gap which scales as ∝ Δ_2^2/2); the Higgs mode being small here compared to the other energy scales of our problem, those higher-order gaps are negligible]. Of course those gaps are second order in the vector potential, as Δ_2 ∼ (A_F^c)^2. In Fig. <ref> we compare the cases with and without Higgs mode in the SC. To do that, we artificially fixed Δ_2 = 0 in the left figure. In this case the gaps close, showing that the Higgs mode is necessary to open a gap at ϵ = Δ_0. §.§ BdG-Floquet Scattering problem We consider the following 1D scattering problem: an electron comes from the left and can be reflected as a hole or transmitted as an electron on the right. The quasi-classical approximation excludes crossed Andreev reflections and normal electron reflection in the junction. In the left N region (x<0), we have the following electron-like incident and hole-like reflected waves Ψ_in = [ 1; 0 ] ∑_n e^ik^+_n x e^-iϵ t e^-in ω t, Ψ_out = [ 0; 1 ] ∑_n r_n e^ik^-_n x e^-iϵ t e^-in ω t, with v_F k^±_n = μ ± (ϵ + n ω). In the right N region (x>L), the transmitted wave is electron-like and reads Ψ_trans = [ 1; 0 ] ∑_n t_n e^ik^+_n x e^-iϵ t e^-in ω t, where r_n (resp. t_n) is the reflection (resp. transmission) coefficient for the n-th Floquet level. Inside the SC, the solution of the Floquet-BdG equation reads Ψ_SC = ∑_m ∑_n a_m e^i k_m x Φ^m_n e^-i ε t e^-i n ω t, where Φ^m_n (resp. k_m) are the Floquet eigenvectors (resp. eigenvalues) inside the superconductor. These excitations are coherent superpositions of electron-like and hole-like components, and the spinors Φ^m_n are obtained by solving the following eigen-mode problem (v_F k - μ) Φ_n = (ϵ + nω) τ_3 Φ_n - i Δ_0 τ_2 Φ_n + A_F τ_3 (Φ_n-1 + Φ_n+1) - Δ_2 τ_+ Φ_n-2 + Δ_2 τ_- Φ_n+2. The matrix on the RHS of (<ref>) is not Hermitian, so nothing prevents k from having a non-zero imaginary part (in which case the mode is an evanescent one). The Hamiltonian is a matrix of size 2(2N_c+1) × 2(2N_c+1), so the number of momenta k_m is 2(2N_c+1). The full solution is obtained using the boundary conditions at each interface, i.e. the continuity of the spinors (see Appendix <ref> for full details). This gives us the 𝒮-matrix of this scattering problem. From the unitarity of 𝒮 we obtain the conservation law ℛ + 𝒯 = ∑_n (|r_n|^2 + |t_n|^2) = 1, with ℛ (𝒯) the total reflection (transmission) coefficient. §.§ DC Differential conductance Here we show that the differential conductance of the NSN junction is a simple way to probe, and to realize an electronic/transport spectroscopy of, the BdG-Floquet band gaps discussed in Sec. <ref> (Fig. <ref>). From the 𝒮-matrix, the DC current I_DC is obtained using the extended Landauer-Büttiker formalism <cit.> I_DC = e/h ∑_α=1^2 ∑_n ∫ dE |𝒮_α 1(E_n,E)|^2 × [f_1^in(E-eV) - f_α^out(E_n)], with E the incident energy, f_α the distribution function in lead α, and V the bias potential. Lead α=1 (resp. α=2) is the normal metal on the left (resp. right) of the junction. The local differential conductance in the left part is given by <cit.> (see also Appendix <ref>) G_DC = ∂ I_DC/∂ V = e^2/h (1 + ℛ(eV)). By applying a DC bias V to the left normal metal, we expect a local differential conductance given by Eq. (<ref>). The results are given in Fig. <ref>. The oscillations originate from the resonant modes inside the superconducting part.
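The mode momenta k_m and spinors Φ^m_n entering the matching can be obtained by treating the eigen-mode problem above as a linear eigenvalue problem in k. The sketch below (our illustration, transcribing the equation as quoted and reusing arbitrary parameters from the previous snippet) does this with a plain non-Hermitian diagonalisation, so that evanescent modes appear as eigenvalues with Im k ≠ 0:

import numpy as np

tau1 = np.array([[0, 1], [1, 0]], dtype=complex)
tau2 = np.array([[0, -1j], [1j, 0]])
tau3 = np.array([[1, 0], [0, -1]], dtype=complex)
tau_p, tau_m = tau1 + 1j*tau2, tau1 - 1j*tau2
I2 = np.eye(2, dtype=complex)

def sc_modes(eps, omega, Delta0, Delta2, A_F, mu=1.0, vF=1.0, Nc=8):
    """Solve  vF*k*Phi_n = mu*Phi_n + (eps+n*omega)*tau3*Phi_n - i*Delta0*tau2*Phi_n
                          + A_F*tau3*(Phi_{n-1}+Phi_{n+1})
                          - Delta2*tau_+*Phi_{n-2} + Delta2*tau_-*Phi_{n+2}
    as an eigenvalue problem for k.  Returns the 2(2Nc+1) momenta k_m and the
    corresponding spinors (columns)."""
    dim = 2*Nc + 1
    M = np.zeros((2*dim, 2*dim), dtype=complex)
    for i, n in enumerate(range(-Nc, Nc + 1)):
        M[2*i:2*i+2, 2*i:2*i+2] = mu*I2 + (eps + n*omega)*tau3 - 1j*Delta0*tau2
        for j, m in enumerate(range(-Nc, Nc + 1)):
            if abs(n - m) == 1:
                M[2*i:2*i+2, 2*j:2*j+2] += A_F*tau3
            elif n - m == 2:
                M[2*i:2*i+2, 2*j:2*j+2] += -Delta2*tau_p
            elif n - m == -2:
                M[2*i:2*i+2, 2*j:2*j+2] += Delta2*tau_m
    kvF, Phi = np.linalg.eig(M)      # M is not Hermitian, so complex k are allowed
    return kvF/vF, Phi

k_m, Phi = sc_modes(eps=0.5, omega=1.0, Delta0=1.0, Delta2=0.12, A_F=0.36)
evanescent = np.abs(k_m.imag) > 1e-9   # modes decaying inside the SC
print(evanescent.sum(), "evanescent modes out of", k_m.size)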
The Higgs mode appears when eV ∼ Δ_0. Indeed, we observe the appearance of a plateau in the differential conductance, centered around Δ_0 and of width ∼ Δ_2. This plateau comes from the gap Γ. Indeed, the reflection coefficient in this range of energy is close to 1, as the probability of elastic transmission is very low. For A^c_F = 0.36 Δ_0 and Δ_2 = 0.12 Δ_0, the first-order inelastic transmission coefficient becomes dominant, such that |t_-1| ≳ 10 |t_0|; elastic scattering being preferred, we have |r_0|^2 ≫ |t_-1|^2. Thus, G_DC ≃ 2 e^2/h in this interval, and this gives the plateau. To confirm this interpretation, we look at the differential conductance in the case where we artificially set Δ_2 = 0. We saw before that the gaps are then closed, and indeed no plateau appears anymore. Looking at Fig. <ref>, we see that the effect is only visible for a large enough junction length L. Indeed, for too short a junction, the injected electrons, even inside the gap, can simply tunnel through the junction, giving G_DC ≃ e^2/h at all biases. For Δ_2 = 0, the dips of the oscillations are obtained for bias energies eV^res_n that obey the resonance condition eV^res_n = √(Δ_0^2 + (nπħ v_F/L)^2). In this case the electron can tunnel through the SC and the differential conductance drops. The resonances observed at subgap bias are understood to come from photon-assisted scattering (PAS). In this case, still for Δ_2 = 0, eV^res_n,N = √(Δ_0^2 + (nπħ v_F/L)^2) - Nħω, for n, N such that eV^res_n,N > 0. In our case only first-order PAS is visible, i.e. the resonant biases are correctly predicted by taking N=1. §.§ Rotating wave approximation If the Higgs mode is present, the conductance dips are slightly shifted compared to the case without Higgs mode, which can be explained by the modulation of the SC gap. Unfortunately, the full BdG equations (<ref>) are analytically intractable. Therefore, in order to get an analytical solution, we consider here a simpler model where Δ_0 = 0 while Δ_2 ≠ 0. Then the Hamiltonian (<ref>) reduces to ℋ = (v_F p - μ) τ_z + v_F e A(t) 1 + Δ_2 e^-2iω tτ_+ + Δ_2 e^2iω tτ_-. To get rid of the time dependence, we perform a "rotating frame" transformation by applying the unitary operator 𝒰 = e^itωτ_z e^i e v_F ∫ A dt to the Hamiltonian. The new time-independent Hamiltonian is given by ℋ' = 𝒰ℋ𝒰^† + i d𝒰/dt 𝒰^† = (v_F p - μ - ω) τ_z + Δ_2 τ_x, whose positive eigenvalues are ϵ = √(Δ_2^2 + (v_F p - μ - ω)^2). This result is formally similar to what can be found in irradiated semiconductors, and in that context Δ_2 is called a dynamical gap, induced by the electromagnetic field <cit.>. For this system the resonance dips are found at biases eV^res_n = √(Δ_2^2 + (nπħ v_F/L - ħω)^2). This shift nπħ v_F/L ⟶ nπħ v_F/L - ħω in the resonance bias qualitatively explains the phase shift between the resonance dips with and without Higgs mode in Fig. <ref>. We also note that at the Higgs resonance (ω = Δ_0), a gap of size 2Δ_2 appears at momentum k ≃ k_F. As we saw, the gap reduces to Γ = Δ_2 when a finite SC gap Δ_0 is restored in the model. The full model with Δ_0 ≠ 0 is not solvable analytically. Nonetheless, the general ideas are expected to remain valid. Indeed, small oscillations can be seen in Fig. <ref> around eV ∼ 0.92 Δ_0. Those resonances appear only in the presence of the Higgs mode and can be explained by the same kind of shift as in Eq. (<ref>).
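As a small numerical aid (ours, not from the paper; ħ = 1 and all values are illustrative), the resonance conditions quoted above can be tabulated directly, which makes the shift of the dips induced by the Higgs mode easy to see:

import numpy as np

# illustrative numbers, energies in units of Delta_0 (hbar = 1)
Delta0, Delta2, omega, vF, L = 1.0, 0.12, 1.0, 1.0, 40.0

n = np.arange(1, 8)
level = n*np.pi*vF/L                                    # n*pi*hbar*vF/L

eV_no_higgs = np.sqrt(Delta0**2 + level**2)             # dips for Delta2 = 0
eV_pas      = eV_no_higgs - omega                       # first-order photon-assisted dips (N = 1)
eV_rwa      = np.sqrt(Delta2**2 + (level - omega)**2)   # rotating-frame estimate with Delta2 != 0

print(np.round(eV_no_higgs, 3))
print(np.round(eV_pas[eV_pas > 0], 3))
print(np.round(eV_rwa, 3))   # for these placeholder values the first entries land near 0.93 Delta_0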
The Higgs mode is generated by irradiating the SC and is probed via AC or DC electronic current measurements. First, we have shown that the Higgs mode can be generated even in an ideally clean SC. Then we have studied two different geometries. We have computed the current in a NIS tunnel junction and found a typical signature of the Higgs mode in the AC second-harmonic current, as in the dirty case. The intensity of the response is nonetheless smaller in the clean case. We then studied an irradiated ballistic NSN junction within the Bogoliubov-de Gennes formalism. We found that the Higgs mode opens gaps in the Floquet energy bands. Those gaps can be seen by measuring the local differential conductance of the junction around the gap energy, eV ≃ Δ_0, and their width is equal to the amplitude of the Higgs mode. The differential conductance measurement acts as an electronic spectroscopy revealing the Floquet gaps dynamically generated by the presence of a finite Higgs amplitude mode. § ACKNOWLEDGEMENTS This work was supported by the “LIGHT S&T Graduate Program” (PIA3 Investment for the Future Program, ANR-17-EURE-0027) and GPR LIGHT. § RESOLUTION OF THE EILENBERGER EQUATIONS We use the quasi-classical formalism, valid for μ_F ≫ Δ_0. From the exact Gorkov GF Ǧ, we define the quasi-classical one ǧ = i/π ∫ dξ_p Ǧ, where the integration ranges from ξ_p = -∞ to ξ_p = ∞ and Ǧ is peaked around ξ_p = 0. These quasiclassical integrals typically smooth out the fast oscillating behavior of the GF at small scales. It fulfills the Eilenberger equation, which in the homogeneous case reads <cit.> i {τ̌_3 ∂_t, ǧ} + i [Δ(t) τ̌_2, ǧ] + [e A · v_F τ̌_3, ǧ] = 0, where {τ̌_3 ∂_t, ǧ} = τ̌_3 ∂_t ǧ(t,t') + ∂_t' ǧ(t,t') τ̌_3, [𝒪, ǧ] = 𝒪(t) ǧ(t,t') - ǧ(t,t') 𝒪(t'), A = A_0 e^-iω t, and ǧ = [ ĝ^r ĝ^k; 0 ĝ^a ] are the quasi-classical GFs in the 2D Keldysh space; each ĝ is a 2×2 matrix, where r, a, k stand respectively for the retarded, advanced and Keldysh (or kinetic) GF. Finally, τ̌_i = [ τ_i 0; 0 τ_i ], where τ_i are the Pauli matrices. The Higgs mode being a second-order correction to Δ(t), we write Δ(t) = Δ_0 + Δ_2 e^-2iω t. As usual, the gap has to be obtained self-consistently from (<ref>); in our case this is Δ(t) = -iπλ/4 ⟨τ_2 ĝ^k(t,t)⟩__F, with λ the pairing constant and ⟨…⟩__F = ∫ dΩ_F/4π (…) the average over the Fermi surface. Eqs. (<ref>) and (<ref>) are not sufficient to obtain a unique solution; a normalisation condition is necessary, given by ǧ ∘ ǧ(t,t') = δ(t-t'), where ∘ denotes time convolution. This equation cannot be solved in a simple form when A is time-dependent. We therefore consider a perturbative approach: we expand the GFs in powers of A and write ĝ(t,t') = ĝ_0(t,t') + ĝ_1(t,t') + ĝ_2(t,t'). It is useful at this point to define the Fourier transforms ĝ_0(t,t') = ∫dϵ/2π e^-iϵ(t-t') ĝ_0(ϵ), ĝ_1(t,t') = ∫dϵ/2π e^-iϵ_1 t e^iϵ t' ĝ_1(ϵ), ĝ_2(t,t') = ∫dϵ/2π e^-iϵ_2 t e^iϵ t' ĝ_2(ϵ), where we define ϵ_n = ϵ + nω. At zeroth order the solution is ĝ_0^α(ϵ) = (ϵτ_3 + iΔ_0 τ_2)/s^α(ϵ), s^r(a)(ϵ) = i√(Δ_0^2 - (ϵ ± iγ)^2), where the branch cut of the square root is placed on the negative real axis. For the higher orders we will use the following properties: i {τ_3 ∂_t, ĝ_i(t,t')} = ∫dϵ/2π e^-iϵ_i t e^iϵ t' × (ϵ_i τ_3 ĝ_i(ϵ) - ϵ ĝ_i(ϵ) τ_3), [iΔ_2 e^-i2ω tτ_2, ĝ_i(t,t')] = iΔ_2 ∫dϵ/2π e^-iϵ_i+2 t e^iϵ t' × (τ_2 ĝ_i(ϵ) - ĝ_i(ϵ_2) τ_2), [e A · v_F τ_3, ĝ_i(t,t')] = A_F ∫dϵ/2π e^-iϵ_i+1 t e^iϵ t' × (τ_3 ĝ_i(ϵ) - ĝ_i(ϵ_1) τ_3), with A_F = e A_0 · v_F.
From those, it is straightforward to get the equations for the first and second order corrections of the GF ξ_1 ĝ_1(ϵ) - ĝ_1(ϵ) ξ = A_Fĝ_0(ϵ_1) τ_3 - τ_3 ĝ_0(ϵ), ξ_2 ĝ_2(ϵ) - ĝ_2(ϵ) ξ = A_F ĝ_1(ϵ_1) τ_3 - τ_3 ĝ_1(ϵ) + i Δ_2 ĝ_0(ϵ_2) τ_2 - τ_2 ĝ_0(ϵ). We define the matrices ξ_i = ϵ_i τ_3 + iτ_2 Δ_0. To solve (<ref>) ((<ref>)), simply multiply by ξ_1 (ξ_2) from the left and ξ from the right and add the two equations together. ĝ_1^α(ϵ) = A_F/s(ϵ_1) + s(ϵ)τ_3 - ĝ_0(ϵ_1) τ_3 ĝ_0(ϵ)^α, ĝ_V^α(ϵ) = [A_F^2/s(ϵ_2) + s(ϵ_1)s(ϵ_2) + s(ϵ)s(ϵ_1) + s(ϵ). ×. [s(ϵ)+s(ϵ_1)+s(ϵ_2)ĝ_0(ϵ_2)ĝ̅_0(ϵ_1) ĝ_0(ϵ) - ξ_2 - ξ_1 - ξ] ]^α, ĝ_H^α(ϵ) = i Δ_2/s(ϵ_2) + s(ϵ)τ_2 - ĝ_0(ϵ_2) τ_2 ĝ_0(ϵ)^α. with 𝒪 = τ_3 𝒪τ_3. To find the Keldysh function it is usefull to write the solution as a sum of regular and anomalous term g^k_i (ϵ) = g^reg_i(ϵ) + g^an_i(ϵ) with g^reg_i(ϵ) = g^r_i (ϵ) h_0(ϵ) - h_0(ϵ_i) g^a_i (ϵ) where the distribution function h_0 (ϵ) = tanhβϵ / 2. After some tedious but straightforward calculations we find ĝ_1^an(ϵ) = e _0 ·_F tanhβϵ_1 / 2 - tanhβϵ / 2/s^r(ϵ_1) + s^a(ϵ) ×τ_3 - ĝ_0^r(ϵ_1)τ_3ĝ_0^a(ϵ) ; noting ĝ_2^an(ϵ) = ĝ_V^an(ϵ) + ĝ_H^an(ϵ), ĝ_V^an(ϵ) = e _0 ·_F^2 tanhβϵ_2 / 2 - tanhβϵ_1 / 2/s^r(ϵ_2) + s^a(ϵ_1)s^r(ϵ_2) + s^a(ϵ)s^a(ϵ_1) + s^a(ϵ) ×[s^a(ϵ)+s^a(ϵ_1)+s^r(ϵ_2)ĝ^r_0(ϵ_2)ĝ̅^a_0(ϵ_1) ĝ^a_0(ϵ) - ξ_2 - ξ_1 - ξ] + e _0 ·_F^2 tanhβϵ_1 / 2 - tanhβϵ / 2/s^r(ϵ_2) + s^r(ϵ_1)s^r(ϵ_2) + s^a(ϵ)s^r(ϵ_1) + s^a(ϵ) ×[s^a(ϵ)+s^r(ϵ_1)+s^r(ϵ_2)ĝ^r_0(ϵ_2)ĝ̅^r_0(ϵ_1) ĝ^a_0(ϵ) - ξ_2 - ξ_1 - ξ] ĝ_H^an(ϵ) = i Δ_2 tanhβϵ_2 / 2 - tanhβϵ / 2/s^r(ϵ_2) + s^a(ϵ)τ_2 - ĝ^r_0(ϵ_2) τ_2 ĝ^a_0(ϵ). From (<ref>) we have Δ_2 = Δ_0 ∫ dϵ⟨τ_2 ĝ^k_2(t,t) ⟩ __F/∫ dϵ⟨τ_2 ĝ^k_0(t,t) ⟩ __F. The previous equation can be rewritten as Δ_2 = - A_F^c2Δ_0 /3B^r - B^a + B^an/C^r - C^a + C^an, with B^α = ∫ d ϵ b^α(ϵ) tanhβϵ_(2) /2, B^an = ∫ d ϵ b^an_2(ϵ) tanhβϵ_2 /2 - tanhβϵ_1/2 + b^an_0(ϵ) tanhβϵ_1 /2 - tanhβϵ/2, C^α = ∫ d ϵ c^α(ϵ) tanhβϵ_(2) /2 - tanhβϵ/2 / s^α(ϵ), C^an = ∫ d ϵ c^an(ϵ) tanhβϵ_2 /2 - tanhβϵ /2, where b^α(ϵ) = [1/s(ϵ_2) + s(ϵ_1)s(ϵ_2) + s(ϵ)s(ϵ_1) + s(ϵ). .×[s(ϵ)+s(ϵ_1)+s(ϵ_2)ϵ_1 ϵ + ϵ_2 ϵ + ϵ_1 ϵ_2 + Δ_0^2/s(ϵ)s(ϵ_1)s(ϵ_2) -1 ] ]^α, b^an_2 = 1/s^r(ϵ_2) + s^a(ϵ_1)s^r(ϵ_2) + s^a(ϵ)s^a(ϵ_1) + s^a(ϵ) ×[s^r(ϵ_2) + s^a(ϵ_1) + s^a(ϵ)ϵ_1 ϵ + ϵ_2 ϵ + ϵ_1 ϵ_2 + Δ_0^2/s^a(ϵ)s^a(ϵ_1)s^r(ϵ_2) -1 ], b^an_0 = 1/s^r(ϵ_2) + s^r(ϵ_1)s^r(ϵ_2) + s^a(ϵ)s^r(ϵ_1) + s^a(ϵ) ×[s^r(ϵ_2) + s^r(ϵ_1) + s^a(ϵ)ϵ_1 ϵ + ϵ_2 ϵ + ϵ_1 ϵ_2 + Δ_0^2/s^a(ϵ)s^r(ϵ_1)s^r(ϵ_2) -1 ], c^α(ϵ) = ϵϵ_2 + Δ_0^2 + s(ϵ_2)s(ϵ)/s(ϵ_2) + s(ϵ)s(ϵ_2)s(ϵ)^α, c^an(ϵ) = ϵϵ_2 + Δ_0^2 + s^r(ϵ_2)s^a(ϵ)/s^r(ϵ_2) + s^a(ϵ)s^r(ϵ_2)s^a(ϵ). We denote the Higgs mode as Δ_2 = F_ω/1 - Π_ω, with F_ω and Π_ω the amplitude and polarization functions defines in <cit.>. From <cit.>, we see that Π_ω is the same in the clean and dirty case, as shown also using equilibrium Matsubara formalism in <cit.>. The eventual presence of disorder affects only the amplitude function F_ω. In our case F_ω∝ B^r - B^a + B^an. § CURRENT IN THE NIS JUNCTION In a clean tunnel junction the BC can be expressed in a simple form <cit.> I = G_t/16e∫ d ϵ⟨τ_3 ǧ_n,ǧ_s^k⟩__F, with G_t the conductance of the junction. Because ⟨ǧ_1⟩__F = 0, we can consider only the terms ǧ_n,ǧ_s_0^k = ĝ^r_n ĝ_0^k(ϵ) + ĝ^k_n(ϵ) ĝ_0^a(ϵ) - ĝ^r_0 (ϵ) ĝ_n^k(ϵ) - ĝ^k_0(ϵ) ĝ_n^a, ǧ_n,ǧ_s_2^k = ĝ^r_n ĝ_2^k(ϵ) + ĝ^k_n(ϵ_2) ĝ_2^a(ϵ) - ĝ^r_2 (ϵ) ĝ_n^k(ϵ) - ĝ^k_2(ϵ) ĝ_n^a. We now focus on the second order contribution to the current. Because ĝ^k_2 is traceless we have ǧ_n,ǧ_s_2^k = ĝ^k_n(ϵ_2) ĝ_2^a(ϵ) - ĝ^r_2 (ϵ) ĝ_n^k(ϵ). 
We now need to find the terms proportional to τ_3 in ĝ^α_2 (indeed τ_3 ĝ^k_n ∝1 + τ_3). We get g^α_V,3(ϵ) = [e _0 ·_F^2 /s(ϵ_2) + s(ϵ_1)s(ϵ_2) + s(ϵ)s(ϵ_1) + s(ϵ). . ×Σϵϵ_1 ϵ _2 + ϵ + ϵ_1 + ϵ_2ΣΔ_0^2 - s(ϵ) s(ϵ_1) s(ϵ_2) /s(ϵ) s(ϵ_1) s(ϵ_2)]^α, g^α_H,3(ϵ) = 2Δ_0 Δ_2 ϵ + ω/s(ϵ) s(ϵ_2) s(ϵ) + s(ϵ_2)^α, where Σ^α = s^α(ϵ) + s^α(ϵ_1) + s^α(ϵ_2). We can write the current I_2 = I_V + I_H where I_V(H) = G_t/8e∫ dϵ[tanhβ (ϵ_2 - eV) /2 - tanhβ (ϵ_2 + eV) /2] ⟨ĝ^a_V(H),3⟩__F - ∫ dϵtanhβϵ_- /2 - tanhβϵ_+/2⟨ĝ^r_V(H),3⟩__F. § S-MATRIX SOLUTION FOR THE FLOQUET SCATTERING IN NSN JUNCTION We use the notations of Section <ref> and define the vectors = r_-N_c… r_N_c^T, = t_-N_c… t_N_c^T, and , the last two being the amplitudes for an incoming electron on the left and incoming hole on the right of the junction. The 𝒮-matrix is defined by the relation [ ; ] = 𝒮[ ; ]. The boundary conditions, i.e. the continuity of the spinors gives us the equations [ 1; 0 ] A_n + [ 0; 1 ] r_n = ∑_m a_m Φ_n^m, [ 1; 0 ] e^i k^+_n Lt_n + [ 0; 1 ] e^i k^-_n LB_n = ∑_m e^i k_m L a_m Φ_n^m. From this we get [ ; e^ik^- L ] = [ Φ_1; Φ_2 e^ik L ] with e^ik^- L_n = _n e^ik^-_n L, _n = a_n, (Φ_i)_mn= Φ^m_n,i and (Φ_i e^ik L)_mn= Φ^m_n,i e^ik_m L) with i indicating the spinor coordinate. The 𝒮-matrix immediately follows 𝒮 = [ Φ_1 e^ik L; Φ_2 ][ Φ_1; Φ_2 e^ik L ] ^-1. We can rewrite 𝒮 = [ '; ' ] with and the matrices coefficients for the incoming electron from the left. For an incoming electron in Floquet band n=0, we will get the correct coefficients within the middle column of and . § DIFFERENTIAL CONDUCTANCE IN THE NSN JUNCTION The current in the junction I_DC = e/h∑_α=1^2 ∑_n ∫ dE |𝒮_α 1(E_n,E)|^2 ×f_1^in(E-eV) - f_α^out(E_n) can be written in term of the reflection and transmission coefficient I_DC = e/h∫ dE f_1^in(E-eV) - f_2^out(E_n)𝒯(E) + f_1^in(E-eV) - f_1^out(E-eV)ℛ(E) . In the case of an Andreev reflection, the incident electron is reflected as a hole, such that f_1^out(E-eV) = 1 - f_1^in(E-eV). Only keeping the term proportional to eV we get I_DC∝e/h∫ dE f_1^in(E-eV) 𝒯(E) + 2ℛ(E). Finally, from the conserved relation ℛ + 𝒯 = 1, we get G_DC = ∂ I_DC/∂ V = e/h∫ dE ∂ f_1^in(E-eV)/∂ V1+ℛ(E). In the low temperature limit, ∂ f_1^in(E-eV) / ∂ V = δ(E-eV) and G_DC = e^2/h1+ℛ(eV).
http://arxiv.org/abs/2307.04743v2
20230710175319
Geometric post-Newtonian description of massive spin-half particles in curved spacetime
[ "Ashkan Alibabaei", "Philip K. Schwartz", "Domenico Giulini" ]
gr-qc
[ "gr-qc", "physics.atom-ph", "quant-ph" ]
Classical Observables from the Exponential Representation of the Gravitational S-Matrix [ August 12, 2023 ======================================================================================== We consider the Dirac equation coupled to an external electromagnetic field in curved four-dimensional spacetime with a given timelike worldline γ representing a classical clock. We use generalised Fermi normal coordinates in a tubular neighbourhood of γ and expand the Dirac equation up to, and including, the second order in the dimensionless parameter given by the ratio of the geodesic distance to the radii defined by spacetime curvature, linear acceleration of γ, and angular velocity of rotation of the employed spatial reference frame along γ. With respect to the time measured by the clock γ, we compute the Dirac Hamiltonian to that order. On top of this `weak-gravity' expansion we then perform a post-Newtonian expansion up to, and including, the second order of 1/c, corresponding to a `slow-velocity' expansion with respect to γ. As a result of these combined expansions we give the weak-gravity post-Newtonian expression for the Pauli Hamiltonian of a spin-half particle in an external electromagnetic field. This extends and partially corrects recent results from the literature, which we discuss and compare in some detail. § INTRODUCTION Modern experiments allow to probe the interface between quantum and gravitational physics at a rapidly growing degree of accuracy. A proper theoretical description of such experiments would ideally be based on a higher-level theory encompassing Quantum Mechanics as well as General Relativity as appropriate limiting cases. However, as is well known, such a higher-level theory is still elusive. Hence, we cannot simply `compute' the impact of a classical gravitational field, described by a (generally curved) spacetime metric, upon the dynamics of a quantum system. Rather, depending on the context, we must `deduce' the influence of the gravitational field on the dynamics of the quantum system from general principles that we expect to be robust and eventually realised in the higher-level theory. This is, in a nutshell, the generally accepted strategy today for exploring the interface between quantum and gravitational physics, to which the present study also subscribes. As recent examples of interest we mention the question of the gravitational contribution to high-precision measurements of the g-factor of an electron stored in a Penning trap (also referred to as a `geonium atom') <cit.>, the recent results of qBounce, which is a Ramsey-type gravitational resonance spectroscopy experiment using ultra-cold neutrons to test the neutron's coupling to the gravitational field of the earth in the micrometer range <cit.>, and recent advances in matter-wave interferometry <cit.>. In such examples, and in the more general context of gravitational effects in quantum systems, it is important to base one's estimates of possible gravity effects on a well-defined and systematic approximation scheme. It is the aim of this paper to present such a scheme for a massive spin-half particle obeying the Dirac equation in curved spacetime. Our scheme is based on the assumed existence of a distinguished reference wordline γ, which, e.g., may be thought of as that of a clock in the laboratory or a distinguished particle. 
In a tubular neighbourhood of γ we use generalised Fermi normal coordinates <cit.> with reference to γ and an adapted (meaning the unit timelike vector is parallel to the tangent of γ) orthonormal frame along it. The coordinates are `generalised' in the sense that we will allow the worldline γ to be accelerated, and the orthonormal frame to rotate, i.e. its Fermi–Walker derivative need not vanish. The approximation procedure then consists of two steps which are logically independent a priori. In the first step we perform a `weak-gravity expansion', which means that we expand the fields in the tubular neighbourhood of γ in terms of a dimensionless parameter given by the ratio of the spacelike geodesic distance to γ to the radii that are defined by spacetime curvature, acceleration of γ, and the angular velocity of rotation of the chosen frame along γ. We recall that the radius associated with γ's acceleration a is given by c^2/a and that the radius associated with the frame's angular velocity ω (against a Fermi–Walker transported one) is c/ω. The curvature radius is given by the inverse of the modulus of the typical Riemann-tensor components with respect to the orthonormal frame. As first derivatives of the Riemann-tensor will also appear, we also need to control these against third powers of the geodesic distance. Our expansion hypotheses are summarised in the expressions (<ref>). Consistently performing this expansion is the content of <ref>, leading to the Dirac Hamiltonian (<ref>), which is our first main result. Note that a `Hamiltonian' refers to a `time' with respect to which it generates the evolution of the dynamical quantities. In our case, that time is given by the proper time along γ, i.e. time read by the `clock', extended to the tubular neighbourhood along spacelike geodesics. In the second step we perform a `slow-velocity' expansion by means of a formal power series expansion in terms of 1/c, i.e. a post-Newtonian expansion. More specifically, we will expand positive-frequency solutions of the (classical) Dirac equation as formal power series in c^-1, similar to the corresponding expansion for the Klein–Gordon equation as discussed in, e.g., <cit.>. For the case of γ being a stationary worldline in a stationary spacetime, this expansion may be considered a post-Newtonian description of the one-particle sector of the massive Dirac quantum field theory. A priori this `slow-velocity' approximation is an independent expansion on top of the former. But for the system moving under the influence of the gravitational field the latter approximation is only consistent with the former if the relative acceleration of the system against the reference set by γ stays bounded as 1/c → 0. This implies that the curvature tensor components with respect to the adapted orthonormal frame should be considered as of order c^-2. The coupled expansions then lead us to the Pauli Hamiltonian (<ref>), which is the second main result of our paper. Clearly, our work should be considered in the context of previous work by others. In 1980, Parker <cit.> presented explicit expressions for the energy shifts suffered by a one-electron atoms in free fall within a general gravitational field, the only restriction imposed on the latter being that its time-rate of change be sufficiently small so as to allow stationary atomic states and hence well-defined energy levels. Parker also used Fermi normal coordinates, though standard ones, i.e. with respect to non-rotating frames along a geodesic curve γ. 
He then gave an explicit expression for the Dirac Hamiltonian to what he calls `first order in the [dimensionful] curvature', which in our language means second order in the dimensionless ratio of geodesic distance to curvature radius. Regarding the `slow-velocity' approximation, Parker considers only the leading-order terms, i.e.the Newtonian limit instead of a post-Newtonian expansion. The restriction to non-rotating frames along γ and geodesic γ was lifted by Ito <cit.> in 2021, who aimed for estimating the inertial and gravitational effects upon g-factor measurements of a Dirac particle in a Penning trap. To that end he presented an expansion in generalised Fermi normal coordinates of the Dirac Hamiltonian also including terms to second order in the ratio (geodesic distance)/(curvature radius), but only to first order in the ratios (geodesic distance)/(acceleration radius), where `acceleration radius' refers to both acceleration of γ and the rotation of the frames along γ as explained above. Ito also considers the `non-relativistic limit' by performing a Fouldy–Wouthuysen transformation <cit.> with a transformation operator expanded as a formal power series in 1/m (the inverse mass of the fermionic particle). In dimensionless terms, the latter corresponds to a simultaneous expansion in v/c as well as the ratio (Compton wavelength)/(geodesic distance). Finally we mention the work of Perche & Neuser <cit.> from 2021, who generalise Parker's work <cit.> in allowing the reference curve γ to be accelerating, though the frame along it is still assumed to be non-rotating (Fermi–Walker transported). For vanishing acceleration of γ, their result for the Dirac Hamiltonian coincides with that of Parker. Similar to Ito <cit.>, they consider a `non-relativistic limit' by means of an expansion in ratios of relevant energies to the rest energy, which effectively amounts to an expansion in 1/m. Let it be mentioned already at this point that in <ref> we will show explicitly that the expansion as presented in <cit.> is not equivalent to the post-Newtonian expansion in 1/c that we employ. This we believe, however, to be rooted in the expansion in <cit.> being inconsistently applied; when taking proper care of all appearing terms, the expansion method of <cit.> is consistent with the corresponding truncation of our results. Our paper is an extension of those approaches, in that it also includes inertial effects from acceleration and rotation to consistently the same order as gravitational effects resulting from curvature, namely to order ((geodesic distance) / (charecteristic radii))^2. We will find some inconsistencies in the approximations of the aforementioned paper that result in the omission of terms which we will restore. Our paper is partly based on the master's thesis <cit.>. Here we use the opportunity to correct some oversights in the calculation of order-x^2 terms in that thesis, that we will further comment on below (cf. <ref>). To sum up, our paper is organised as follows: In <ref>, we recall the Dirac equation in curved spacetime. In <ref>, we implement the first step of our approximation procedure by expressing the Dirac equation in generalised Fermi normal coordinates corresponding to an accelerated reference worldline γ and orthonormal, possibly rotating frames along it. In <ref>, we implement the second step, namely the `slow-velocity' post-Newtonian expansion in 1/c. 
This step should be contrasted with the mentioned 1/m-expansions by others or expansions relying on Foldy–Wouthuysen transformations. In particular, this includes a comparison of our resulting Hamiltonian to that obtained in <cit.>, which we discuss in some detail in <ref>, where we argue for an inconsistency within the calculation in <cit.>. We conclude in <ref>. Details of calculations and lengthy expressions are collected in <ref>. § THE DIRAC EQUATION IN CURVED SPACETIME We consider a massive spin-half field ψ in a general curved background spacetime[Of course, for the very notion of spinor fields to make sense, we need to assume the spacetime to be equipped with a spin structure, i.e. a double cover of its orthonormal frame bundle such that the covering homomorphism is in trivialisations given by the double covering of the (homogeneous) Lorentz group ℒ^↑_+ = SO_0(1,3) by the spin group Spin(1,3) = SL(2,ℂ). Dirac spinor fields are then sections of the Dirac spinor bundle, which is the vector bundle associated to the spin structure with respect to the (1/2,0) ⊕ (0,1/2) representation of the spin group. As is well-known, the existence of a spin structure is for four-dimensional non-compact spacetimes equivalent to the spacetime manifold being parallelisable <cit.>.] (M,g), coupled to background electromagnetism, as described by the minimally coupled Dirac equation (γ^I (_I)^μ (∇_μ - q A_μ) - mc) ψ = 0. Here A_μ are the components of the electromagnetic four-potential, ∇ is the Levi-Civita covariant derivative of the spacetime metric g, extended to Dirac spinor fields, m is the mass of the field, and q is its electric charge. Note that we set ħ = 1, but keep explicit the velocity of light c. The Dirac equation takes the above local form with respect to a choice of tetrad (_I) = (_0, _i), i.e. a local orthonormal frame of vector fields. Explicitly, this means that the vector fields satisfy g(_I,_J) = η_IJ where (η_IJ) = diag(-1,1,1,1) are the components of the Minkowski metric in Lorentzian coordinates. The gamma matrices γ^I appearing in the Dirac equation (<ref>) are the standard Minkowski-spacetime gamma matrices γ^I ∈End(^4), which satisfy the Clifford algebra relation {γ^I ,γ^J } = -2 η^IJ1_4 with {·,·} denoting the anti-commutator. The Dirac representation of the Lorentz algebra Lie(SO(1,3)) on ^4 is given by Lie(SO(1,3)) ∋ (X^I_J) ↦ -1/2 X_IJ S^IJ∈End(^4), with the generators S^IJ∈End(^4) given by S^IJ = 1/4 [γ^I,γ^J]. Thus, the spinor covariant derivative is represented with respect to the chosen tetrad by ∇_μψ = ∂_μψ + Γ_μ·ψ, with the spinor representation of the local connection form explicitly given byΓ_μ = -1/2ω_μ IJ S^IJ in terms of the local connection form ω_μ^I_J of the Levi-Civita connection with respect to the tetrad, defined by ∇_I = ω^J_I⊗_J , i.e. in components∇_μ (_I)^ν = ω_μ^J_I (_J)^ν . Due to the local connection form taking values in the Lorentz algebra, i.e. satisfying ω_μ IJ = - ω_μ JI, the spinor representation of the connection form may be explicitly expressed as Γ_μ = -1/2ω_μ IJ S^IJ = -1/2ω_μ 0iγ^0 γ^i - 1/4ω_μ ijγ^i γ^j . As said in the introduction, in the following we will describe a systematic approximation scheme for the one-particle sector of the massive Dirac theory from the point of view of an observer moving along a fixed timelike reference worldline γ, which will proceed in two conceptually independent steps. 
The first step, which is described in <ref> and implements a `weak-gravity' approximation by expanding the Dirac equation in (generalised) Fermi normal coordinates, is actually valid without restricting to the one-particle theory. Only for the second step, the `slow-velocity' post-Newtonian expansion in <ref>, we will restrict to the one-particle theory. For this, we assume the spacetime and the reference worldline γ to be (approximately) stationary, such that there is a well-defined (approximate) notion of particles in quantum field theory and we may meaningfully restrict to the one-particle sector of the theory. This sector is then effectively described by positive-frequency classical solutions of the Dirac equation, which we will approximate by the post-Newtonian expansion. § `WEAK-GRAVITY' EXPANSION IN GENERALISED FERMI NORMAL COORDINATES As the first step of our scheme, we will implement a `weak-gravity' approximation of the Dirac equation with respect to a timelike reference worldline γ and orthonormal spacelike vector fields (_i(τ)) defined along γ which are orthogonal to the tangent _0(τ) := c^-1γ̇(τ). The approximation works by expressing the Dirac equation in generalised Fermi normal coordinates with respect to γ and (_i). These coordinates are constructed as follows (compare <ref>): in a neighbourhood of γ, each point p is connected to γ by a unique spacelike geodesic. The temporal coordinate of p is the proper time parameter τ of the starting point of this geodesic, defined with respect to some fixed reference point on γ, and the spatial coordinates of p are the components x^i of the initial direction of the geodesic with respect to the basis (_i(τ)). Phrased in terms of the exponential map, this means that the coordinate functions (x^μ) = (cτ, x^i) are defined by the implicit equation p = exp(x^i(p) _i(τ(p))). These coordinates are adapted to an observer along γ who defines `spatial directions' using the basis (_i). Note that differently to classical Fermi normal coordinates <cit.> we allow for the worldline γ to be accelerated—i.e. γ need not be a geodesic—, as well as for the basis (_i) to be rotating with respect to gyroscopes—i.e. the (_i) need not be Fermi–Walker transported along γ. Generalised Fermi normal coordinates may be seen as the best analogue of inertial coordinates that exists for an arbitrarily moving observer carrying an arbitrarily rotating basis in a general curved spacetime. The acceleration a(τ) of γ is the covariant derivative of γ̇(τ) along γ, i.e. the vector field a(τ) = ∇_γ̇(τ)γ̇(τ) along γ, which is everywhere orthogonal to γ̇= c _0. Note that we take the covariant derivative with respect to the worldline's four-velocity γ̇(τ), such that the physical dimension of the components a^μ will really be that of an acceleration (given that the coordinate functions have the dimension of length). The angular velocity of the observer's spatial basis vector fields (_i) (with respect to non-rotating directions, i.e. Fermi–Walker transported ones) is another vector field along γ that is everywhere orthogonal to γ̇= c _0; we denote it by ω(τ). It is defined by (∇_γ̇_I)^μ = - (c^-2 a^μγ̇_ν - c^-2γ̇^μ a_ν + c^-1ε_ρσ^μ_νγ̇^ρω^σ) _I^ν , where both sides of the equation are evaluated along γ, and ε denotes the volume form of the spacetime metric g. The covariant derivatives of a and ω along γ will be denoted by b(τ) := ∇_γ̇(τ) a(τ), η(τ) := ∇_γ̇(τ)ω(τ). 
When working in generalised Fermi normal coordinates we will denote the timelike coordinate which has the dimension of length by s = c τ, since it is an extension of the proper length function along γ. In index notation, we will use s as the timelike coordinate index and reserve 0 for use as the timelike index for orthonormal frame components. The components of the spacetime metric g in generalised Fermi normal coordinates may be expressed as formal power series in the geodesic distance to γ according to <cit.> g_ss = - 1 - 2 c^-2 a · x - c^-4 ( a · x)^2 - R_0l0m x^l x^m + c^-2 (ω× x)^2 + ( x^3), g_si = c^-1 (ω× x)_i - 2/3 R_0lim x^l x^m + ( x^3), g_ij = δ_ij - 1/3 R_iljm x^l x^m + ( x^3). Here, in addition to the acceleration a^i(τ) of γ and the angular velocity ω^i(τ) of the spatial basis (_i), the curvature tensor R_IJKL(τ) evaluated along γ appears as well; the components are taken with respect to the orthonormal basis (_0, _i) along γ. We also have used standard `three-vector' notation for geometric operations taking place in the three-dimensional vector space Σ_τ = (_0(τ))^⊥ = span{_i(τ)}⊂ T_γ(τ)M of the observer's local `spatial directions', endowed with the Euclidean metric δ_τ := g|_Σ_τ induced by g: we write v · w := δ_ij v^i w^j, v := √(δ_ij v^i v^j), ( v × w)_i := ε_ijk v^j w^k for the scalar product, the norm, and the vector product with respect to this metric. Note that with respect to the orthonormal basis (_i), the components δ_ij of the induced metric and ε_ijk of its volume form are just given by the Kronecker delta and the totally antisymmetric three-dimensional Levi-Civita symbol, respectively. The expansion in powers of the geodesic distance to γ implements the desired approximation in terms of `weak gravity' and `weak inertial effects': we expand according to R_IJKL· x^2 ≪ 1, a/c^2· x ≪ 1, ω/c· x ≪ 1, R_IJKL;M/R_NOPQ· x≪ 1, i.e. for the expansion to be valid at a point, the geodesic distance to γ has to be small compared to the curvature radius of spacetime, the `acceleration radius' of γ, the `angular velocity radius' of the spatial reference vector fields, and the characteristic length scale on which the curvature changes. This also gives a precise analytical meaning to the formal expansion in the dimensionful parameter x: the actual dimensionless quantity in which we expand is the ratio of x to the minimum of the characteristic geometric lengths defined by the spacetime curvature, acceleration a, angular velocity ω, and rate of change of the curvature, as given in (<ref>). For the sake of brevity, in the following we will speak of terms of n^th order in the geodesic distance to γ simply as being of `order x^n', and correspondingly use the shorthand notation (x^n) := ( x^n). Our goal is to expand the Dirac equation (<ref>) systematically to order x^2. To make precise what we mean by this, first recall that for the local formulation (<ref>) of the Dirac equation to be possible, we have to choose a tetrad (_I) not only along the reference worldline γ, but also away from it. This choice of tetrad is an additional input into the approximation procedure, on top of the choice of local coordinate system. However, in our situation there is a natural choice for the tetrad: on γ, we choose it to be given by the basis (c^-1γ̇, _i) with respect to which the generalised Fermi normal coordinates are defined; away from γ, we extend the vector fields by parallel transport along spacelike geodesics. 
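Purely as an illustration of the expansion (<ref>) (this sketch is ours and not part of the derivation; all input values are placeholders in units with c = 1), the metric components in generalised Fermi normal coordinates to order x^2 can be assembled numerically from prescribed frame data a^i, ω^i and R_IJKL along γ:

import numpy as np

def fermi_metric(x, a, omega, R, c=1.0):
    """Metric in generalised Fermi normal coordinates to O(x^2).
    x     : spatial coordinates x^i, shape (3,)
    a     : acceleration a^i of gamma, shape (3,)
    omega : angular velocity omega^i of the spatial triad, shape (3,)
    R     : orthonormal-frame components R[I,J,K,L] along gamma, shape (4,4,4,4)
    Returns g[mu,nu] with coordinate order (s, x^1, x^2, x^3)."""
    ax = np.dot(a, x)
    wx = np.cross(omega, x)
    g = np.zeros((4, 4))
    # g_ss = -1 - 2 a.x/c^2 - (a.x)^2/c^4 - R_{0l0m} x^l x^m + (omega x x)^2/c^2
    g[0, 0] = (-1.0 - 2.0*ax/c**2 - (ax/c**2)**2
               - np.einsum('lm,l,m->', R[0, 1:, 0, 1:], x, x)
               + np.dot(wx, wx)/c**2)
    # g_si = (omega x x)_i/c - (2/3) R_{0lim} x^l x^m
    g[0, 1:] = wx/c - (2.0/3.0)*np.einsum('lim,l,m->i', R[0, 1:, 1:, 1:], x, x)
    g[1:, 0] = g[0, 1:]
    # g_ij = delta_ij - (1/3) R_{iljm} x^l x^m
    g[1:, 1:] = np.eye(3) - (1.0/3.0)*np.einsum('iljm,l,m->ij', R[1:, 1:, 1:, 1:], x, x)
    return g

# placeholder inputs: small acceleration and rotation along x^3 and a single
# curvature component R_{0101} (plus the components fixed by its symmetries)
a, omega = np.array([0.0, 0.0, 1e-3]), np.array([0.0, 0.0, 1e-4])
R = np.zeros((4, 4, 4, 4))
R[0, 1, 0, 1] = R[1, 0, 1, 0] = 1e-6
R[0, 1, 1, 0] = R[1, 0, 0, 1] = -1e-6
print(np.round(fermi_metric(np.array([0.1, 0.0, 0.2]), a, omega, R), 10))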
The explicit form of the tetrad components in coordinates will be computed at a later stage. With a choice of tetrad, we may rewrite the Dirac equation (<ref>) in the Schrödinger-like form ∂_τψ = H_Diracψ with the Dirac Hamiltonian H_Dirac = (g^ss)^-1γ^J (_J)^s (γ^I (_I)^i c (D_i + Γ_i) - m c^2) - c (Γ_s - q A_s), where we used that ∂_s = c^-1∂_τ and that (g^ss)^-1γ^J (_J)^s = (-γ^J (_J)^s )^-1, and where D_i = ∂_i - q A_i denotes the spatial electromagnetic covariant derivative. It is this Dirac Hamiltonian that we will expand to order x^2 in the following. Note that the partial derivative ∂_i = ∂/∂ x^i in the operator D_i effectively is of order x^-1 when acting on functions, such that in the following calculation, it is important to keep track of terms of the form x^l x^m x^n D_i, which despite their superficial appearance are in fact of order x^2. We are now going to compute all objects appearing in the Dirac Hamiltonian (<ref>) to those orders in x which are necessary to obtain the total Hamiltonian to order x^2. In order to be able to expand covariant derivatives and the local connection form to order x^2, we need to know the Christoffel symbols in our coordinate system to order x^2. Note that these cannot be obtained from the metric components as given in (<ref>): there the metric is given to order x^2, such that its derivatives can only be known to order x^1. However, extending the work in <cit.>, the Christoffel symbols to order x^2 (and the metric to order x^3) in generalised Fermi normal coordinates were calculated in <cit.>. The Christoffel symbols are given in the appendix in (<ref>) (note that some calculational errors were made in <cit.>, which we corrected in (<ref>)). We may now compute the coordinate components of our tetrad (_I). Recall that we define the tetrad by extending the vector fields (c^-1γ̇, _i) along γ into a neighbourhood of γ by parallel transport along spacelike geodesics. Since spacelike geodesics take a simple form in generalised Fermi normal coordinates, the parallel transport equation may explicitly be solved perturbatively using the Christoffel symbols (<ref>). This calculation is straightforward, but quite lengthy; it yields the tetrad components (_0)^s = 1 - c^-2 a · x + c^-4 ( a · x)^2 - 1/2 R_0l0m x^l x^m - 1/6 R_0l0m;n x^l x^m x^n + 5/6 c^-2 ( a · x) R_0l0m x^l x^m - c^-6 ( a · x)^3 + (x^4), (_0)^i = - c^-1 (ω× x)^i + c^-3 ( a · x) (ω× x)^i + 1/2R_0l^i_m x^l x^m + 1/6R_0l^i_m;n x^l x^m x^n + 1/2 c^-1 (ω× x)^i R_0l0m x^l x^m - c^-5 (ω× x)^i ( a · x)^2 - 1/3 c^-2 ( a · x) R_0l^i_m x^l x^m + (x^4), (_i)^s = - 1/6 R_0lim x^l x^m - 1/12 R_0lim;n x^l x^m x^n + 1/6 c^-2 ( a · x) R_0lim x^l x^m + (x^4), (_i)^j = δ^j_i + 1/6R^j_lim x^l x^m + 1/12R^j_lim;n x^l x^m x^n + 1/6 c^-1 (ω× x)^j R_0lim x^l x^m + (x^4). From this, we may compute the components of the dual frame as (^0)_s = 1 + c^-2 a · x + 1/2 R_0l0m x^l x^m + (x^3), (^0)_i = 1/6 R_0lim x^l x^m + (x^3), (^i)_s = c^-1 (ω× x)^i - 1/2R^i_l0m x^l x^m + (x^3), (^i)_j = δ^i_j - 1/6R^i_ljm x^l x^m + (x^3). Note that we have computed the dual frame components only to order x^2 (instead of going to order x^3 as would have been possible from (<ref>)), since this suffices for our goal, namely the expansion of the Dirac Hamiltonian (<ref>) to order x^2. Now we have the required information in order to calculate the local connection form ω_μ^I_J according to (<ref>) to order x^2, which is given in the appendix in (<ref>). From this, we can directly obtain its spinor representation Γ_μ according to (<ref>). 
We will also need the component g^ss of the inverse metric to order x^3. Using the frame (<ref>), we may easily compute this according to g^ss = -((_0)^s)^2 + δ^ij (_i)^s (_j)^s, yielding g^ss = -1 + 2 c^-2 a · x - 3 c^-4 ( a · x)^2 + 4 c^-6 ( a· x)^3 + R_0l0m x^l x^m + 1/3 R_0l0m;n x^l x^m x^n - 8/3 c^-2 ( a · x) R_0l0m x^l x^m + (x^4). We thus have obtained all ingredients to express the Dirac equation (<ref>), (<ref>) in generalised Fermi normal coordinates and our chosen tetrad to order x^2. Inserting g^ss, the tetrad components, and the spinor representation of the local connection form as computed above into the Dirac Hamiltonian (<ref>), by a tedious but straightforward calculation, employing standard identities for products of three gamma matrices, we obtain the explicit form of the Dirac Hamiltonian as H_Dirac = γ^0 {mc^2 + m a · x + mc^2/2 R_0l0m x^l x^m } - γ^i {mc^2/6 R_0lim x^l x^m } + 1{- q A_τ + (ω× x)^i D_i - c/2R_0l^i_m x^l x^m D_i + c^-1/4 ( a · x) R_0l x^l + c/12 R_0l;m x^l x^m - c/6R_0l^i_m;n x^l x^m x^n D_i - c^-1/6 ( a · x) R_0l^i_m x^l x^m D_i } - γ^0 γ^j { c D_j + c^-1/2 a_j + c^-1( a · x) D_j + c/4 (R_0j0l - R_jl) x^l + c/2 R_0l0m x^l x^m D_j + c/6R^i_ljm x^l x^m D_i + c/12 (R_0j0l;m - 2 R_jl;m) x^l x^m - c^-1/4 ( a · x) R_jl x^l + c/6 R_0l0m;n x^l x^m x^n D_j + c/12R^i_ljm;n x^l x^m x^n D_i + c^-1/6 ( a · x) R_0l0m x^l x^m D_j + c^-1/6 ( a · x) R^i_ljm x^l x^m D_i } + γ^i γ^j {-/4ε_ijkω^k + c/4 R_0ijl x^l + c/6 R_0lim x^l x^m D_j + c/12 R_0ijl;m x^l x^m + c/12 R_0lim;n x^l x^m x^n D_j + c^-1/6 ( a · x) R_0lim x^l x^m D_j } + (x^3). As already stated in the introduction, this is our first main result. Here A_τ = c A_s is the electric scalar potential with respect to our coordinates. Recall that the partial derivative operator ∂_i appearing in D_i is effectively of order x^-1 when acting on functions, such we need to keep terms of the form x^l x^m x^n D_i (since they are of order x^2). The terms in the Hamiltonian are ordered, in each pair of curly brackets, by order in spatial geodesic distance x to the worldline, with those terms of a given order that include a D_i appearing after those without.[ Here we correct some omissions that occurred in the master's thesis <cit.> concerning terms of order x^2. Consequently our Dirac Hamiltonian (<ref>) differs from that in <cit.>.] Note that setting ω = 0 and ignoring quadratic terms in a^i and R_IJKL as well as terms involving covariant derivatives of the curvature tensor, our Dirac Hamiltonian (<ref>) reproduces the Dirac Hamiltonian from <cit.>. § POST-NEWTONIAN EXPANSION As the second step of our approximation scheme, we will now perform a post-Newtonian `slow-velocity' expansion of the Dirac equation with respect to our reference worldline γ. In order to perform the post-Newtonian expansion systematically, we are going to implement it as a formal power series expansion[More precisely, since for some objects terms of negative order in c^-1 will appear, it is an expansion as formal Laurent series. We will however continue to use the term `power series', since most of our series will only have terms of non-negative order in c^-1.] in the parameter c^-1, where c is the velocity of light.[Of course, analytically speaking, a `Taylor expansion' in a dimensionful parameter like c^-1 does not make sense (even more so since c is a constant of nature); only for dimensionless parameters can a meaningful `small-parameter approximation' be made. 
In physical realisations of the limit from (locally) Poincaré- to Galilei-symmetric theories, this means that the corresponding small parameter has to be chosen as, e.g., the ratio of some typical velocity of the system under consideration to the speed of light. In the following, however, we will ignore such issues and simply expand in c^-1 as a formal `deformation' parameter.] In order to obtain a consistent post-Newtonian expansion[From a purely formal perspective, not assigning those c^-1-orders to the curvature components would lead to the expanded positive-frequency Dirac equation that we consider later not having perturbative solutions. However, as already stated in the introduction, this assumption may also be viewed from a physical angle: in order for the acceleration of a system relative to γ, as given by the geodesic deviation equation, to stay bounded in the formal limit c →∞, we need to assume that R_0i0j = (c^-2).], we need to treat the orthonormal-basis components of the curvature tensor and its covariant derivative as being of order c^-2, i.e. R_IJKL = (c^-2), R_IJKL;M = (c^-2). Since we have already introduced a formal power series expansion in x (i.e. in spacelike geodesic distance to our reference worldline γ), in the following we will encounter expressions that are `doubly expanded' as power series in powers of both c^-1 and x.[Formally, they will be valued in the formal Laurent/power series ring ((c^-1,x]].] When writing down such expansions, we will order their terms as follows: first, we group and sort the terms by order of c^-1, and second, the terms comprising such a coefficient of a power c^-n will be sorted by order of x. We will also use the notation (c^-nx^m) for terms that are of order at least n in the c^-1-expansion and order at least m in the x-expansion—e.g., we have c^-2x^4 + c^-3x^3 = (c^-2x^3). For example, the expansion of some quantity X might look like X = A + B_i x^i + C_ij x^i x^j + c^-1(E + F_i x^i ) + (c^-1x^2) (which would in particular imply that X has vanishing coefficients for all powers c^-nx with n ≥ 2). Considering the Dirac Hamiltonian H_Dirac that appears in the Dirac equation ∂_τψ = H_Diracψ in generalised Fermi normal coordinates, as computed in (<ref>), we may of course read off its expansion as a power series in c^-1 directly from (<ref>)—we just need to keep in mind that we treat the curvature tensor as being of order c^-2 according to (<ref>). However, this expansion of the Dirac Hamiltonian in powers of c^-1 is of no direct physical relevance for perturbation theory in the parameter c^-1: from (<ref>), we directly obtain H_Dirac = γ^0 m c^2 + (c^1), such that when expanding the Dirac spinor field as a formal power series ψ = ∑_k=0^∞ c^-kψ^(k), the Dirac equation tells us at the lowest occurring order in c^-1, namely c^2, that 0 = γ^0 m ψ^(0), i.e. ψ^(0) = 0. At the next order c^1, it then implies ψ^(1) = 0, etc.—meaning that the Dirac equation has no non-trivial perturbative solutions of this form. Hence, in order to obtain a meaningful `slow-velocity' approximation to the Dirac theory, we need to make a different perturbative ansatz for the spinor field. This will be a WKB-like `positive frequency' ansatz. Conceptionally, we now restrict from the full Dirac quantum field theory to its (effective) one-particle sector, which is a well-defined notion if we assume the spacetime to be stationary. 
The one-particle sector is effectively described by classical positive-frequency solutions of the Dirac equation, where `positive frequency' is defined with respect to the stationarity Killing field <cit.>.[Often, this is called consideration of the `first-quantised theory'—a historically grown name that sometimes unfortunately tends to create conceptual confusion. For details and caveats of how and why the one-particle sector of the quantum field theory is described by the positive-frequency classical theory, we refer to the extensive discussion in the monograph by Wald <cit.>.] It is those positive-frequency solutions whose field equation of motion we will expand in the following in powers of c^-1. A similar post-Newtonian expansion scheme for the Klein–Gordon equation may be found in <cit.>; a more general discussion of such schemes is given in <cit.>. Note that in any realistic situation, in which the theory contains interactions, this description can only be an approximation: the energy of all processes taking place has to be small enough such as to stay below the threshold of pair production, such that the system does not leave the one-particle sector. Therefore, such a post-Newtonian expansion always has to be considered a low-energy approximation. In the following, we will define positive frequencies with respect to the coordinate time τ of the generalised Fermi normal coordinates introduced in <ref>; therefore, for the relationship between positive-frequency classical solutions and the one-particle sector of the quantum theory to (approximately) hold, we need the timelike vector field ∂/∂τ to be (approximately) Killing. The geometric meaning of this is briefly discussed in <ref>. Note, however, that the definition of positive-frequency solutions with respect to some `time translation' vector field and the post-Newtonian expansion of such solutions of course also works for time translation vector fields which are not Killing, i.e. in a non-stationary situation, in which it still allows to view the full `relativistic' positive-frequency Dirac equation as a formal deformation of its (locally) Galilei-symmetric Newtonian limit. In particular, as long as we are in an approximately stationary situation and the vector field is approximately Killing, the expansion will still give an approximate description of the one-particle sector of quantum field theory. The WKB-like positive frequency ansatz that we will make for the Dirac field will lead, due to the lowest c^-1 orders of the Dirac equation, to a split of the Dirac spinor into two two-component spinor fields with coupled equations of motion. One of these components can then, order by order in c^-1, be eliminated in terms of the other, which will in the end lead to a Pauli equation for the remaining two-spinor field, with gravitational and inertial `corrections'. We are going to carry out this expansion to order c^-2, and in doing so, we want to keep the expansion in spacelike geodesic distance to the reference worldline γ such that the resulting Pauli Hamiltonian contains terms to order x^2, as it was the case for the Dirac Hamiltonian in (<ref>). However, in the decoupling/elimination process described above, the to-be-eliminated component of the Dirac spinor field will be spatially differentiated once. 
Therefore, to achieve our goal of a consistent expansion of the final Hamiltonian to order x^2, we actually need to know those terms in the Dirac Hamiltonian which are of order up to c^-1 in the c^-1-expansion not only to order x^2, but to order x^3. Employing the methods from <cit.>, one can calculate the order-x^3 terms in the Christoffel symbols in generalised Fermi normal coordinates of c^-1-expansion order up to c^-2 with a comparably small amount of work; and while doing so, one can actually convince oneself that all x-dependent terms in the Christoffel symbols are actually of order at least c^-2. The resulting Christoffel symbols, to order x^3 in the c^-2 terms and to order x^2 in the higher-c^-1-order ones, are given in the appendix in (<ref>). Using these further expanded Christoffel symbols, we can go through the further steps of the calculation of the Dirac Hamiltonian from <ref>, thus computing the Dirac Hamiltonian to order x^3 in the c^-1 terms and to order x^2 in the higher-c^-1-order ones. The expressions for the frame, the connection form and the inverse metric component g^ss arising as intermediate results in this process are given in <ref>; the resulting Dirac Hamiltonian is given in (<ref>). This Dirac Hamiltonian will give rise, when carrying out our systematic expansion of the positive-frequency Dirac equation in powers of c^-1, to a consistently derived Pauli Hamiltonian to order x^2 and c^-2. As the first step for implementing the expansion, we make for the Dirac field the WKB-like ansatz[Note that in the master's thesis <cit.> on which the present article is based, a different notational convention was used in which ψ̃^(k) includes the factor of c^-k.] ψ = ^ c^2 Sψ̃ with S = (c^0), ψ̃ = ∑_k=0^∞ c^-kψ̃^(k). This ansatz we then insert into the Dirac equation ∂_τψ = H_Diracψ, with the Dirac Hamiltonian given by (<ref>). The resulting equation we multiply with ^- c^2 S and compare coefficients of different powers of c^-1. At the lowest ocurring order c^3, we obtain the equation 0 = γ^0 γ^i (∂_i S) ψ̃^(0), which in order to allow for non-trivial solutions ψ̃ enforces ∂_i S = 0, i.e. the function S depends only on time. At the next order c^2, we then obtain the equation -(∂_τ S) ψ̃^(0) = γ^0 m ψ̃^(0). Since γ^0 has eigenvalues ±1, for non-trivial solutions ψ̃ of the Dirac equation to exist we need ∂_τ S = ± m. Since we are interested in positive-frequency solutions of the Dirac equation, we choose S = -mτ, discarding the constant of integration (which would lead to an irrelevant global phase). The preceding equation then tells us that the component of ψ̃^(0) which lies in the -1 eigenspace of γ^0 has to vanish. In the following, we will work in the Dirac representation for the gamma matrices, in which they are given by γ^0 = [ 1 0; 0 -1 ], γ^i = [ 0 σ^i; -σ^i 0 ] in terms of the Pauli matrices σ^i, such that Dirac spinors may be decomposed as ψ = [ ψ_A; ψ_B ] in terms of their components ψ_A,ψ_B lying in the +1 and -1 eigenspace of γ^0, respectively. (Note that ψ_A,B are represented by functions taking values in ^2.) Summing up the above, our ansatz for the Dirac field now takes the form ψ = ^- mc^2 τ[ ψ̃_A; ψ̃_B ], ψ̃_A,B = ∑_k=0^∞ c^-kψ̃_A,B^(k) , and we know that ψ̃_B^(0) = 0. Inserting this into the Dirac equation and multiplying with ^ mc^2 τ, we obtain two coupled equations for ψ̃_A,B, which are given in the appendix in (<ref>). 
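The algebraic input used in this decomposition is just the Dirac representation quoted above; the following short self-check (ours) verifies the Clifford relation {γ^I, γ^J} = -2η^IJ 1_4 in the mostly-plus signature used here, and the ±1 eigenspace split of γ^0 underlying the A/B decomposition:

import numpy as np

# Pauli matrices and Dirac-representation gamma matrices
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

gamma0 = np.block([[I2, Z2], [Z2, -I2]])
gammas = [gamma0] + [np.block([[Z2, s], [-s, Z2]]) for s in sigma]

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # signature (-,+,+,+) as in the text

# Clifford relation {gamma^I, gamma^J} = -2 eta^{IJ} 1_4
ok = all(np.allclose(gammas[I] @ gammas[J] + gammas[J] @ gammas[I],
                     -2.0*eta[I, J]*np.eye(4))
         for I in range(4) for J in range(4))
print("Clifford relation satisfied:", ok)

# gamma^0 has eigenvalues +1 and -1; psi_A and psi_B live in the two eigenspaces
print("gamma^0 eigenvalues:", np.round(np.linalg.eigvalsh(gamma0), 12))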
Now comparing in these equations the coefficients of different orders of c^-1, we may order by order read off equations for the ψ̃_A,B^(k). These allow to eliminate ψ̃_B in favour of ψ̃_A, for which we will obtain a post-Newtonian Pauli equation. More explicitly, this proceeds as follows. (<ref>) at order c^1 yields 0 = -σ^j D_j ψ̃_B^(0) , which is trivially satisfied since ψ̃_B^(0) = 0. (<ref>) at order c^1 gives 2m ψ̃_B^(1) = -σ^j D_j ψ̃_A^(0) and thus allows us to express ψ̃_B^(1) in terms of ψ̃_A^(0). We can carry on to the next order: (<ref>) at order c^0 yields {∂_τ + q A_τ - m a · x - mc^2/2 R_0l0mx^l x^m - (ω× x)^i D_i + 1/2σ·ω + (x^3) }ψ̃_A^(0) = - σ^j D_j ψ̃_B^(1) . Using (<ref>), this may be rewritten as a Pauli equation ∂_t ψ̃_A^(0) = H^(0)ψ̃_A^(0) for ψ̃_A^(0), with lowest-order Hamiltonian H^(0) = -1/2m (σ· D)^2 + m a · x + mc^2/2 R_0l0m x^l x^m + (ω× x)^i D_i - 1/2σ·ω - q A_τ + (x^3) . Next, (<ref>) at order c^0 allows us to express ψ̃_B^(2) in terms of ψ̃_A^(1) and ψ̃_A^(0): 2m ψ̃_B^(2) = -σ^j D_j ψ̃_A^(1) + (mc^2/6σ^i R_0lim x^l x^m + m c^2/12σ^i R_0lim;n x^l x^m x^n + (x^4) ) ψ̃_A^(0) Note that since ψ̃_B^(2) will be differentiated once in the following calculation, here we need to include the term of order x^3 for later consistency, i.e. in order to be able to obtain the final Hamiltonian to order x^2. This is why we needed to know the low-c^-1-order terms of the Dirac Hamiltonian to order x^3, and not just order x^2. The same will happen at several later stages of the computation. (<ref>) at order c^-1 will then give an equation for ψ̃_A^(1), which may be rewritten in the Pauli-like form ∂_t ψ̃_A^(1) = H^(0)ψ̃_A^(1) + H^(1)ψ̃_A^(0) . Due to the nature of the expansion, the lowest-order operator H^(0) read off here will be the same as the one from the previous order. Detailed expressions may be found in <ref>. Continuing, (<ref>) at order c^-1 allows to express ψ̃_B^(3) in terms of ψ̃_A^(2), ψ̃_A^(1), ψ̃_A^(0), and ψ̃_B^(1), which in turn may be expressed in terms of ψ̃_A^(0) by (<ref>). (<ref>) at order c^-2 can then be rewritten as the Pauli-like equation ∂_t ψ̃_A^(2) = H^(0)ψ̃_A^(2) + H^(1)ψ̃_A^(1) + H^(2)ψ̃_A^(0) . Again, we know that H^(0) and H^(1) are the same as determined before; the operator H^(2) will contain new information. Detailed expressions may again be found in <ref>. Note that in the process of expressing ψ̃_B^(3) in terms of the ψ̃_A, one term arises for which we need to re-use the Pauli equation (<ref>) for ψ̃_A^(0) in order to fully eliminate the time derivative in the resulting expression. The three Pauli-like equations (<ref>), (<ref>) and (<ref>) now may be combined into a Pauli equation ∂_t ψ̃_A = H_Pauliψ̃_A with Hamiltonian H_Pauli = H^(0) + c^-1 H^(1) + c^-2 H^(2) + (c^-3). 
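Before quoting the explicit result, a quick plausibility check (not part of the derivation itself) on where the relativistic kinetic correction in the Hamiltonian below comes from: expanding the free relativistic energy in powers of 1/c produces, at order c^-2, exactly the -p^4/(8m^3c^2) structure of the -(σ·D)^4/(8m^3c^2) term.

```python
# Sketch (sympy): expanding sqrt(m^2 c^4 + p^2 c^2) - m c^2 in 1/c gives the Newtonian
# kinetic energy plus the first special-relativistic correction -p^4/(8 m^3 c^2),
# the flat-space counterpart of the -(sigma.D)^4/(8 m^3 c^2) term in H_Pauli below.
import sympy as sp

m, p, c, u = sp.symbols('m p c u', positive=True)
# write E_kin = m c^2 (sqrt(1 + u^2) - 1) with u = p/(m c) and expand in u
inner = sp.series(sp.sqrt(1 + u**2) - 1, u, 0, 6).removeO()
print(sp.expand(m*c**2 * inner.subs(u, p/(m*c))))
# -> p**2/(2*m) - p**4/(8*c**2*m**3)
```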
Explicitly, the post-Newtonian Pauli Hamiltonian reads H_Pauli = {-1/2m - 1/2mc^2 a · x - 1/4m R_0l0m x^l x^m - 1/8m R_0l0m;n x^l x^m x^n - 1/24m R_0k0l;mn x^k x^l x^m x^n }(σ· D)^2- 1/8m^3c^2 (σ· D)^4 + {-1/6mR^i_l^j_m x^l x^m - 1/12mR^i_l^j_m;n x^l x^m x^n - 1/40mR^i_k^j_l;mn x^k x^l x^mx^n } D_i D_j + {(ω× x)^j - 2 c/3R_0l^j_m x^l x^m - c/4R_0l^j_m;n x^l x^m x^n - 1/4mc^2 a^j - /4mc^2 (σ× a)^j + 1/12m (4R^j_l + R_0^j_0l) x^l + /8mσ^k (-2 ε^ij_k R_0l0i + ε^im_kR^j_lim) x^l + 1/24m(5 R^j_l;m - 3 R_0^j_0l;m - R_0l0m^;j - R^j_l^i_m;i - ε^ij_kσ^k (2R_0i0l;m + R_0l0m;i) + 2ε^in_kσ^k R^j_lin;m) x^l x^m + 1/120m(9 R^j_l;mn - 6 R_0^j_0l;mn - 5 R_0l0m^;j_n - 3 R^j_l^i_m;in) x^l x^m x^n + /96mσ^k ( -4ε^ij_k (R_0i0l;mn + R_0l0m;ni) + 3 ε^ir_kR^j_lir;mn) x^l x^m x^n } D_j - q A_τ + m a · x + mc^2/2 R_0l0m x^l x^m - 1/2σ·ω + c/3 R_0l x^l - c/4ε^ij_kσ^k R_0lij x^l + c/24(5 R_0l;m - R_0l^i_m;i) x^l x^m - c/8ε^ij_kσ^k R_0lij;m x^l x^m + 1/8m R + 1/4m R_00 + 1/16m (R_;l + 2 R^i_l;i) x^l + /24mε^ij_kσ^k (R_0i0l;j - 2 R_il;j) x^l + 1/48m(R_;lm + 4 R^i_l;im + ε^ij_kσ^k (R_0i0l;jm - 3 R_il;jm)) x^l x^m - q/4m^2c^2σ^i σ^j D_i E_j - q/12m (R_lm + R_0l0m) x^l x^m σ· B + q/12mσ^j R_iljm x^l x^m B^i + q/4m^2c^2σ· (ω× B) + q/2m^2c^2ω· B + q/4m^2c^2 (ω_j x^i - ω^i x_j) D_i B^j + q/4m^2c^2 (σ· (ω× x)) B · D - q/4m^2c^2σ^j (ω× x) · D B_j + (c^-3) + (x^3), where E_i = ∂_i A_τ - ∂_τ A_i is the electric field and B^i = ε^ijk∂_j A_k is the magnetic field (note that up to higher-order corrections, these are indeed the electromagnetic field components in an orthonormal basis). Note that in the expressions D_i E_j, D_i B^j, and D B_j, the D_i acts on the product of the electric/magnetic field and the ψ̃_A on which the Hamiltonian acts. The post-Newtonian Pauli Hamiltonian (<ref>) is the second main result of this paper. The terms in the Hamiltonian are ordered as follows: the terms involving electromagnetic fields come in the end, the terms without in the beginning. The latter are grouped by the form of the spatial derivative operators (built from D_i) appearing in them. In each of these groups, the terms are ordered as explained before (<ref>): first, they are sorted by order of c^-1, and for each c^-1-order, the terms are sorted by order of x. The lowest-order terms in the Hamiltonian, marked in green in (<ref>), have clear interpretations: we have the usual `Newtonian' kinetic-energy term -1/2m (σ· D)^2 for a Pauli particle minimally coupled to electromagnetism, the coupling -q A_τ to the electric scalar potential, the `Newtonian' gravitational coupling m( a · x + c^2/2 R_0l0m x^l x^m) to a potential including an acceleration and a tidal force term, and the spin–rotation coupling -1/2σ·ω. Note also that the Hamiltonian contains the special-relativistic correction to kinetic energy, -1/8m^3c^2 (σ· D)^4. The other terms are higher-order inertial and gravitational corrections. Note that the scalar product of our quantum theory, with respect to which the Hamiltonian (<ref>) needs to be interpreted, is not simply the standard L^2 scalar product of ℂ^2-valued Pauli wavefunctions ⟨ϕ̃_A, ψ̃_A⟩_L^2 := ∫^3x ϕ̃_A( x)^T ψ̃_A( x). Rather, the correct scalar product is that coming from the original Dirac theory: we start with the original Dirac scalar product ⟨ϕ, ψ⟩_Dirac := ∫_Σvol_Σ n_μ ϕ^T γ^I (_I)^0 γ^J (_J)^μψ and compute its expansion in x and c^-1 that arises from inserting our post-Newtonian ansatz (<ref>) for the Dirac field and expressing ϕ̃_B and ψ̃_B in terms of ϕ̃_A and ψ̃_A. 
With respect to this scalar product, the Hamiltonian is automatically Hermitian, since the Dirac scalar product in the full theory is conserved under time evolution. Our post-Newtonian quantum theory also comes with a natural position operator, which in this representation of the Hilbert space is given by multiplication of `wave functions' by coordinate position x^a. This operator arises as the post-Newtonian equivalent of that operator in the one-particle sector of the full Dirac theory which multiplies the Dirac fields by coordinate position x^a. For the case of the reference worldline γ being an inertial worldline in Minkowski spacetime and a non-rotating frame, that operator is, in fact, the Newton–Wigner position operator <cit.>. §.§ Comparison to previous results by others We now want to compare our post-Newtonian Hamiltonian (<ref>) to that obtained in <cit.>. In order to do so we proceed as follows: we first recall the hypotheses on which the expansion in <cit.> is based. These we then use to further approximate our result in accordance with these hypotheses. Then, finally, we compare the result so obtained with that of <cit.>. We shall find a difference which we interpret as an inconsistency in <cit.>. Now, the approximation hypotheses in <cit.> that go beyond those imposed by us fall into three classes: First, concerning `weak gravity', they assume ω = 0 (no frame rotation), they neglect quadratic terms in a^i and R_IJKL, and, finally, they also do not consider terms involving covariant derivatives of the curvature tensor. Second, as regards their `non-relativistic approximation', they neglect terms of quadratic or higher order in m^-1. Third, they trace over the spin degrees of freedom, i.e.compute 1/2tr(H_Pauli), in order to obtain what in <cit.> was called `the Hamiltonian […] compatible with the description of a Schrödinger wavefunction'[This method of tracing over the spin degrees of freedom in order to obtain a Hamiltonian acting on single-component (complex-number-valued) wavefunctions is used in <cit.> without further justification beyond the goal of acting on ℂ-valued functions. We do not believe this method to be of general physical validity for the following reason: the unitary time evolution described by the full post-Newtonian Pauli Hamiltonian contains interactions between the position and spin degrees of freedom. Therefore, the effective time evolution which we would obtain by ignoring the spin, i.e. by taking the partial trace of the total density matrix over the spin degrees of freedom, would no longer be unitary. Consequently, it cannot be described by a Schrödinger equation with respect to some Hamiltonian. Of course, this general argument does not exclude that, depending on the context, an approximately unitary time evolution for some specific initial states does indeed exist, but such an argument is not given in <cit.>. Nevertheless, for the sake of comparison to <cit.>, we still apply the tracing procedure which we consider physically unwarranted.]. In units with c = 1, as used in <cit.>, the result of applying this procedure to our Hamiltonian reads 1/2tr(H_Pauli) = { -1/2m -1/2m a · x - 1/4m R_0l0m x^l x^m } D^2 + {- 2/3R^j_l0m x^l x^m - 1/4m a^j + 1/3mR^j_l x^l + 1/12mR_0l0^j x^l } D_j - q A_τ + m a · x + m/2 R_0l0m x^l x^m + /3 R_0l x^l + 1/8m R + 1/4m R_00 - 1/6mR^i_l^j_m x^l x^m D_j D_i . 
This is different from the resulting Hamiltonian ℋ from <cit.>, with the difference reading 1/2tr(H_Pauli) - ℋ = {1/4m a · x + 1/8m R_0l0m x^l x^m } D^2 + {1/2m a^j + 1/2mR_0l0^j x^l } D_j + 1/4m R_00 . This difference arises precisely from that term in the computation of our second-order Hamiltonian H^(2) for which we had to re-use the lowest-order Pauli equation for ψ̃_A^(0): in the final Pauli Hamiltonian, this term amounts to a contribution of - 1/4m^2c^2 (-σ· D) { q σ· E + (-σ· D) (H^(0) + q A_τ) }; due to H^(0) containing terms proportional to m, this expression contains terms proportional to m^-1, which yield exactly the difference term (<ref>). Closely examining the calculation of <cit.>, one can exactly pinpoint the step of this calculation at which the above term has been neglected: in appendix C of <cit.>, going from equation (C3) to (C4), an inverse operator of the form 1/2m(1 + ∂_T + q A_0/2m + (terms linear in a^i and R))^-1 is evaluated via a perturbative expansion (in the notation of <cit.>, ∂_T is the `non-relativistic energy' operator, i.e. the total energy of the Dirac solution minus the rest energy). The authors of <cit.> argue that when expanding (with respect to small quotients of involved energies), `the rest mass of the system tends to be much larger than any of the terms that show up in the expansion', such that `in a power expansion of the inverse operator in equation (C3), it makes sense to neglect terms that will contribute with order m^-2'. Following this argument, the term involving ∂_T/2m is neglected. However, by following the ensuing calculation one can check that if it were not neglected at this point, this term would in the end lead to a contribution to the final Pauli Hamiltonian of the form -1/4m^2(-σ· D)^2 H^(0) + (m^-2) in our notation, which to the order of approximation used in <cit.> is precisely the term noted above in (<ref>). We thus see that the ∂_T/2m term ought not to be neglected in going from (C3) to (C4) in <cit.>, since in the end it leads to terms that are of the same order as the other correction terms. A more direct formulation of the argument against neglecting this term is to note that ∂_T acting on ψ_A (in the notation of <cit.>) induces terms proportional to m, such that -∂_T/(4m^2) is not actually of order m^-2, but of order m^-1. One may also formulate our argument against the neglection without referring to expanding in m^-1 at all, speaking only about quotients of energies instead, in the spirit of <cit.>: if one were to neglect the term -∂_T/(4m^2) = -1/2m·`non-relativistic energy'/2(rest energy) in going from (C3) to (C4) in <cit.>, then one would as well have to neglect the terms -1/2m(1/2 a_j x^j + 1/4 R_k0m0 x^k x^m) = -1/2m·m(a_j x^j + R_k0m0 x^k x^m/2)/2m = -1/2m·corrections in `non-relativistic energy'/2(rest energy). These last terms, however, clearly have to be kept in the calculation since they contribute at a relevant order, and indeed are kept in <cit.>. Thus, we come to the conclusion that the difference between the result of <cit.> and our result when truncated to linear approximation order is due to an undue neglection in <cit.>, which without further justification seems to render the approximation used in <cit.> inconsistent. In our opinion, this exemplifies that a mathematically clear systematic approximation scheme with spelled-out assumptions—such as ours, based on (formal) power series expansions in deformation parameters—reduces possibilities for conceptual errors in approximative calculations. 
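The order-counting point can be made with a deliberately simple toy expression (the symbols below are placeholders of our own, not the operators of the cited work): if the quantity inside the inverse operator contains a piece proportional to m, that piece survives at order m^-1 after expansion and therefore cannot be discarded as being of "order m^-2".

```python
# Toy bookkeeping (sympy): model the 'non-relativistic energy' as m*a + b with a, b of
# order one. Expanding (1/2m) * (1 + (m*a + b)/(2m))^(-1) in 1/m shows that the
# a-dependent piece already enters at order 1/m, so dropping it is inconsistent with
# keeping the other 1/m corrections.
import sympy as sp

m, a, b, x = sp.symbols('m a b x', positive=True)   # x plays the role of 1/m
expr = 1/(2*m) / (1 + (m*a + b)/(2*m))
print(sp.series(expr.subs(m, 1/x), x, 0, 3))
# leading behaviour: x/(a + 2) - b*x**2/(a + 2)**2 + O(x**3), i.e. the 1/m term depends on a
```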
§ CONCLUSION Deducing the impact of classical gravitational fields (in the sense of General Relativity) on the dynamical evolution of quantum systems is a non-trivial task of rapidly increasing theoretical interest, given the acceleration that we currently witness in experimental areas like g-factor measurements <cit.>, atom interferometry <cit.>, and metrology. The relatively simple case of a single spin-half particle in an external gravitational field that we dealt with here provides a good example of the nature and degree of non-triviality immediately encountered. Given the many much further reaching claims that emerge from various `approaches' to a theory of quantum gravity proper, this may be read as a call for some restraint. On the other hand, just listing longer and longer strings of corrections to Hamiltonians will in the end also lead us nowhere without a consistent interpretational scheme that eventually allows us to communicate the physical significance of each term to our experimental colleagues. In this respect we tried hard to consistently stay within a well-defined scheme, so as to produce each term of a given, well-defined order once and only once. In that respect we also wish to refer to our discussion in <cit.>. Closest to our approach are the papers that we already discussed in the introduction. We claim to have improved on them concerning not only the order of approximation but also the systematics. We showed that even within the larger (and hence more restricting) set of approximation hypotheses assumed in the most recent of these papers <cit.>, their list of terms for the final Hamiltonian is not complete. Ours, we believe, is. Finally, we wish to mention a characteristic difficulty concerning the interpretation of interaction terms in Hamiltonians in the context of GR. It has to do with the changing interpretation of coordinates once the Hamiltonian refers to different metrics. More precisely, consider two Hamiltonian functions, one of which takes into account the interaction with the gravitational field to a higher degree than the other; then, strictly speaking, it is not permissible to address the additional terms as the sole expression of the higher-order interaction, the reason being that together with the higher degree of approximation to the metric, the metric meaning of the coordinates has changed as well. Again we refer to <cit.> for a more extensive discussion, also providing a typical example. § ACKNOWLEDGEMENTS This work was supported by the Deutsche Forschungsgemeinschaft via the Collaborative Research Centre 1227 `DQ-mat'—project number 274200144, project A05. § THE CONNECTION IN GENERALISED FERMI NORMAL COORDINATES The Christoffel symbols in generalised Fermi normal coordinates were calculated to second order in the geodesic distance to the reference worldline in <cit.>. Note that in this reference, some calculational errors were made, which we have corrected in the following and marked in red.
The Christoffel symbols are given by Γ^s_ss = c^-3 ( b · x + 2 a · (ω× x)) + 1/2 R_0l0m;0 x^l x^m + 1/3 c^-2 a^i R_0lim x^l x^m - c^-5 ( b · x + 2 a · (ω× x)) ( a · x) + 2 c^-1 R_0i0j (ω× x)^i x^j + (x^3), Γ^s_si = c^-2 a_i - c^-4 a_i ( a · x) + R_0i0j x^j +1/6 (R_0l0m;i + 2 R_0i0l;m) x^l x^m - 2/3 c^-2 ( a · x) R_0i0j x^j - 1/3 c^-2 a_i R_0l0m x^l x^m + c^-6 a_i ( a · x)^2 - 1/3 c^-1 (ω× x)^k (R_0ilk + R_0kli) x^l + (x^3), Γ^s_ij = 1/3{2 R_0(ij)k + 1/4 (5 R_0(ij)k;l - R_0kl(i;j)) x^l - 2 c^-2 ( a · x) R_0(ij)k} x^k + (x^3), Γ^i_ss = c^-2 a^i + R_0^i_0j x^j + c^-2 (η× x)^i + c^-4 ( a · x) a^i + c^-2 (ω× (ω× x))^i - 1/2R_0l0m^;i x^l x^m + R_0^i_0l;m x^l x^m + 2 c^-2 ( a · x) R_0^i_0j x^j - 1/3 c^-2 a^j R^i_ljm x^l x^m - c^-4 (ω× x)^i ( b · x + 2 a · (ω× x)) - 2 c^-1 (ω× x)^k R_0j^i_k x^j + (x^3), [0] Γ^i_sj = - c^-1ε^i_jkω^k - R_0k^i_j x^k - c^-3 (ω× x)^i a_j + {+1/6R_0j^i_l;m - 1/2R_0l^i_j;m-1/6R_0l^i_m;j} x^l x^m - 1/3 c^-2 ( a · x) (R_0k^i_j + R_0^i_kj) x^k + 1/3 c^-2 a_j R_0l^i_m x^l x^m - c^-1 (ω× x)^i R_0j0k x^k - 1/3 c^-1 (ω× x)^l (R_lk^i_j + R_l^i_kj) x^k + c^-5 a_j ( a · x)(ω× x)^i + (x^3), Γ^i_jk = - 1/3{2 R^i_(jk)l + 1/4 (5 R^i_(jk)l;m - R^i_lm(j;k)) x^m + 2 c^-1 (ω× x)^i R_0(jk)l} x^l + (x^3). The local connection form with respect to the frame (<ref>) is given by ω_μ^0_0 = 0, ω_s^0_i = c^-2 a_i + R_0i0l x^l + 1/2 c^-2 ( a · x) R_0i0l x^l + 1/2 c^-1 (ω× x)^k R_0ikl x^l + 1/2 R_0i0l;m x^l x^m + (x^3), ω_i^0_j = 1/2 R_0jil x^l + 1/3 R_0jil;m x^l x^m + (x^3), ω_μ^i_0 = δ^ijω_μ^0_j , ω_s^i_j = - c^-1ε^i_jkω^k - R^i_j0l x^l - 1/2 c^-2 ( a · x) R^i_j0l x^l - 1/2 c^-1 (ω× x)^k R^i_jkl x^l - 1/2R^i_j0l;m x^l x^m + (x^3), ω_k^i_j = - 1/2R^i_jkl x^l - 1/3R^i_jkl;m x^l x^m + (x^3). § STATIONARITY WITH RESPECT TO THE GENERALISED FERMI NORMAL COORDINATE TIME TRANSLATION FIELD In the following, we are going to briefly discuss the geometric interpretation of the possible condition that the metric be stationary with respect to the time coordinate τ of the generalised Fermi normal coordinates introduced in <ref>, i.e. that the timelike vector field ∂/∂τ be Killing. Note that, as explained in the main text, the post-Newtonian expansion in c^-1 of <ref> is still a meaningful approximation procedure if stationarity does not hold, formulating the Dirac theory as a deformation of its Newtonian limit. As a first step, stationarity with respect to ∂/∂τ of course means that the reference worldline γ has to be stationary. Away from the worldline, ∂/∂τ being Killing means that the metric components (<ref>) need be independent of coordinate time τ; i.e. we need the components a^i, ω^i, R_IJKL of the acceleration of γ, the angular velocity of the spatial basis (_i) and the curvature to be constant along the reference worldline γ: ȧ^i(τ) = 0, ω̇^i(τ) = 0, Ṙ_IJKL(τ) = 0 Note, however, that the components are taken with respect to the generalised Fermi normal coordinates; therefore, to see the true geometric meaning of these conditions, we need to rewrite them covariantly. By direct computation, for the covariant derivatives of acceleration and angular velocity we have b^i(τ) = (∇_γ̇(τ) a(τ))^i = ȧ^i(τ) + c Γ^i_sj(γ(τ)) a^j(τ) = ȧ^i(τ) + (ω(τ) × a(τ))^i , η^i(τ) = (∇_γ̇(τ)ω(τ))^i = ω̇^i(τ) + c Γ^i_sj(γ(τ)) ω^j(τ) = ω̇^i(τ). Thus, we see that stationarity of the metric with respect to the time translation vector field given by generalised Fermi normal coordinates implies that the angular velocity ω of the spatial reference vectors be covariantly constant along the reference worldline γ. 
However, in the case of ω being non-zero, in the generic case the worldline's acceleration a need not be covariantly constant—it has to itself rotate with angular velocity ω, such that its components with respect to the rotating basis are constant. This may sound somewhat artificial, but note that for example one could satisfy this condition with a covariantly constant acceleration a and spatial basis vectors (_i) that rotate around the axis given by a. Of course, the condition of constancy of the curvature components along γ can be rewritten in terms of covariant derivatives of the curvature tensor as well; however, this does not lead to any great insight, so we will refrain from doing so here. § THE DIRAC HAMILTONIAN UP TO (C^-1X^4) + (C^-2X^3) In the main text, for a consistent post-Newtonian expansion of the Dirac Hamiltonian leading to a resulting Pauli Hamiltonian known to order c^-2 and x^2, we need to know the Dirac Hamiltonian to order x^3 in those terms of order up to c^-1 in the c^-1-expansion. Going through the derivation of <cit.>, one can convince oneself that all x-dependent terms in the Christoffel symbols in generalised Fermi normal coordinates are of order at least c^-2 when expanding also in c^-1; and employing the methods from <cit.>, one can go to higher order and calculate the order-x^3 terms to order c^-2. The resulting Christoffel symbols read as follows, with the newly calculated terms marked in blue (note that we use the ordering of terms and the (c^-n x^m) notation as explained in the main text before (<ref>)): Γ^s_ss = c^-2(c^2/2 R_0l0m;0 x^l x^m + c^2/6 R_0l0m;n0 x^l x^m x^n) + c^-3 ( b · x + 2 a · (ω× x) + 2 c^2 R_0i0j (ω× x)^i x^j) + c^-4(c^2/3 a^i R_0lim x^l x^m ) - c^-5( ( b · x + 2 a · (ω× x)) ( a · x)) + (c^-2x^4) + (c^-3x^3), Γ^s_si = c^-2(a_i + c^2 R_0i0j x^j + c^2/6 (R_0l0m;i + 2 R_0i0l;m) x^l x^m + c^2/12 (R_0i0l;mn + R_0l0m;ni) x^l x^m x^n) + c^-3(- c^2/3 (ω× x)^k (R_0ilk + R_0kli) x^l ) + c^-4(-a_i ( a · x) - 2c^2/3 ( a · x) R_0i0j x^j - c^2/3 a_i R_0l0m x^l x^m ) + c^-6 a_i ( a · x)^2 + (c^-2x^4) + (c^-3x^3), Γ^s_ij = c^-2(c^2/3{2 R_0(ij)k + 1/4 (5 R_0(ij)k;l - R_0kl(i;j)) x^l } x^k + c^2/20 (3R_0(ij)l;mn - R_0lm(i;j)n) x^l x^m x^n) + c^-4(-2c^2/3 ( a · x) R_0(ij)l x^l) + (c^-2x^4) + (c^-3x^3), Γ^i_ss = c^-2(a^i + c^2 R_0^i_0j x^j + (η× x)^i + (ω× (ω× x))^i + c^2/2 (2R_0^i_0l;m - R_0l0m^;i) x^l x^m + c^2/6 (2R_0^i_0l;mn - R_0l0m;n^i) x^l x^m x^n) + c^3 (- 2 c (ω× x)^k R_0j^i_k x^j) + c^-4(( a · x) a^i + 2 c^2 ( a · x) R_0^i_0j x^j - c^2/3 a^j R^i_ljm x^l x^m - (ω× x)^i ( b · x + 2 a · (ω× x)) ) + (c^-2x^4) + (c^-3x^3), [0] Γ^i_sj = - c^-1ε^i_jkω^k + c^-2(-c^2 R_0k^i_j x^k + c^2 {1/6R_0j^i_l;m - 1/2R_0l^i_j;m - 1/6R_0l^i_m;j} x^l x^m + c^2/12 (R_0j^i_l;mn - 2 R_0l^i_j;mn - R_0l^i_m;nj) x^l x^m x^n) + c^-3(-(ω× x)^i a_j - c^2/3 (ω× x)^l (R_lk^i_j + R_l^i_kj) x^k - c^2 (ω× x)^i R_0j0k x^k ) + c^-4(- c^2/3 ( a · x) (R_0k^i_j + R_0^i_kj) x^k + c^2/3 a_j R_0l^i_m x^l x^m ) + c^-5 a_j ( a · x) (ω× x)^i + (c^-2x^4) + (c^-3x^3), Γ^i_jk = c^-2(- c^2/3{2 R^i_(jk)l + 1/4 (5 R^i_(jk)l;m - R^i_lm(j;k)) x^m } x^l - c^2/20 (3R^i_(jk)l;mn - R^i_lm(j;k)n) x^l x^m x^n) + c^-3 (2 c^2 (ω× x)^i R_0(jk)l x^l) + (c^-2x^4) + (c^-3x^3). Note that, according to (<ref>), we have treated the curvature tensor as being of order c^-2. 
Using the above Christoffel symbols, one can compute the parallely transported frame (<ref>) to higher order of expansion, which reads (_0)^s = 1 + c^-2(- a · x - c^2/2 R_0l0m x^l x^m - c^2/6 R_0l0m;n x^l x^m x^n - c^2/24 R_0k0l;mn x^k x^l x^m x^n) + c^-4(( a · x)^2 + 5c^2/6 ( a · x) R_0l0m x^l x^m ) - c^-6 ( a · x)^3 + (c^-2x^5) + (c^-3x^4), (_0)^i = - c^-1 (ω× x)^i + c^-2(c^2/2R_0l^i_m x^l x^m + c^2/6R_0l^i_m;n x^l x^m x^n + c^2/24R_0k^i_l;mn x^k x^l x^m x^n) + c^-3(( a · x) (ω× x)^i + c^2/2 (ω× x)^i R_0l0m x^l x^m ) + c^-4(-c^2/3 ( a · x) R_0l^i_m x^l x^m ) - c^-5 (ω× x)^i ( a · x)^2 + (c^-2x^5) + (c^-3x^4), (_i)^s = c^-2(-c^2/6 R_0lim x^l x^m - c^2/12 R_0lim;n x^l x^m x^n - c^2/40 R_0kil;mn x^k x^l x^m x^n) + c^-4(c^2/6 ( a · x) R_0lim x^l x^m ) + (c^-2x^5) + (c^-3x^4), (_i)^j = δ^j_i + c^-2(c^2/6R^j_lim x^l x^m + c^2/12R^j_lim;n x^l x^m x^n + c^2/40R^j_kil;mn x^k x^l x^m x^n) + c^-3(c^2/6 (ω× x)^j R_0lim x^l x^m ) + (c^-2x^5) + (c^-3x^4). For the dual frame, we also obtain that the x dependence starts at order c^-2: (^0)_s = 1 + c^-2( a · x + c^2/2 R_0l0m x^l x^m ) + (c^-2x^3) (^0)_i = c^-2(c^2/6 R_0lim x^l x^m ) + (c^-2x^3) (^i)_s = c^-1 (ω× x)^i + c^-2(-c^2/2R^i_l0m x^l x^m ) + (c^-2x^3) (^i)_j = δ^i_j + c^-2(-c^2/6R^i_ljm x^l x^m ) + (c^-2x^3) From this, we can compute the higher-order corrections to the connection form, the nontrivial components of which read ω_s^0_i = c^-2(a_i + c^2 R_0i0l x^l + c^2/2 R_0i0l;m x^l x^m + c^2/6 R_0i0l;mn x^l x^m x^n) + c^-3(c^2/2 (ω× x)^k R_0ikl x^l ) + c^-4(c^2/2 ( a · x) R_0i0l x^l ) + (c^-2x^4) + (c^-3x^3), ω_i^0_j = c^-2(c^2/2 R_0jil x^l + c^2/3 R_0jil;m x^l x^m + c^2/8 R_0jil;mn x^l x^m x^n) + (c^-2x^4) + (c^-3x^3), ω_s^i_j = - c^-1ε^i_jkω^k + c^-2(-c^2 R^i_j0l x^l - c^2/2R^i_j0l;m x^l x^m - c^2/6R^i_j0l;mn x^l x^m x^n) + c^-3(-c^2/2 (ω× x)^k R^i_jkl x^l ) + c^-4(-c^2/2 ( a · x) R^i_j0l x^l ) + (c^-2x^4) + (c^-3x^3), ω_k^i_j = c^-2(-c^2/2R^i_jkl x^l - c^2/3R^i_jkl;m x^l x^m - c^2/8R^i_jkl;mn x^l x^m x^n) + (c^-2x^4) + (c^-3x^3). 
The component of the inverse metric that is needed for the computation of the Dirac Hamiltonian takes the following form including the newly computed higher-order corrections: g^ss = -1 + c^-2(2 a · x + c^2 R_0l0m x^l x^m + c^2/3 R_0l0m;n x^l x^m x^n + c^2/12 R_0k0l;mn x^k x^l x^m x^n) + c^-4(-3 ( a · x)^2 - 8c^2/3 ( a · x) R_0l0m x^l x^m ) + c^-64( a · x)^3 + (c^-2x^5) + (c^-3x^4) Using all these ingredients, we can finally compute the Dirac Hamiltonian in our coordinates and frame to the necessary order (as for the original Dirac Hamiltonian (<ref>), the computation is rather tedious, but straightforward): H_Dirac = γ^0 {mc^2 + c^0 (m a · x + m c^2/2 R_0l0m x^l x^m + m c^2/6 R_0l0m;n x^l x^m x^n) + (c^0x^4) } - γ^i {c^0 (m c^2/6 R_0lim x^l x^m + m c^2/12 R_0lim;n x^l x^m x^n) + (c^0x^4) } +1{- q A_τ + (ω× x)^i D_i + c^-1(- c^2/2R_0l^i_m x^l x^m D_i + c^2/12 R_0l;m x^l x^m - c^2/6R_0l^i_m;n x^l x^m x^n D_i + c^2/24 R_0l;mn x^l x^m x^n - c^2/24R_0k^i_l;mn x^k x^l x^m x^n D_i) + c^-3( c^2/4 ( a · x) R_0l x^l - c^2/6 ( a · x) R_0l^i_m x^l x^m D_i ) } - γ^0 γ^j {c D_j + c^-1(/2 a_j + ( a · x) D_j + c^2/4 (R_0j0l - R_jl) x^l + c^2/2 R_0l0m x^l x^m D_j + c^2/6R^i_ljm x^l x^m D_i + c^2/12 (R_0j0l;m - 2 R_jl;m) x^l x^m + c^2/6 R_0l0m;n x^l x^m x^n D_j + c^2/12R^i_ljm;n x^l x^m x^n D_i + c^2/48 (R_0j0l;mn - 3 R_jl;mn) x^l x^m x^n + c^2/24 R_0k0l;mn x^k x^l x^m x^n D_j + c^2/40R^i_kjl;mn x^k x^l x^m x^n D_i) + c^-3( - c^2/4 ( a · x) R_jl x^l + c^2/6 ( a · x) R_0l0m x^l x^m D_j + c^2/6 ( a · x) R^i_ljm x^l x^m D_i ) } + γ^i γ^j {-/4ε_ijkω^k + c^-1( c^2/4 R_0ijl x^l + c^2/6 R_0lim x^l x^m D_j + c^2/12 R_0ijl;m x^l x^m + c^2/12 R_0lim;n x^l x^m x^n D_j + c^2/48 R_0ijl;mn x^l x^m x^n + c^2/40 R_0kil;mn x^k x^l x^m x^n D_j) + c^-3 c^2/6 ( a · x) R_0lim x^l x^m D_j } + (c^-1x^4) + (c^-2x^3) § DETAILS OF THE POST-NEWTONIAN EXPANSION The equations that arise from the Dirac equation when inserting the post-Newtonian ansatz (<ref>) are { D_τ - m a · x - mc^2/2 R_0l0m x^l x^m - m c^2/6 R_0l0m;n x^l x^m x^n - (ω× x)^i D_i + 1/2σ·ω + c^-1( c^2/2R_0l^i_m x^l x^m D_i - c^2/6 R_0l;m x^l x^m - c^2/12ε^ij_kσ^k R_0ijl;m x^l x^m + c^2/6R_0l^i_m;n x^l x^m x^n D_i - c^2/16 R_0l;mn x^l x^m x^n - c^2/48ε^ij_kσ^k R_0ijl;mn x^l x^m x^n + c^2/24R_0k^i_l;mn x^k x^l x^m x^n D_i + σ^i σ^j [ c^2/4 R_0ijl x^l + c^2/6 R_0lim x^l x^m D_j + c^2/12 R_0lim;n x^l x^m x^n D_j + c^2/40 R_0kil;mn x^k x^l x^m x^n D_j ] ) + c^-3( - c^2/4 ( a · x) R_0l x^l + c^2/6 ( a · x) R_0l^i_m x^l x^m D_i + c^2/6 ( a · x) σ^i σ^j R_0lim x^l x^m D_j ) + (c^0x^4) + (c^-2x^3) }ψ̃_A = -σ^j { c D_j + mc^2/6 R_0ljm x^l x^m + m c^2/12 R_0ljm;n x^l x^m x^n + c^-1(/2 a_j + ( a · x) D_j + c^2/4 (R_0j0l - R_jl) x^l + c^2/2 R_0l0m x^l x^m D_j + c^2/6R^i_ljm x^l x^m D_i + c^2/12 (R_0j0l;m - 2 R_jl;m) x^l x^m + c^2/6 R_0l0m;n x^l x^m x^n D_j + c^2/12R^i_ljm;n x^l x^m x^n D_i + c^2/48 (R_0j0l;mn - 3 R_jl;mn) x^l x^m x^n + c^2/24 R_0k0l;mn x^k x^l x^m x^n D_j + c^2/40R^i_kjl;mn x^k x^l x^m x^n D_i ) + c^-3( - c^2/4 ( a · x) R_jl x^l + c^2/6 ( a · x) R_0l0m x^l x^m D_j + c^2/6 ( a · x) R^i_ljm x^l x^m D_i ) + (c^0x^4) + (c^-2x^3) }ψ̃_B , [0] {2 mc^2 + D_τ + m a · x + mc^2/2 R_0l0m x^l x^m + m c^2/6 R_0l0m;n x^l x^m x^n - (ω× x)^i D_i + 1/2σ·ω + c^-1( c/2R_0l^i_m x^l x^m D_i - c^2/6 R_0l;m x^l x^m - c^2/12ε^ij_kσ^k R_0ijl;m x^l x^m + c^2/6R_0l^i_m;n x^l x^m x^n D_i - c^2/16 R_0l;mn x^l x^m x^n - c^2/48ε^ij_kσ^k R_0ijl;mn x^l x^m x^n + c^2/24R_0k^i_l;mn x^k x^l x^m x^n D_i + σ^i σ^j [ c^2/4 R_0ijl x^l + c^2/6 R_0lim x^l x^m D_j + c^2/12 R_0lim;n x^l x^m x^n D_j + c^2/40 
R_0kil;mn x^k x^l x^m x^n D_j ] ) + c^-3( - c^2/4 ( a · x) R_0l x^l + c^2/6 ( a · x) R_0l^i_m x^l x^m D_i + c^2/6 ( a · x) σ^i σ^j R_0lim x^l x^m D_j ) + (c^0x^4) + (c^-2x^3) }ψ̃_B = -σ^j { c D_j - mc^2/6 R_0ljm x^l x^m - m c^2/12 R_0ljm;n x^l x^m x^n + c^-1(/2 a_j + ( a · x) D_j + c^2/4 (R_0j0l - R_jl) x^l + c^2/2 R_0l0m x^l x^m D_j + c^2/6R^i_ljm x^l x^m D_i + c^2/12 (R_0j0l;m - 2 R_jl;m) x^l x^m + c^2/6 R_0l0m;n x^l x^m x^n D_j + c^2/12R^i_ljm;n x^l x^m x^n D_i + c^2/48 (R_0j0l;mn - 3 R_jl;mn) x^l x^m x^n + c^2/24 R_0k0l;mn x^k x^l x^m x^n D_j + c^2/40R^i_kjl;mn x^k x^l x^m x^n D_i ) + c^-3( - c^2/4 ( a · x) R_jl x^l + c^2/6 ( a · x) R_0l0m x^l x^m D_j + c^2/6 ( a · x) R^i_ljm x^l x^m D_i ) + (c^0x^4) + (c^-2x^3) }ψ̃_A , where D_τ = ∂_τ - q A_τ, D_i = ∂_i - q A_i denotes the electromagnetic covariant derivative. Note that we used the Pauli matrix identity σ^i σ^j = δ^ij1 + ε^ij_kσ^k for the simplifications σ^i σ^j ε_ijkω^k = 2 σ·ω and σ^i σ^j R_0ijl;m = -R_0l;m + ε^ij_kσ^k R_0ijl;m. At order c^-1, (<ref>) yields { D_τ - m a · x - mc^2/2 R_0l0m x^l x^m - (ω× x)^i D_i + 1/2σ·ω + (x^3) }ψ̃_A^(1) + { c^2/2R_0l^i_m x^l x^m D_i - c^2/6 R_0l;m x^l x^m - c^2/12ε^ij_kσ^k R_0ijl;m x^l x^m + c^2/6R_0l^i_m;n x^l x^m x^n D_i + c^2/4σ^i σ^j R_0ijl x^l + c^2/6σ^i σ^j R_0lim x^l x^m D_j + c^2/12σ^i σ^j R_0lim;n x^l x^m x^n D_j + (x^3) }ψ̃_A^(0) = - σ^j D_j ψ̃_B^(2) + {- mc^2/6σ^i R_0lim x^l x^m - m c^2/12σ^i R_0lim;n x^l x^m x^n + (x^4) }ψ̃_B^(1) . Using (<ref>) and (<ref>) to eliminate the ψ̃_B, this may be rewritten as { D_τ - m a · x - mc^2/2 R_0l0m x^l x^m - (ω× x)^i D_i + 1/2σ·ω + (x^3) }ψ̃_A^(1) + { c^2/2R_0l^i_m x^l x^m D_i - c^2/6 R_0l;m x^l x^m - c^2/12ε^ij_kσ^k R_0ijl;m x^l x^m + c^2/6R_0l^i_m;n x^l x^m x^n D_i + c^2/4σ^i σ^j R_0ijl x^l + c^2/6σ^i σ^j R_0lim x^l x^m D_j + c^2/12σ^i σ^j R_0lim;n x^l x^m x^n D_j + (x^3) }ψ̃_A^(0) = - 1/2m (σ· D)^2 ψ̃_A^(1) + {- c^2/12σ^j σ^i R_0lim D_j (x^l x^m ·) - c^2/24σ^j σ^i R_0lim;n D_j (x^l x^m x^n ·) + c^2/12σ^i σ^j R_0lim x^l x^m D_j + c^2/24σ^i σ^j R_0lim;n x^l x^m x^n D_j + (x^3)}ψ̃_A^(0) From this, we can read off the next-to-leading-order Hamiltonian H^(1) according to (<ref>), giving H^(1) = - c^2/2R_0l^i_m x^l x^m D_i + c^2/6 R_0l;m x^l x^m + c^2/12ε^ij_kσ^k R_0ijl;m x^l x^m - c^2/6R_0l^i_m;n x^l x^m x^n D_i - c^2/4σ^i σ^j R_0ijl x^l - c^2/12 R_0lim (σ^i σ^j x^l x^m D_j + σ^j σ^i D_j(x^l x^m ·)) - c^2/24 R_0lim;n (σ^i σ^j x^l x^m x^n D_j + σ^j σ^i D_j (x^l x^m x^n ·)) + (x^3) = c^2/3 R_0l x^l - c^2/4ε^ij_kσ^k R_0lij x^l - 2 c^2/3R_0l^j_m x^l x^m D_j + c^2/24(5 R_0l;m - R_0l^i_m;i) x^l x^m - c^2/8ε^ij_kσ^k R_0lij;m x^l x^m - c^2/4R_0l^j_m;n x^l x^m x^n D_j + (x^3), where we again used σ^i σ^j = δ^ij1 + ε^ij_kσ^k for simplifications, as well as the Bianchi identities. The difference of this result to the corresponding one in the master's thesis <cit.> on which the present article is based, arising from oversights in <cit.> regarding the consistent calculation of the order x^2 terms, consists solely in the appearance of the terms containing covariant derivatives of the curvature tensor. Note that H^(0) read off from (<ref>) is the same as the one calculated above in (<ref>). 
(<ref>) at order c^-1 gives the following: 2m ψ̃_B^(3) + { D_τ + m a · x + mc^2/2 R_0l0m x^l x^m + m c^2/6 R_0l0m;n x^l x^m x^n - (ω× x)^i D_i + 1/2σ·ω + (x^4) }ψ̃_B^(1) = - σ^j D_j ψ̃_A^(2) + {mc^2/6σ^i R_0lim x^l x^m + m c^2/12σ^i R_0lim;n x^l x^m x^n + (x^4) }ψ̃_A^(1) - σ^j {/2 a_j + ( a · x) D_j + c^2/4 (R_0j0l - R_jl) x^l + c^2/2 R_0l0m x^l x^m D_j + c^2/6R^i_ljm x^l x^m D_i + c^2/12 (R_0j0l;m - 2 R_jl;m) x^l x^m + c^2/6 R_0l0m;n x^l x^m x^n D_j + c^2/12R^i_ljm;n x^l x^m x^n D_i + c^2/48 (R_0j0l;mn - 3 R_jl;mn) x^l x^m x^n + c^2/24 R_0k0l;mn x^k x^l x^m x^n D_j + c^2/40R^i_kjl;mn x^k x^l x^m x^n D_i + (x^4) }ψ̃_A^(0) With (<ref>) to eliminate ψ̃_B^(1), this can be used to express ψ̃_B^(3) in terms of the ψ̃_A. Note however that this will involve the term - D_τ/2mψ̃_B^(1) = - D_τ/4m^2 (-σ· D) ψ̃_A^(0), such that we need to re-use the Pauli equation (<ref>) for ψ̃_A^(0) to fully eliminate the time derivative in the resulting expression. Explicitly, the term in question evaluates to - D_τ/4m^2 (-σ· D) ψ̃_A^(0) = - 1/4m^2{ [D_τ, σ· D] + (-σ· D) D_τ}ψ̃_A^(0) = - 1/4m^2{ q σ· E + (-σ· D) (H^(0) + q A_τ) }ψ̃_A^(0) , where E_i = ∂_i A_τ - ∂_τ A_i is the electric field (note that up to higher-order corrections, these are indeed the electric field components in an orthonormal basis). We finally need the next order of expansion in c^-1 in order to compute the Hamiltonian at order c^-2. (<ref>) at order c^-2 is { D_τ - m a · x - mc^2/2 R_0l0m x^l x^m - (ω× x)^i D_i + 1/2σ·ω + (x^3) }ψ̃_A^(2) + { c^2/2R_0l^i_m x^l x^m D_i - c^2/6 R_0l;m x^l x^m - c^2/12ε^ij_kσ^k R_0ijl;m x^l x^m + c^2/6R_0l^i_m;n x^l x^m x^n D_i + c^2/4σ^i σ^j R_0ijl x^l + c^2/6σ^i σ^j R_0lim x^l x^m D_j + c^2/12σ^i σ^j R_0lim;n x^l x^m x^n D_j + (x^3) }ψ̃_A^(1) + (x^3) ψ̃_A^(0) = - σ^j D_j ψ̃_B^(3) + {- mc^2/6σ^i R_0lim x^l x^m - m c^2/12σ^i R_0lim;n x^l x^m x^n + (x^4) }ψ̃_B^(2) - σ^j {/2 a_j + ( a · x) D_j + c^2/4 (R_0j0l - R_jl) x^l + c^2/2 R_0l0m x^l x^m D_j + c^2/6R^i_ljm x^l x^m D_i + c^2/12 (R_0j0l;m - 2 R_jl;m) x^l x^m + c^2/6 R_0l0m;n x^l x^m x^n D_j + c^2/12R^i_ljm;n x^l x^m x^n D_i + c^2/48 (R_0j0l;mn - 3 R_jl;mn) x^l x^m x^n + c^2/24 R_0k0l;mn x^k x^l x^m x^n D_j + c^2/40R^i_kjl;mn x^k x^l x^m x^n D_i + (x^4) }ψ̃_B^(1) . Now we use (<ref>), (<ref>), (<ref>) and (<ref>) to rewrite (<ref>) just in terms of ψ̃_A and read off the next-order Hamiltonian H^(2) according to (<ref>): H^(2) = - (-σ· D)/4m^2{ q σ· E + (-σ· D) (H^(0) + q A_τ) } - (-σ· D)/2m{m a · x + mc^2/2 R_0l0m x^l x^m + m c^2/6 R_0l0m;n x^l x^m x^n - (ω× x)^i D_i + 1/2σ·ω + (x^4) }(-σ· D)/2m - (-σ· D)/2mσ^j {/2 a_j + ( a · x) D_j + c^2/4 (R_0j0l - R_jl) x^l + c^2/2 R_0l0m x^l x^m D_j + c^2/6R^i_ljm x^l x^m D_i + c^2/12 (R_0j0l;m - 2 R_jl;m) x^l x^m + c^2/6 R_0l0m;n x^l x^m x^n D_j + c^2/12R^i_ljm;n x^l x^m x^n D_i + c^2/48 (R_0j0l;mn - 3 R_jl;mn) x^l x^m x^n + c^2/24 R_0k0l;mn x^k x^l x^m x^n D_j + c^2/40R^i_kjl;mn x^k x^l x^m x^n D_i + (x^4) } - σ^j {/2 a_j + ( a · x) D_j + c^2/4 (R_0j0l - R_jl) x^l + c^2/2 R_0l0m x^l x^m D_j + c^2/6R^i_ljm x^l x^m D_i + c^2/12 (R_0j0l;m - 2 R_jl;m) x^l x^m + c^2/6 R_0l0m;n x^l x^m x^n D_j + c^2/12R^i_ljm;n x^l x^m x^n D_i + c^2/48 (R_0j0l;mn - 3 R_jl;mn) x^l x^m x^n + c^2/24 R_0k0l;mn x^k x^l x^m x^n D_j + c^2/40R^i_kjl;mn x^k x^l x^m x^n D_i + (x^4) }(-σ· D)/2m Note that in the expression - mc^2/6σ^i R_0lim x^l x^m ψ̃_B^(2) = - c^2/12σ^i R_0lim x^l x^m {-σ· D ψ̃_A^(1) + (x^2) ψ̃_A^(0)} appearing in (<ref>), the second term is off our order of approximation, so we neglected it when reading off H^(2). 
Explicitly evaluating the above expression, we obtain the following order c^-2 Hamiltonian: H^(2) = -1/4m a · D - /4m (σ× a) · D - 1/2m ( a · x) (σ· D)^2 - c^2/4m R_0l0m x^l x^m (σ· D)^2 - c^2/8m R_0l0m;n x^l x^m x^n (σ· D)^2 - c^2/24m R_0k0l;mn x^k x^l x^m x^n (σ· D)^2 - 1/8m^3 (σ· D)^4 + c^2/8m R + c^2/4m R_00 + c^2/12m (4 R^j_l + R_0^j_0l) x^l D_j + c^2/8mσ^k (- 2 ε^ij_k R_0l0i + ε^im_kR^j_lim) x^l D_j - c^2/6mR^i_l^j_m x^l x^m D_i D_j + c^2/16m (R_;l + 2 R^i_l;i) x^l + c^2/24mε^ij_kσ^k (R_0i0l;j - 2 R_il;j) x^l + c^2/24m(5 R^j_l;m - 3 R_0^j_0l;m - R_0l0m^;j - R^j_l^i_m;i - ε^ij_kσ^k (2R_0i0l;m + R_0l0m;i) + 2ε^in_kσ^k R^j_lin;m) x^l x^m D_j - c^2/12mR^i_l^j_m;n x^l x^m x^n D_i D_j + c^2/48m(R_;lm + 4 R^i_l;im + ε^ij_kσ^k (R_0i0l;jm - 3 R_il;jm)) x^l x^m + c^2/120m(9 R^j_l;mn - 6 R_0^j_0l;mn - 5 R_0l0m^;j_n - 3 R^j_l^i_m;in) x^l x^m x^n D_j + c^2/96mσ^k ( -4ε^ij_k (R_0i0l;mn + R_0l0m;ni) + 3 ε^ir_kR^j_lir;mn) x^l x^m x^n D_j - c^2/40mR^i_k^j_l;mn x^k x^l x^mx^n D_i D_j - q/4m^2σ^i σ^j D_i E_j - q/12m c^2 (R_lm + R_0l0m) x^l x^m σ· B + q/12mσ^j c^2 R_iljm x^l x^m B^i + q/4m^2σ· (ω× B) + q/2m^2ω· B + q/4m^2 (ω_j x^i - ω^i x_j) D_i B^j + q/4m^2 (σ· (ω× x)) B · D - q/4m^2σ^j (ω× x) · D B_j + (x^3) This is the final information that we need in order to calculate the Pauli Hamiltonian up to and including the order of c^-2 (<ref>). Note that we have used the identity σ^i σ^j = δ^ij1 + ε^ij_kσ^k multiple times for simplifications, as well as the Bianchi identities and [D_i, D_j] = - q (∂_i A_j - ∂_j A_i) = - q ε_ijk B^k, where B^i = ε^ijk∂_j A_k is the magnetic field. We also used that covariant derivatives commute up to curvature terms, which are of higher order in c^-1. Note that due to a calculational oversight, the terms explicitly containing the magnetic field were missing in the master's thesis <cit.> on which the present article is based. In <cit.>, some oversights were also made regarding the consistency of the calculation of the terms of order x^2. However, the only differences of (<ref>) to the corresponding result in <cit.> that arise from these miscalculations of order x^2 terms are the appearance of all terms which contain covariant derivatives of the curvature tensor and the absence of the term - c^2/4 (ω× x)^k (R_kl + R_0k0l) x^l from <cit.>.
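Since the Pauli-matrix algebra is invoked repeatedly in these simplifications, a short numerical companion check of the identity (written with the imaginary unit explicit), σ^i σ^j = δ^ij 1 + i ε^ij_k σ^k, may be useful; it is an illustration only and uses nothing beyond standard numpy.

```python
# Numerical check (illustration only) of the Pauli-matrix identity used above,
# with the imaginary unit written explicitly: sigma^i sigma^j = delta^{ij} 1 + i eps^{ijk} sigma^k.
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
eye2 = np.eye(2, dtype=complex)

def eps(i, j, k):                      # Levi-Civita symbol for indices 0, 1, 2
    return (i - j) * (j - k) * (k - i) / 2

for i in range(3):
    for j in range(3):
        rhs = (i == j) * eye2 + 1j * sum(eps(i, j, k) * sigma[k] for k in range(3))
        assert np.allclose(sigma[i] @ sigma[j], rhs)
print("identity verified for all index pairs")
```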
http://arxiv.org/abs/2307.04088v1
20230709034448
Cracking the Puzzle of CO2 Formation on Interstellar Ices. Quantum Chemical and Kinetic Study of the CO + OH -> CO2 + H Reaction
[ "Germán Molpeceres", "Joan Enrique-Romero", "Yuri Aikawa" ]
astro-ph.GA
[ "astro-ph.GA" ]
Quantum Chemical and Kinetic Study of the CO + OH -> CO2 + H Reaction.
Department of Astronomy, Graduate School of Science, The University of Tokyo, Tokyo 113 0033, Japan [email protected]
Leiden Institute of Chemistry, Gorlaeus Laboratories, Leiden University, PO Box 9502, 2300 RA Leiden, The Netherlands [email protected]
CO2 is one of the dominant components of the interstellar ice. Recent observations show that CO2 exists more abundantly in polar (H2O-dominated) ice than in apolar (H2O-poor) ice. CO2 ice formation is primarily attributed to the reaction between CO and OH, which has a barrier. We investigate the title reaction in H2O ice and CO ice to quantify the efficiency of the reaction in polar and apolar ice. Highly accurate quantum chemical calculations were employed to analyze the stationary points of the potential energy surfaces of the title reaction in the gas phase and on H2O and CO clusters. Microcanonical transition state theory was used as a diagnostic tool for the efficiency of the reaction under ISM conditions. We simulate the kinetics of ice chemistry, considering different scenarios involving non-thermal processes and energy dissipation. The CO + OH reaction proceeds through the remarkably stable intermediate HOCO radical. On the H2O cluster, the formation of this intermediate is efficient, but the subsequent reaction leading to CO2 formation is not. Conversely, HOCO formation on the CO cluster is inefficient without external energy input. Thus, CO2 ice cannot be formed by the title reaction alone on either the H2O or the CO cluster. In polar ice, CO2 ice formation is possible via CO + OH -> HOCO, followed by HOCO + H -> CO2 + H2, as demonstrated by abundant experimental literature. In apolar ice, CO2 formation is less efficient because HOCO formation requires external energy. Our finding is consistent with the JWST observations. Further experimental work is encouraged using low-temperature OH radicals.
Cracking the Puzzle of CO2 Formation on Interstellar Ices
G. Molpeceres 1, J. Enrique-Romero 2, Y. Aikawa 1
Received August 12, 2023; accepted August 12, 2023
§ INTRODUCTION In the cold molecular clouds of the interstellar medium (ISM), a significant fraction of the molecules is contained in the solid phase in the form of ice. While most of the molecules present in the ISM have been detected in the gas phase using radio telescopes through their rotational transitions, the direct observation of ices requires studying their vibrational transitions, which are commonly affected by telluric contamination. In this context, space telescopes, such as Spitzer or, more recently, JWST, are essential. Ice observations <cit.> reveal the presence of several components such as H2O, CO, CH3OH, and the object of this study, CO2. The abundance of these species, as well as their speciation in the ice or their presence in specific regions of the ISM, can only be explained by considering their formation routes and the chemical conditions necessary for their appearance. The different components of interstellar ice may be formed in situ on the surface of refractory material. Such is the case of H2O, which is formed from the hydrogenation of atomic oxygen <cit.>, or the case of CH3OH, which is formed from the hydrogenation of CO <cit.>. Other significant components, like CO, are primarily synthesized in the gas and accrete onto the grain under extremely cold and dense conditions.
Interstellar carbon dioxide, CO2, is thought to form via reactions on the surface (see, e.g., <cit.>). The postulated reactions contributing to CO2 formation are:
CO + OH -> CO2 + H
HCO + O -> CO2 + H
CO + O -> CO2
Of these three reactions, Reaction <ref> has an energy barrier when atomic oxygen is in its ground state, (^3P)O <cit.>. Reaction <ref> is barrierless, and Reaction <ref>, the reaction whose study we tackle in this paper, is assumed to have a minimal activation energy (∼ 100 K, <cit.>). The assumption of a tiny activation energy for the CO + OH -> CO2 + H reaction is supported by a plethora of surface chemistry experiments <cit.>. These experiments vary in several factors, including the formation route of the OH radical, either by hydrogenation of O2 <cit.>, dissociation of H2O molecules before deposition on the ice <cit.>, or direct photodissociation of H2O ice molecules <cit.>. Other variations between experiments include the substrate under consideration, either amorphous silicates <cit.>, CO <cit.>, matrix isolation <cit.>, or H2O <cit.>. On the modelling side, <cit.> built on the experimental knowledge and coarse-grained it into a combination of a direct formation route CO + OH -> CO2 + H operating at T≥12 K, coinciding with the onset of CO diffusion on H2O, and an indirect three-body route on CO ices that relies on the formation of a kinetically excited OH radical, O + H -> OH^*, that subsequently partakes in the CO + OH^* reaction. The latter route on CO ices allows one to explain the CO2 bands in non-polar media observed in infrared observations of ices <cit.>. In summary, there is ample evidence that Reaction <ref> is efficient on dust grains. However, the same reaction in the gas phase is relatively slow, with rate constants as low as ∼ 2x10^-13 cm^3 molecule^-1 s^-1 at 300 K <cit.>. The title reaction in the gas phase has also been a source of extensive theoretical attention. It has been simulated using both semi-classical and quantum dynamics on highly accurate potential energy surfaces (PES) <cit.>. It was also studied in the presence of other CO2 molecules <cit.>. The theoretical works find rate constants even lower than the values reported in <cit.>. The different reactivity on surfaces and in the gas phase is puzzling and counterintuitive. In both phases, the reaction is acknowledged to proceed through the highly stable HOCO radical. The evolution from this radical is the primary source of uncertainty because of the high activation energies to form the bimolecular CO2 + H products. In the gas, where a third body to stabilize HOCO is unavailable, the reaction is more likely to occur owing to the energy redistribution into the few vibrational degrees of freedom, ultimately leading to an irreversible reaction. On the surface, the ice molecules dissipate a significant fraction of this energy, ideally leading to the thermalization of HOCO, and hence slowing or impeding the formation of CO2. This was proved by <cit.>, initiating the conundrum we tackle in this work, one that has also been debated from different perspectives <cit.>. If the reaction is slow in the gas, it should not proceed on the ice, where little energy is left for the reaction after dissipation into the ice. Hence, how is the mismatch between gas- and solid-phase experiments possible? In this article, we aim to shed light on this particular issue.
The two main possibilities to explain the disagreement include, in the first place, the operation of external energy input, either chemical from the O2 + H or O + H reactions required to form the OH radical, or the excess energy used to photodissociate H2O. Secondly, free H atoms from the experiment may promote H abstraction reactions, HOCO + H -> CO2 + H2. While these two possibilities are often assumed when interpreting the experimental results, it is fundamental to distinguish which is dominant, if any, to establish under which conditions the laboratory measurements apply to the ISM. Determining the factors contributing to the reaction yield in the experiments is complicated because the detection techniques are suited for identifying only the final products. Quantum chemical calculations are instrumental and provide an atomistic perspective of the different elementary processes relevant to the reaction. In this work, we simulate the title reaction on two different model ices, H2O and CO, and perform kinetic simulations using a microcanonical formalism to determine the importance of non-thermal effects in the reaction, including dissipation over different numbers of molecules, and complete the picture left by the different experimental studies. The paper is structured as follows. In <Ref>, we describe the employed computational methodology. In <Ref> we present the structural models for the ices (<Ref>), the PES for the reactions in each of the surfaces (<Ref> and <Ref>) and the associated kinetic analysis (<Ref>). <Ref> is dedicated to interpreting our results from an astrophysical point of view, contextualising the preceding experiments. We finally summarize our main findings in <Ref>. § METHODOLOGY §.§ Quantum chemical calculations The stationary points in the PES were characterized using density functional theory (DFT) calculations on model clusters mimicking H2O and CO ices. Because this work aims to determine the impact of energy redistribution in the formation of CO2 on ice, we need to use sufficiently large structural models to allow for (ergodic) energy equipartition. In a preceding calculation, <cit.> used a cluster containing 33 H2O water molecules and discussed the suitability of a model of this size, indicating that energy dissipation should be well described with a model of this size. This was later confirmed with dedicated studies using ab-initio molecular dynamics simulations <cit.>. Therefore, in this study, we use the same 33 H2O cluster to simulate the H2O ice <cit.>, and we constructed a 33 CO cluster to simulate the CO ice. To construct such a cluster, we used Packmol <cit.> in a 8 Å radius sphere, ensuring that every molecule is at a minimum initial distance of 3 Å from each other. This initial cluster is later refined at the level of the theory described below. The geometries of the initial clusters were optimized at the MN15-D3BJ/6-31+G(d,p) level of theory <cit.>, with parameters for the D3BJ dispersion correction taken from <cit.>. The DFT and optimizations utilize the Gaussian16 (rev.C.01) suite of programs <cit.>. We later place the CO and OH admolecules on the clusters sequentially, first occupying a binding site for the CO molecule and later for OH. Once the two admolecules are located on the clusters, we followed the gas-phase reaction mechanism presented in <cit.> for both clusters, except for an alternative exit path on CO ice (<Ref>). Additional differences between the gas-phase and surface-like profiles are highlighted in <Ref>. 
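Returning briefly to the cluster-construction step above, the packing constraint can be pictured with a minimal stand-in sketch (illustration only; the actual starting geometry was produced with Packmol and subsequently relaxed at the quoted DFT level): 33 molecular centres are drawn inside an 8 Å sphere and accepted only if they stay at least 3 Å away from every previously placed centre.

```python
# Toy rejection-sampling stand-in (not Packmol) for the initial cluster guess:
# 33 centres inside a sphere of radius 8 A with a minimum separation of 3 A.
import numpy as np

rng = np.random.default_rng(1)
radius, min_sep, n_mol = 8.0, 3.0, 33

centres = []
while len(centres) < n_mol:
    trial = rng.uniform(-radius, radius, size=3)
    if np.linalg.norm(trial) > radius:
        continue                                        # outside the sphere
    if any(np.linalg.norm(trial - c) < min_sep for c in centres):
        continue                                        # violates the 3 A criterion
    centres.append(trial)

dists = [np.linalg.norm(a - b) for i, a in enumerate(centres) for b in centres[i + 1:]]
print(f"{len(centres)} centres placed, minimum pair distance = {min(dists):.2f} A")
```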
After locating every stationary point, we confirmed them as either true minima or first-order saddle points, i.e., transition states (TS), in the PES by computing the molecular Hessian of the system. The electronic energies of the stationary points on the PES were further refined using the domain-based local pair-natural orbital coupled cluster singles and doubles with a perturbative treatment of triple excitations, DLPNO-CCSD(T) <cit.> using a two-point complete basis set extrapolation (CBS) to the basis-set limit using the cc-pVDZ and cc-pVTZ basis sets <cit.>. The internal options for the PNO localization scheme were set to normal, and resolution of the identity (RI) techniques were used to evaluate exchange and Coulomb integrals (RIJK) using a cc-PVTZ/JK auxiliary basis set. We apply the frozen-core approximation in the correlated calculations. The ORCA (v.5.0.4) code was used for the DLPNO-CCSD(T)/CBS calculations <cit.>. In addition to cluster calculations, we also carried out gas-phase calculations at the same level of theory for comparative purposes, which are indicated throughout the paper in square brackets. Finally, we assessed the quality of our theoretical method of choice, comparing our gas phase results with the ones of <cit.>, finding excellent agreement for all the relevant parts of the PES. These results are presented in the <Ref>. It is worth noting here that our theoretical method does not predict the correct energetics for the high energy intermediate HCO2. This intermediate is not relevant to the kinetics of the system because its formation requires surmounting an emerged barrier of ∼8-9 kcal mol^-1 from the bimolecular OH + CO asymptote (38-39 kcal mol^-1 from the HOCO potential well) <cit.>. Moreover, we could not find this intermediate in the simulations on the H2O cluster. We, therefore, skip the search for this intermediate in all cluster calculations. Nonetheless, we discuss the origin of this disagreement in <Ref>. §.§ Kinetic Analysis We employed the microcanonical flavour of the transition state theory, called Rice–Ramsperger–Kassel–Marcus (RRKM) to compute the energy-dependent rate constants k(E) for the transitions between reaction wells, given by: k(E) = N^(E - E_0)hρ(E), where h is the Planck's constant, N^(E - E_0) is the sum of states of the transition state evaluated at energy E to the energy of the transition state, E_0, and ρ(E) is the density of states of the reactant at energy E. In addition, the sum of states contains tunnelling corrections, for which the non-symmetric Eckart potential model was employed <cit.>. We did not include rotational symmetry factors in our calculations due to the symmetry breaking induced by the amorphous surface. The rigid-rotor harmonic oscillator model is used throughout the kinetic calculations. The application of RRKM to interstellar reactions is discussed in <cit.> and used or implied in several other works <cit.> As it will be explained later on (<Ref>), the title reaction occurs strictly non-thermally at 10 K. Hence we make our analysis based on k(E) for the entrance CO + OH -> t-HOCO/c-HOCO and exit channels: c-HOCO -> CO2 + H (and alternatively c-HOCO/t-HOCO + CO -> CO2 + HCO, <Ref>). We provide k(E) considering several energy dissipation scenarios. Each of them has a different number of molecules, n, over which instantaneous energy dissipation is allowed. We studied n=16, 10, 5, and 0 (CO/H2O) molecules. In the latter (n=0), energy redistribution occurs only within the CO + OH system. 
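As a concrete illustration of the working equation, k(E) = N^‡(E - E_0) / [h ρ(E)], the following toy evaluation uses invented harmonic frequencies, a direct Beyer-Swinehart count of states, and no tunnelling correction (the actual calculations in this work use the Eckart model and the MESS code described here); it is meant only to show how the sum and density of states enter the microcanonical rate constant.

```python
# Toy RRKM evaluation (invented frequencies, harmonic counts, no tunnelling):
# k(E) = N_TS(E - E0) / (h * rho(E)), with state counts from a Beyer-Swinehart loop.
import numpy as np

H = 6.62607015e-34            # Planck constant, J s
CM1_TO_J = 1.98644586e-23     # energy of 1 cm^-1 in J
GRID = 10.0                   # energy bin width, cm^-1
NBINS = 4001                  # covers 0 - 40000 cm^-1

def state_counts(freqs_cm1):
    """Harmonic number of states per energy bin (Beyer-Swinehart direct count)."""
    counts = np.zeros(NBINS)
    counts[0] = 1.0
    for nu in freqs_cm1:
        step = max(1, int(round(nu / GRID)))
        for i in range(step, NBINS):
            counts[i] += counts[i - step]
    return counts

freqs_well = [80, 150, 300, 600, 1200, 1800, 3600]   # reactant well (made up)
freqs_ts   = [70, 140, 500, 1100, 1700, 3500]        # TS, reaction coordinate removed
E0 = 8000.0                                          # barrier above the well, cm^-1

rho  = state_counts(freqs_well) / (GRID * CM1_TO_J)  # density of states, per J
N_ts = np.cumsum(state_counts(freqs_ts))             # sum of states of the TS

E = 15000.0                                          # total energy above the well, cm^-1
k_E = N_ts[int((E - E0) / GRID)] / (H * rho[int(E / GRID)])
print(f"k(E = {E:.0f} cm-1) ~ {k_E:.1e} s^-1")
```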
We carried out this study by projecting out the molecular Hessian matrix elements for the m molecules (where m = 33 - n) farther from the t-HOCO minima, as the global minima of our study. The microcanonical rate constants obtained in this study are calculated with the MESS code <cit.>. We note that the sizes of the clusters (see Figure <ref>) and the highest number of dissipating water molecules are sufficient according to previous studies, e.g., <cit.>. Although no specific studies have addressed this issue for CO ice, we have made a reasonable assumption that the same holds true. It is worth highlighting again that we considered different dissipating CO ice molecules. § RESULTS §.§ Cluster model The fully optimized H2O and CO clusters mimicking ice surfaces are presented in <Ref>. While the CO ice model has a more spherical and compact shape with dimensions 10×12×13 Å, the water one is slightly more elongated, 15×9×10.5 Å. The latter hosts a cavity, where the CO + OH -> CO2 + H reaction is simulated. On the contrary, the more compact CO cluster does not have any clear deeper binding site. Hence the reaction site was randomly chosen. The binding energies of the reactants and reaction intermediates on the surfaces are presented in <Ref>. These were calculated as the energy difference between the complexes containing the surface and the admolecule and the sum of the isolated fragments, including ZPVE. In the H2O cluster cavity, we find a binding energy for CO of 4.64 kcal mol^-1, higher than the values reported by <cit.> (≤3.71 kcal mol^-1). This indicates that our cavity is a really deep binding site with a maximized number of neighbour water molecules. For the OH radical, on the contrary, the cavity binding site yields lower than average binding energies (6.45 kcal mol^-1) than other reported values, e.g., 10.33 kcal mol^-1 <cit.>, and 10.6 kcal mol^-1 <cit.>. The observed differences arise from the specific structure of our cavity, where the number of dangling H-bonds is saturated, and the binding mode of OH, whose acceptor/donnor H-bonds about 0.1 Å shorter than in the cavity case reported by <cit.>. On the CO cluster, the CO/CO binding energy corresponds to the lower bound of the values presented in <cit.> while the values of OH/CO are unreported. We note that the dual-level error introduced by our calculations is relevant for determining binding energies for CO/CO due to the mismatch of geometries arising from the weak CO-CO interaction in the ice <cit.>. In the subsequent reactivity studies, the relative magnitude of this error is diminished because energy differences between reaction steps are much higher than the CO-CO interaction energy. For the reactivity studies, we keep the CO binding site determined above, while the OH radical is placed on a different binding site. We justify this choice based on two arguments. First, when both adsorbates are thermalized, the higher interstellar abundance of CO makes it more likely to be located in deep binding sites, such as the cavity formed in the H2O cluster. Second, in <Ref>, we investigate the effect of a translationally excited OH radical colliding with a pre-adsorbed CO. §.§ Potential energy surface construction All the energy diagrams have been referenced from the asymptotes, i.e., from the sum of energies of the surface, reacting CO and the reacting OH radical. We will refer to this as the bimolecular system, and for the sake of simplicity it will be denoted as CO + OH, regardless of the ice surface. 
This was done for the sake of clarity, as it is much clearer what the influence of the substrate in stabilizing the reactants is, as well as its catalytic effect on the barriers. §.§.§ H2O ice We include two pre-reactant complexes following the literature <cit.>. First, a pre-reactant complex with large dihedral ∠HOCO angles, PRC, which leads to the formation of the t-HOCO intermediate. Second, a near 0° dihedral angle pre-reactant complex (PRC'), that forms the c-HOCO intermediate (which was not found on CO ice, as discussed in <Ref>). The transition states that connect the PRCs with the reaction wells are named TS1 and TS1', respectively, and the transition state connecting these two wells is TS2. Finally, the transition state leading to CO2 + H from c-HOCO is named TS4. The reason for not naming it TS3 is that the TS3 label (specifically TS3') is reserved for the exit transition state from t-HOCO, a stationary point we do not find on water ice. The stationary points on the reaction profile are gathered in <Ref>. The reaction profile has, for the most part, the same profile as in the gas phase, with two notable exceptions. The first concerns the absence of the HCO2 intermediate, as we already discussed in <Ref>. The second is the inversion in energy between PRC and PRC'. This inversion appears following the formation of a HO–H2O hydrogen bond that locks the PRC' geometry in the binding site contiguous to the CO binding site. The snapshots of the stationary points are collated in <Ref>, where this effect can be visualized. The higher stabilization of PRC' also results in higher activation energy to c-HOCO through TS1'. The binding energies of t-HOCO and c-HOCO on the cavity are 15.51 kcal mol^-1 (7805 K) and 12.30 kcal mol^-1 (6190 K), respectively. These binding energies are significantly higher than the ones for CO and OH presented in <Ref>, and are closer to the average values reported for the related molecule, HC(O)OH, formic acid (e.g., ∼ 12.30 kcal mol^-1 <cit.>, 10.7–21.0 kcal mol^-1 <cit.>). The t-HOCO and c-HOCO wells are significantly stabilized on the surface, evinced by the 13–16 kcal mol^-1 difference in energy with the same intermediates in the gas phase. As a consequence, the activation energy of TS4 is higher on water. When breaking the O–H bond in c-HOCO, the energy corresponding to the OH moiety must be overcome, i.e. a significant fraction of the binding energy. The binding energy of the CO2 + H system on H2O was found to be 7.30 kcal mol^-1 (3673 K). Finally, from <Ref>, it is evident that the reaction, if viable, must proceed through quantum tunnelling. The c-HOCO -> CO2 + H barrier is 32.1 kcal mol^-1, which is extremely high for ISM conditions. However, contrary to what happens in the gas phase, TS4 is submerged with respect to the reactant asymptote, thanks to the stabilization promoted by the H2O surface. The product of the reaction, CO2 + H, is higher in energy than both radicals, and the reaction is significantly less exothermic because of the break of hydrogen bonds. Nonetheless, once CO2 + H is formed, H is susceptible of diffusing or evaporating, thus concluding the reaction. §.§.§ CO ice The reaction profile on CO ice is shown in Figure <ref> and the stationary points in Figure <ref>. With respect to the gas-phase process, as previously discussed, the profile lacks the HCO2 intermediate. 
When comparing with the results for the water cluster presented above, the main difference is the lack of PRC', so that the reaction must go through the t-HOCO intermediate to reach CO2. While PRC' exists on the CO ice, we found it to be a first-order saddle point. Unlike in water, where PRC' is stabilized thanks to the interaction of the OH radical with a dangling bond of H2O, on CO, this interaction is unavailable, and the weak OH-CO interaction promotes the rotation to PRC. There is still the possibility that the lack of PRC' is an effect of the random selection of the binding site, however a full binding site sampling is beyond our computational resources. To reach the t-HOCO intermediate, however, the TS1 must be crossed at the same energy level as the asymptote. Hence, significant energy dissipation would suppress the whole reaction unless enough energy input is provided via non-thermal mechanisms. Additionally, the much reduced inter-molecular interaction of the admolecules with the surface due to the lack of electrostatic and H-bonding interactions of CO ices affects the energetics of the stationary points. The most prominent examples are the lower stabilisation of intermediates and the barrier in TS4, which sits above the energy of the asymptote. In general, the energetics on CO ice is closer to the gas phase case, with small differences, e.g., the isomerisation barrier for the t-HOCO -> cis-HOCO reaction on CO is about 1 kcal mol^-1 lower (and about 2 kcal mol^-1 lower for the reverse reaction). The fact that there are more CO molecules surrounding the reaction site opens a new possibility not available on water ice or the gas phase. It involves the reactivity of the t-HOCO and cis-HOCO intermediates with a neighbouring CO, leading to CO2 + HCO, see Figure <ref>. Interestingly, these reactions possess lower activation energy barriers than TS4, see Figure <ref>, and in the case of the cis-HOCO + CO -> CO2 + HCO reaction, the barrier sits below the asymptote. §.§ Microcanonical rate constants We estimated the microcanonical rate constants for the PES entrance and exit channels described in the previous sections. The entrance channels start with the pre-reactant complexes and finish with t/c-HOCO, and the exit channels start with t/c-HOCO and finish with CO2 + H, and additionally CO2 + HCO for CO. These channels present the relevant rate constants for the kinetics of the reaction because the t-HOCO -> c-HOCO is much faster, even when energy redistribution is at play. Notice that due to the barriers (TS1 and TS1'), if the stationary points of the PES were populated according to a thermal distribution, the formation of the HOCO intermediates would be slow, and the formation of products would likely not happen at all. To simulate non-thermal reactions, an initial amount of energy is given to the system; see below. Experiments of <cit.> show the formation of HOCO with an apparent small barrier or null barrier. We note that for the exit channel (c/t)-HOCO -> CO2 + H/HCO , the starting potential well is very deep, and thermalization is more likely <cit.>. Nevertheless, as we will show, under a microcanonical formalism, the formation of CO2 + H is found to be slow. Finally, different energy dissipation is allowed by changing the number of ice molecules considered in the microcanonical calculations, n. Our PESs indicate that adsorption energy (formation of PRC/PRC') is not completely dissipated but employed in forming HOCO. The energy reference is again the energy of the asymptotes. 
One could consider that this is not the best choice since the initial energy lies above the energy of the PRC/PRC' and it would actually mean that the initial state is actually higher in energy than a fully thermalized reactant set. However, it must be noted that (i) if a reference state is an upper bound of the real one, and even in this case the reaction is not plausible, then starting from a more stable reference will not change the qualitative picture, and (ii) in cases where an incomplete energy dissipation promoted by certain exothermic processes, e.g. diffusion into deeper binding sites and possible Eley-Rideal mechanisms [That may be of relevance for CO molecules given their abundance in ISM ices.] would actually involve higher initial energies than PRC/PRC'. This effect is irrelevant when the activation energy of a reaction is much higher than the exothermicity caused by the mentioned processes, but for CO + OH -> HOCO the activation energy of the reaction falls below the adsorption energy, and it is of small magnitude. The correct energy reference would lie somewhere in between that of the asymptote and the PRC/PRC'. The microcanonical rate constants for the entrance step are shown in <Ref> and <Ref> for H2O and CO ice. In this plot, we show the reaction rate constants as a function of the energy, where k(E=0) corresponds to the separated, no adsorption asymptote (CO + OH in <Ref> and <Ref>). Energies above zero indicate extra energy from non-thermal excitation mechanisms. In this work, to compare with experimental observations, we will consider the presence of extra energy from either (i) a prior O + H -> OH reaction (ΔU = 102.1 kcal mol^-1) or (ii) half the energy deposited by a single Ly-α photon, assuming equal energy partition into the products of the H2O -> OH + H, (ΔU = 118.7 kcal mol^-1). Notice that the amount of extra energy used to promote the title reaction through the non-thermal mechanisms is unknown. Hence, we represent fractions of that energy, 0.10, 0.25, 0.50, as vertical dashed lines in <Ref> and <Ref> to serve as a guide to evaluate how the rate constants would increase under these assumed scenarios. As we introduced in <Ref>, we evaluated the behaviour of the reaction assuming dissipation into a set of n molecules. The four different cases for n=0, 5, 10, 16 are illustrated in <Ref> and <Ref>. The rate constants for the entrance step on H2O ice are, for all n dissipating molecules, fast for the PRC -> t-HOCO step, indicating that external energy input is unnecessary for this reaction, as determined experimentally by <cit.> and computationally by <cit.>. However, for the alternative PRC' → c-HOCO reaction, we observe k(E=0)≤10^8 s^-1 for the models with 10, 16 H2O dissipating molecules. This means that if the timescale for thermalization is shorter than tens of nanoseconds, the adsorption energy alone is insufficient to overcome the entrance barrier. This constraint is lifted by considering extra energy. The reason for the difference between rate constants for the reactions starting from PRC and PRC' stems from the significantly higher activation energy in the latter case. For the CO model, we observe systematically lower values of k(0) than in water, owing to the lower stabilization of the PRC complex on CO than on H2O leading to higher energy barriers than in the best case for H2O. This, in turn, yields k(E=0)≤10^8 s^-1 for all of our models. 
Because k(E) is a very steep function around E=0, the reaction is viable with a small input of energy that can come from other reactions, e.g., O2 + H <cit.>. This finding reinforces the scenario presented in <cit.> for the three-body formation of CO2 on CO ice, as we will discuss in <Ref>. An important caveat for all of these rate constants is that we implicitly assumed an infinitely fast energy partition into n molecules, which may not be a good representation of this reaction on CO. At this research stage, we warn that extracting strong conclusions for a limit case like the one found for PRC -> t-HOCO on CO ice is difficult and more sophisticated approaches are necessary. We are currently working on a molecular dynamics study of this reaction to illuminate this issue. Similarly to the entrance rate constants, the exit c-HOCO -> CO2 + H rate constants on H2O ice and c/t-HOCO -> CO2 + H/HCO rate constants on CO ice are plotted in <Ref> and <Ref> for the different dissipation scenarios. It is important to recall that while the entrance channels are unaffected by quantum tunnelling, all the exit channels involve the migration of an H atom, turning quantum tunnelling into an important driver for the reaction, as already evinced by nuclear quantum dynamics calculations <cit.>. Still, even with the influence of quantum tunnelling, the reactions are, in all cases, significantly slower than in the entrance step. The importance of the energy dissipation scheme is major for these reactions. There is a clear gap in exit rate constant values between the (ideal) n=0 dissipation model and the 5, 10 and 16 molecule dissipation models that, in all the cases, yield rate constants k(E=0) ≤ 10^0 s^-1. We recall that these values must be compared against the thermalization timescale, i.e. if thermalization is faster, the reaction will not proceed. A rate constant of k(E=0) ≤ 10^0 s^-1 means reaction times of seconds, and we find it unlikely that thermalization would not happen on those timescales, precluding all the c/t-HOCO -> CO2 + H/HCO reactions in all the conditions and substrates considered in this work. We conclude then that, without the input of any external energy other than the adsorption energy of the reactants, the reaction can proceed neither microcanonically nor from thermalized HOCO. When including a degree of external energy from the mechanisms explained above (chemical and H2O photodissociation), the exit reaction is faster, as expected. However, only the n=0 dissipation model yields rate constants that are sufficiently high (≥ 10^8 s^-1) to compete with thermalization. The upper bound of the timescale for (almost) complete thermalization of HOCO is estimated to be similar to that of CO2 formed from the CO + (^1D)O -> CO2 reaction, that is, a few nanoseconds <cit.>. While the energy dissipation in RRKM is instantaneous, and an incomplete energy dissipation may increase the values of the rate constants, our assumption for the external energy input is also rather ideal. Thus, even in the presence of external energy input, we find it hard to justify the formation of CO2 and H/HCO from the title reaction. This suggests that the formation of CO2 relies on the subsequent reaction described as follows: t/c-HOCO + H -> CO2 + H2. Reaction <ref> involves two radicals, and even though an activation barrier may be present on ice <cit.>, quantum tunnelling should play a major role, as is the case for H-abstraction reactions <cit.>. Thus, reaction <ref> must be viable. 
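As a concrete illustration of the microcanonical formalism employed above, the following minimal Python sketch evaluates RRKM-style rate constants k(E) = N_TS(E - E0) / (h ρ(E)) from harmonic vibrational state counts obtained with the Beyer-Swinehart direct-count algorithm. It is not the MESS setup used in this work: the frequencies and the barrier are placeholder values, rotations and tunnelling are omitted, and the function names are ours.

import numpy as np

H_PLANCK = 9.537e-14  # Planck constant in kcal mol^-1 s (per mole)

def beyer_swinehart(freqs_cm, e_max, de=0.1):
    """Harmonic vibrational state counts on an energy grid (kcal mol^-1).
    Returns the density of states rho(E) and the sum of states N(E)."""
    kcal_per_cm = 1.0 / 349.75                 # 1 kcal mol^-1 = 349.75 cm^-1
    grid = np.zeros(int(e_max / de) + 1)
    grid[0] = 1.0                              # ground vibrational state
    for f in freqs_cm:                         # direct count (Beyer-Swinehart)
        step = int(round(f * kcal_per_cm / de))
        if step == 0:
            continue
        for i in range(step, len(grid)):
            grid[i] += grid[i - step]
    return grid / de, np.cumsum(grid)

def rrkm_k(E, freqs_well, freqs_ts, e0, de=0.1):
    """k(E) = N_TS(E - e0) / (h * rho_well(E)); all energies in kcal mol^-1."""
    if E < e0:
        return 0.0
    rho_well, _ = beyer_swinehart(freqs_well, E + de, de)
    _, n_ts = beyer_swinehart(freqs_ts, E - e0 + de, de)
    return n_ts[int((E - e0) / de)] / (H_PLANCK * rho_well[int(E / de)])

# Placeholder frequencies (cm^-1) standing in for a HOCO-like well and its exit TS;
# the reaction-coordinate mode is removed from the TS list.
well_freqs = [3500, 1850, 1250, 1050, 600, 500]
ts_freqs = [3400, 1800, 1200, 900, 550]

for E in (20.0, 30.0, 50.0, 80.0):             # energy above the asymptote
    print(f"E = {E:5.1f} kcal/mol -> k(E) = {rrkm_k(E, well_freqs, ts_freqs, e0=25.0):.2e} s^-1")

Evaluating such a sketch on an energy grid reproduces the qualitative behaviour discussed above, namely the steep increase of k(E) just above the effective barrier and the strong sensitivity of the rate constants to the number of modes among which the available energy is partitioned.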
The inclusion of reaction <ref> in the CO2 reaction network was already in place for the non-energetic formation of CO2, for example, in <cit.>. Still, this work shows that it also applies to the energetic formation of CO2. We put our results in a laboratory/simulation and astrophysical context in <Ref>. Finally, although it does not affect the outcome of the reactions studied in this work (e.g., the t/c-HOCO ( + CO) -> CO2 + H/HCO reactions remain non-viable under ISM conditions), it is interesting from a purely chemical perspective to comment on the effect observed for the two competing reactions c-HOCO -> CO2 + H and t/c-HOCO + CO -> CO2 + HCO. The competition between these two processes is energy dependent. At low values of E, e.g. k(E=0), the t/c-HOCO + CO -> CO2 + HCO route is favoured, whereas c-HOCO -> CO2 + H is the preferred exit channel at higher energies, between 10–120 kcal mol^-1, depending on the number of dissipating molecules. The dependence on the energy and the number of dissipating molecules clearly reveals that the dominance of the c-HOCO -> CO2 + H route at high energies is an entropic effect. For both routes, the count of states at the TS energy (the numerator of <Ref>) depends on the height of the barrier and the number of low-frequency vibrational modes. Because HCO, in contrast with H, has two molecular vibrations, H-C and C=O, at 2800 and 1900 cm^-1, the count of states will be smaller at high energies. Low-frequency vibrations overwhelm the purely kinetic effect arising from the lower barrier. § DISCUSSION §.§ The CO + OH -> CO2 + H reaction in the laboratory The experiments carried out on the CO + OH -> CO2 + H reaction were reviewed in <Ref>. For most of them, the biggest experimental conundrum is the generation of the OH radical, which is very unstable under laboratory conditions and needs to be generated in situ. The experimental methods for forming the OH radical in these experiments are, in most cases, different. However, all the possible formation pathways involve the co-deposition or co-generation of H atoms, e.g., formation with O2 + H, fragmentation of H2O in a microwave discharge, or H2O photodissociation. In general, it is impossible to experimentally discern whether the CO + OH reaction proceeds directly to CO2 + H or, in turn, stops at t-HOCO, which is converted to CO2 via reaction <ref>. A rigorous study of the reaction using molecular dynamics <cit.> showed that the probability of direct formation of CO2 on H2O ice is lower than 1%. It is important to remark that in <cit.>, the OH was generated with excess energy coming from photodissociation of H2O. Our results support the latter scenario and discard the direct reaction. Compared with our results, the small fraction observed for the direct formation of CO2 + H in <cit.> may come from the slower and more realistic non-ergodic energy dissipation present in the molecular dynamics study. On CO ice, the reaction proceeds similarly to that on H2O, both in our calculations and in the experiments of <cit.>, where HOCO is explicitly included as the intermediate for the reaction. <cit.> discuss the competition of the formic acid (HC(O)OH) formation reaction: HOCO + H -> HC(O)OH with Reaction <ref>. 
Our results complement these experiments as well, showing that in addition to what was already known, the formation of the HOCO complex has to surmount an activation energy of 2.2 kcal mol^-1 with a mere adsorption energy of 2.5 kcal mol ^-1, in contrast with H2O ice, where the higher stabilization of the PRC complex increases the energetic budget for the formation of HOCO. The consequence of this effect in the overall reaction scheme is that the formation of HOCO cannot be taken for granted on CO ice under a non-energetic regime. In <cit.>, such energy input is given by a preceding chemical reaction. The more impeded formation of the HOCO radical on CO is the main difference with H2O ice and is illustrated by the rate constants in <Ref> (Top panel) and <Ref>. This different reactivity on different substrates may explain the recent JWST observations of a higher degree of mixing of CO2 with H2O than with CO <cit.>. However, and as we indicated in section <Ref>, further studies are being undertaken to understand the precise behaviour of the CO + OH -> t-HOCO association step on CO ices. On the other hand, <cit.> used matrix isolation, electron paramagnetic resonance and FT-IR techniques, which made it possible to observe several radicals, among which HOCO, and CO2. HC(O)OH is also detected, although its formation seems to be due to HCO + OH rather than reaction <ref>. In this experiment, methanol molecules embedded in an Argon matrix are photolysed at 14 K. The resulting photo-products can relax as the matrix acts as a third body. Later the sample is warmed up to 35 K, and the Ar matrix is removed, allowing light species to diffuse. The peak of CO_2 production occurs in this last stage. According to our results and interpretation, if CO2 is formed via reaction <ref>, either there is some extra energy input, not all the energy from the photolysis step was completely dissipated, or H-abstraction reactions are in place. In the latter case, this can be triggered by other radicals rather than reaction <ref>, something we did not consider in this work, and that would require either the diffusion at warmer temperatures or the presence of a nearby radical species. In addition, an efficient H-abstraction radical-radical channel should be present, which will certainly depend on their relative orientation <cit.>. Notice that in this experiment, no ice surface is present, but rather the bare copper plate on top of which the matrix and reactant mixture is prepared. Finally, we would like to encourage more experiments on CO_2 formation starting from thermalized reactants, especially on CO surfaces. §.§ The CO + OH -> CO2 + H reaction in the ISM The comparison between the experiments and our calculations presented in the last section motivates us to contextualize our results in the expected conditions of the ISM. We concluded that the sole CO + OH reaction is insufficient for the formation of CO2 on ices and that Reaction <ref> is the most promising candidate for the follow-up reaction. Considering this, is it justified to consider a small activation energy for the OH + CO -> CO2 + H reaction in astrochemical models of molecular clouds and prestellar cores? In light of our simulations, we consider that there are at least four different cases. * High coverage of H2O ice and high abundance of H atoms. * High coverage of H2O ice and low abundance of H atoms. * High coverage of CO ice and high abundance of H atoms. * High coverage of CO ice and low abundance of H atoms. 
On H2O ice (Cases 1 and 2 above), the formation of the HOCO complex is facile and does not require any energy input, with a fast reaction occurring thanks to the adsorption energy (or a fraction of it) on water ice. Moreover, the dominance of H2O in the early stages of a molecular cloud's life, during the translucent cloud phase <cit.>, ensures mild temperature conditions (15–50 K) that allow for diffusion of CO molecules, and relatively low extinction (A_v∼ 1-2 mag). Under these conditions, Case 1 is the most likely one, with H atoms produced from photodissociation of H2O and other hydrogenated molecules both in the gas and on the grain. Other mechanisms, such as cosmic ray ionization, also contribute to these fragmentation processes. Under these conditions, we determine that considering a null or low activation barrier for Reaction <ref> in astrochemical models is justified, because H atoms will ensure prompt conversion of HOCO to CO2 through reaction <ref>. However, we warn that the HC(O)OH abundance could be underestimated following this approach. At higher extinctions, but without enough CO surface coverage (Case 2, molecular cloud stage), the abundance of H atoms on grain surfaces will be reduced, and the HOCO complex will survive longer on the grain. Under these conditions, we recommend differentiating between Reactions <ref> and <ref>. The next two cases (Cases 3 and 4) can be treated conjointly. Our simulations show that forming the HOCO radical from CO + OH is not straightforward on CO ice and requires an initial energy input. While the energy required to initiate the reaction is not very high, the very low temperatures where Cases 3 and 4 would dominate (dense prestellar cores with T=10 K) rule out thermal energy as the initiator of the reaction. This energy input can come from a neighbouring chemical reaction, because H2O photodissociation should be a small factor in CO ices. Therefore, we consider that the approach presented in <cit.> of modelling the CO2 formation as a three-body reaction, e.g. H + O + CO, is a good compromise to model the reaction on CO ice. Whether the three-body reaction can be coarse-grained to yield CO2 + H directly or HOCO (and later proceed through reaction <ref>) is likely to depend on the H atom abundance. For example, an important factor should be the local cosmic ray ionization rate (ζ) determining the dissociation of H2 into 2H, and thus the ratio of HOCO radicals to H atoms. We must emphasize that coarse-graining the formation of CO2 through the title reaction to study CO2 formation and evolution may be acceptable only when the H atom abundance overwhelms the HOCO abundance. However, in doing so, the abundance of other HOCO-derived molecules like HC(O)OH will be underestimated. Caution is advised when the target of the models involves these molecules. Finally, we would like to discuss other possible scenarios. One possibility is that the excited formation of OH leads to non-thermal diffusion out of the reaction site or to its desorption (notice that the latter would be more plausible on CO ices due to the lower binding energy); in these cases, the reaction would not take place. Another possible scenario concerns the energy dissipation after HOCO is formed. Because of the high exothermicity of the CO + OH -> HOCO reaction and the low binding energies of these radicals on CO ice, there is the possibility that HOCO chemically desorbs, or triggers the desorption of a nearby ice CO molecule. 
In addition, if these reactions would have to take place in the inner layers of the ice, one must take into account that energy dissipation would be even more efficient due to the larger number of intermolecular interactions and the higher number of surrounding molecules, rendering each reaction step less and less efficient. § CONCLUSIONS Using accurate quantum chemical calculations and microcanonical kinetic modelling, we found that the CO + OH -> CO2 + H reaction, which has been considered as the most important producer of interstellar CO2, is rather inefficient, and its occurrence cannot be taken for granted. The reaction proceeds through a rather stable intermediate, HOCO, and more specifically through its two structural isomers t-HOCO and c-HOCO. On H2O ice, the formation of HOCO is feasible, but its evolution to CO2 requires a further reaction step that most likely involves H abstraction through reaction <ref>. On CO ice, we found, for the first time, that the formation of HOCO is not as efficient as currently assumed, owing to the lower adsorption energy of OH and CO molecules on CO ice. We indicate that non-thermal effects are necessary to form HOCO, and thus CO2, on CO ice. This limitation may be behind the recent ice observations showing higher fraction of CO2 found in water-dominated environments <cit.> when comparing with apolar (CO-dominated) ices. Because our calculations assume an ideal energy redistribution in an infinitely short time after the reactions, our results represent a lower bound for the production of HOCO and CO2 from the CO + OH reaction. We aim to improve the description of energy dissipation in forthcoming works to resolve ambiguous cases. We encourage further experimental work on the topic, especially on CO ices following <cit.>. Nonetheless, with our results, we were able to provide atomistic insight into the formation of CO2, one of the most important interstellar ice constituents, and indicate the cases where coarse-graining of the CO + OH reaction in astrochemical models is, to a first approximation, acceptable and not. G.M. thanks the Japan Society for the Promotion of Science (JSPS International Fellow P22013, and Grant-in-aid 22F22013) for its support. The authors acknowledge support by the Research Center for Computational Science in Okazaki, Japan (Projects: 22-IMS-C301, 23-IMS-C128), the state of Baden-Württemberg through the bwHPC consortium and the German Research Foundation (DFG) through grant no INST 40/575-1 FUGG (JUSTUS 2 cluster) (Project: 22-IMS-C301). Y.A. acknowledges support by Grant-in-Aid for Transformative Research Areas (A) grant Nos. 20H05847. aa § GAS-PHASE COMPARISON WITH <CIT.> We compare our energetics of the CO + OH -> CO2 + H gas-phase reaction profile at the DLPNO-CCSD(T)/CBS//MN15-D3BJ/6-31+G(d,p) level with the high-quality CCSD(T)/AVTZ results presented in <cit.> in <Ref>. Note that the energies presented here are not ZPVE corrected, unlike in the main manuscript. We observe excellent (between 0.0–1.3 kcal mol^-1) deviations between methods, e.g. chemical accuracy, for all structures except HCO2. As we introduced in the methods section, this intermediate and the associated entrance and exit transition states, TS5 and TS6, are irrelevant to the reaction kinetics or dynamics <cit.>. Hence, a wrong prediction of the energetics of this intermediate does not affect our results, and we do not include it in our kinetic simulations. Yet, it is interesting to mention the reason for the discrepancy. 
In <cit.>, the authors show that the HCO2 intermediate belongs to the C_2v symmetry point group at the CCSD(T)/AVTZ level of theory. However, the geometries at the MN15-D3BJ/6-31+G(d,p) level converge to a C_s intermediate. The T_1 diagnostic at the DLPNO-CCSD(T)/cc-pVTZ level of theory for the HCO2 intermediate hints at a strong multireference character (T_1=0.068), so it is not clear whether the CCSD(T) or the MN15-D3BJ calculations are better at predicting the correct HCO2 geometry. It is clear, however, that a dual-level approach like DLPNO-CCSD(T)/CBS//MN15-D3BJ/6-31+G(d,p) will fail due to the mismatch of geometries. Despite the discrepancy found for HCO2, the excellent agreement for all the relevant parts of the PES indicates that the studies on the H2O and CO clusters will yield the correct energetics for the system.
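As a minimal illustration of the dual-level benchmark comparison described in this appendix, the short Python sketch below contrasts two sets of relative energies for the gas-phase stationary points and flags deviations beyond roughly chemical accuracy, together with a T_1-diagnostic check. The numerical values are placeholders chosen for illustration only; they are not the data of <cit.> or of this work, and the 0.02 threshold for T_1 is the usual single-reference rule of thumb.

# Placeholder relative energies (kcal mol^-1, not ZPVE-corrected) for the
# gas-phase CO + OH stationary points; values are illustrative only.
dual_level = {"PRC": -1.5, "TS1": 0.6, "t-HOCO": -28.0, "TS2": -20.0,
              "c-HOCO": -26.5, "TS4": 3.5, "CO2 + H": -23.0, "HCO2": -10.0}
reference  = {"PRC": -1.7, "TS1": 0.9, "t-HOCO": -28.9, "TS2": -20.8,
              "c-HOCO": -27.2, "TS4": 4.1, "CO2 + H": -24.3, "HCO2": -16.0}

TOLERANCE = 1.3      # kcal mol^-1, the largest deviation treated as acceptable here
T1_THRESHOLD = 0.02  # common single-reference rule of thumb

for point, e_dual in dual_level.items():
    dev = abs(e_dual - reference[point])
    verdict = "ok" if dev <= TOLERANCE else "check geometry / multireference character"
    print(f"{point:8s} deviation = {dev:4.1f} kcal/mol  [{verdict}]")

t1_hco2 = 0.068  # from the DLPNO-CCSD(T)/cc-pVTZ calculation discussed above
if t1_hco2 > T1_THRESHOLD:
    print("HCO2: T1 =", t1_hco2, "-> strong multireference character suspected")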
http://arxiv.org/abs/2307.05201v3
20230711121042
The Staged Knowledge Distillation in Video Classification: Harmonizing Student Progress by a Complementary Weakly Supervised Framework
[ "Chao Wang", "Zheng Tang" ]
cs.CV
[ "cs.CV" ]
The Staged Knowledge Distillation in Video Classification: Harmonizing Student Progress by a Complementary Weakly Supervised Framework Chao Wang, Member, IEEE and Zheng Tang, Member, IEEE Manuscript created March, 2023; Chao Wang is with China Academy of Railway Sciences, Beijing 100081, China (e-mail: [email protected]). Zheng Tang is with NVIDIA, Redmond, WA 98052, USA (e-mail: [email protected]). Accepted as a Transactions Paper of IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) in 2023. August 12, 2023 =================================================================================================================================== In the context of label-efficient learning on video data, the distillation method and the structural design of the teacher-student architecture have a significant impact on knowledge distillation. However, the relationship between these factors has been overlooked in previous research. To address this gap, we propose a new weakly supervised learning framework for knowledge distillation in video classification that is designed to improve the efficiency and accuracy of the student model. Our approach leverages the concept of substage-based learning to distill knowledge based on the combination of student substages and the correlation of the corresponding substages. We also employ a progressive cascade training method to address the accuracy loss caused by the large capacity gap between the teacher and the student. Additionally, we propose a pseudo-label optimization strategy to improve the initial data labels. To optimize the loss functions of the different distillation substages during the training process, we introduce a new loss method based on feature distribution. We conduct extensive experiments on both real and simulated data sets, demonstrating that our proposed approach outperforms existing distillation methods in terms of knowledge distillation for video classification tasks. Our proposed substage-based distillation approach has the potential to inform future research on label-efficient learning for video data. Knowledge distillation, weakly supervised learning, teacher-student architecture, substage learning process, video classification, label-efficient learning. § INTRODUCTION In the past decade, deep learning, especially convolutional neural networks (CNNs), has achieved tremendous success in computer vision. Meanwhile, the weak supervision paradigm <cit.> has permeated every corner of its latest advances and has been widely used in the industrial sector. Image classification <cit.> is a fundamental task of computer vision, which gives rise to many other video tasks, such as video segmentation <cit.>, video instance segmentation <cit.>, object tracking <cit.>, and visual recognition <cit.>. To achieve state-of-the-art performance, various CNN models have become increasingly deeper and wider, requiring more computational power and memory for model inference. 
However, large-scale models are not ideal for industrial applications, making them challenging to be deployed in video processing applications <cit.> such as self-driving cars and embedded systems. To tackle this issue, knowledge distillation (KD) has emerged as a promising technique to transfer knowledge from a larger and more complex teacher model to a smaller and less complex student model <cit.>. The field of KD has seen significant progress in recent years, with research focusing on the types of KD, loss functions, and the design of teacher-student architectures <cit.>. Different types of KD, including response-based <cit.>, feature-based <cit.>, and relation-based <cit.>, have been proposed to transfer knowledge from the teacher to the student. Researchers have also proposed various loss functions to enhance the student's performance, such as distilling knowledge with dense representations <cit.>, attention-based knowledge distillation <cit.>, and temperature-damping normalization <cit.>. Furthermore, different teacher-student architectures have been designed, including multiple teachers-students <cit.>, self-training <cit.>, and mutual learning <cit.>. However, despite the abundance of research on the topic, there has been little attention paid to the internal relationship between the type of distillation method and the structural design of the teacher-student architecture. KD is a process that mimics human learning by systematically transferring knowledge from a teacher model to a student model. To maximize the effectiveness of this process, we propose a novel framework called Staged Knowledge Distillation (SKD), which consists of three distinct learning stages, as shown in Figure <ref>. We argue that a blind, single-stage approach may be inefficient and ineffective due to its increased cost and potential for bias. To address this issue, we adopt a parallel multi-branch structure training method divided into three substages (see Figure <ref>), which emulates the human learning process and improves the algorithm's generalization ability. However, if the capacity gap between the teacher and the student is too large, the training effect will be diminished <cit.>. To overcome this limitation, we introduce the teaching assistant method <cit.>, which we refer to as cascade training. Additionally, we use a weakly supervised label enhancement technique that generates high-quality pseudo labels to supervise the classification in the video. The final student model in SKD is obtained by combining the results of each stage, which reduces the model's generalization error. However, the same learning framework is used internally for each parallel training branch, leading to the excessive correlation between models <cit.> and increased inference time and memory occupation when deployed. To address this issue, we propose a novel variant called Relevant Staged Knowledge Distillation (RSKD), which employs a combination stage (CS) approach, as illustrated in Figure <ref>. RSKD omits the ensemble process and instead uses a combination-based stage distillation. Finally, we enhance the KD loss function between different processes to adapt to our proposed SKD and RSKD frameworks. For feature-based KD, we propose an improved method for comparing the distribution of feature dimensions (DF), which selects the teacher's top K channels and the student's K channels for distillation based on the original loss function. This improvement can significantly improve the effectiveness of distillation. 
In this paper, we make several contributions to the field of knowledge distillation: * We propose a new loss method, DF, which is based on the feature distribution and is able to uncover knowledge hidden in the distribution of features. * We address traditional challenges in knowledge distillation by treating it as a stage learning problem and proposing a combination stage approach (CS). * We propose two new weakly supervised distillation frameworks. The first, SKD, is a substage-integrated framework that simulates the human-stage learning process. The second, RSKD, is based on a relevant combination stage approach that improves efficiency and accuracy. * We conducted extensive experiments on two analog data sets (CIFAR-100 and ImageNet) and one real data set (UCF101) <cit.>, and demonstrate that our models achieve state-of-the-art and competitive results in video classification using knowledge distillation. These contributions demonstrate the effectiveness of our proposed methods and provide insights into the potential of knowledge distillation for improving video classification models. § RELATED WORKS §.§ Weakly-Supervised Paradigms Automatic video detection has gained increasing attention due to its usefulness in various intelligent surveillance systems. In recent years, research has focused on the weakly supervised learning paradigms due to the scarcity of clip-level annotations in video datasets <cit.>. Existing one-stage methods <cit.> rely on Multiple Instance Learning to describe anomaly detection as a regression problem and adopt an ordering loss approach. However, the lack of clip-level annotations often results in low accuracy. To address this issue, a two-stage approach based on self-training has been proposed <cit.>. This approach generates pseudo-labels for clips and uses them to refine the discriminative representation. However, the generated pseudo-labels' uncertainty is not considered, which may negatively impact the classifier's performance. §.§ Types of Knowledge Distillation Knowledge distillation (KD) was first proposed in <cit.> and has since been widely used to transfer knowledge from a large teacher network to a smaller student network <cit.>. Direct response-based distillation methods use softened label outputs to match the teacher and student predictions. However, they can be challenging to converge in some cases, leading to the development of feature-based distillation methods. FitNets <cit.> proposed a two-stage method based on the middle layer to improve the performance of student networks. However, introducing an intermediate layer for dimension conversion may introduce additional parameters. Attention Transfer <cit.> introduced the attention mechanism to transform the feature information of each layer inside the teacher network to the student network. Relation-based distillation explores the relationship between different layers in the teacher and student networks. In <cit.>, the authors define the matrix correlation to measure the feature correlations between input and output layers of teachers and students. This method requires teachers and students to have the same structure, which limits the model's generalization ability. §.§ Structures of Knowledge Distillation KD's teacher-student network structure design can be summarized into five categories, as shown in <ref>. The self-training method <cit.> belongs to <ref>, where a poor teacher is used to train the student network. 
The teacher is trained to bypass the soft labels of the student through the label smoothing technique. The authors of <cit.> propose a deep mutual learning strategy under <ref>, in which a set of student networks learns and guides each other throughout the training process. Instead of compressing the model, the student network <cit.> is trained with the same parameterization as the teacher network. The final result is obtained by combining the training results of multiple students and calculating their averages, falling under ensemble learning as shown in <ref>. As demonstrated in <cit.>, a model with multiple teachers and a single student is proposed to reduce the complexity of the student network. Subsequently, a novel collaborative teaching strategy <cit.> is proposed, leveraging the knowledge of two remarkable teachers to discover valuable information. Park <cit.> proposes a feature-based ensemble training method for KD, which employs multiple nonlinear transformations to transfer the knowledge of multiple teachers in parallel. These approaches can be visualized in <ref>. However, the primary issue with these approaches is that they rely on a single KD type, which can lead to a weakened distillation effect if there is a significant discrepancy between the teacher and student models. To address this, a novel distillation method for teaching assistants is proposed <cit.>, as shown in <ref>, which addresses the decreased KD efficiency when the gap between the teacher and student networks is too large. However, the lack of multi-type KD in stages results in a weak generalization ability of the model, making it challenging to apply to large-scale datasets and models. § PROPOSED METHOD In this section, we present our proposed method for knowledge distillation, which consists of three key components. First, we improve the loss function of the distillation substage to optimize the distillation process and make it adaptable to our proposed method. Second, we analyze the effectiveness of multi-branch stage distillation and propose a novel approach that combines distillation substages with less correlation within the structure to achieve maximum distillation effect. Finally, we introduce two new distillation methods, SKD and RSKD, which aim to simulate the learning process of the human stage as closely as possible by applying the substage and multi-branch combined distillation method. Together, our proposed method provides a comprehensive framework for efficient and effective knowledge distillation. §.§ Label Generation To simulate a video classification dataset, we designed a composite-label generator inspired by the parallel multi-head classifier in <cit.>. The generator consists of an original-label generator and a pseudo-label generator. The main idea is to utilize a CycleGAN <cit.> to generate labels with relatively high approximate probabilities and then use random selection to generate labels with relatively low approximate probabilities (i.e., anomalous labels), and perform interpolation combinations. As illustrated in <ref>, the original-label image is generated by directly extracting the probability value from the corresponding position of the original image generated by the pre-training model. For the pseudo-label image, we utilized the pre-training module to process an input image obtained through random search and CycleGAN <cit.>, and calculated the probability value of the corresponding position. 
The original-label image and the pseudo-label image are then combined to form a video package, which serves as the training set for the model. Specifically, the forward network of the pre-training model first calculates, for each generated frame image, the probability values of matching the real labels. Then, using CycleGAN and random sampling techniques, each static frame image is expanded into a package dataset, which is a combination of a sequence of continuous images, simulating a group of data frame images within a short time interval. Finally, the overall probability value of the package is obtained through weighted aggregation. In general, when considering single-frame static images and simulated video package datasets, from the perspective of the loss function, the weighted processing of video frames does not fundamentally differ from the processing of individual frames. Therefore, it does not affect the accuracy of data processing. §.§ Optimization of Substage KD Response-based Substage The response-based substage, depicted in <ref>, can be formulated as follows: ℓ^Rp_KD(ϑ^Rp_T,ϑ^Rp_S) = ℓ(p_S,y_t) + λ L(φ(z^T) ∥φ(z^S)), where ℓ(·) is the cross-entropy loss between the student and the ground-truth labels, L(·) is the KL divergence used to measure the consistency of the distribution between the teacher and the student, and λ is a hyperparameter that balances the impact of the KL divergence on the overall loss. z^T and z^S are the activation values of the teacher network T and the student network S, while p_S and y_t are the direct output of S and the actual values of the labels, respectively. The cross-entropy loss function ℓ(p_S,y_t) is computed as follows: ℓ(p_S,y_t) = ∑_i=1^N H(p^(i)_S,y^(i)_t), where H(·) denotes the cross-entropy function evaluated for the i-th training sample, and N is the capacity of the training sample space. The probability output of the student classification task can be defined as p_S = [r_1, r_2, …, r_k, …, r_C], where C denotes the number of classes. Each probability value r_k in p_S is computed using the softmax function: r_k = exp(z_k)/∑^C_j=1exp(z_j). In our case, the KL divergence is used as the loss function for the classification task, which can be defined as follows: L = ∑^C_i=1φ(y^T_i)·log(φ(y^T_i)/φ(y^S_i)), where y^T_i∈ z^T, y^S_i∈ z^S, and φ(·) denotes the probability distribution of activation values, which can be written as below: φ(y_i) = exp(y_i/𝒯)/∑_j=1^Cexp(y_j/𝒯) Here, 𝒯 is a hyperparameter representing the distillation temperature. Feature-based Substage The feature-based substage, as illustrated in <ref>, is a critical component of the knowledge distillation process, and it is formulated to ensure that the student network can mimic the teacher network's feature representation as accurately as possible. In this substage, the teacher-student feature mapping is accomplished by minimizing the loss function ℓ^Fe_KD(ϑ^Fe_T,ϑ^Fe_S), which is defined in <ref>. ℓ^Fe_KD(ϑ^Fe_T,ϑ^Fe_S) = ℓ(p_S,y_t)+∑_i∈ℒ, k∈ S_C(αΨ(ϝ^T_i,ϝ^S_i) + βκ(ϝ^T_ik,ϝ^S_ik)), where ℒ represents the set of layer pairs that need to be mapped and distilled between the teacher and student features, and Ψ(·) is the attention transfer (AT) <cit.> loss function used to train the feature vectors of the selected layers in the teacher-student network. The AT method accumulates the features across the channel dimensions to achieve dimensionality reduction, allowing teachers and students to maintain the same dimension when distilling features. However, this tends to lose the distribution of features on each channel. 
In many cases, the feature differences between channels not only exist in the global feature information but are also closely related to the feature distribution. To address this issue, we propose the distribution of feature dimensions (DF) method, which compares the distribution of feature dimensions. To ensure the similarity of the dimension distribution of the feature layers between teachers and students, this method pools the largest K channel values in the channel dimension together with their corresponding location information, maximizing the consistency of the feature dimension distribution as much as possible. α and β are two hyperparameters that balance the effect of AT versus DF in the overall loss. Ψ(ϝ^T_i,ϝ^S_i) and κ(ϝ^T_ik,ϝ^S_ik) are defined as follows: Ψ(ϝ^T_i,ϝ^S_i) = ∥ϝ^T_i(A)/∥ϝ^T_i(A)∥_2-ϝ^S_i(A)/∥ϝ^S_i(A)∥_2∥_2 κ(ϝ^T_ik,ϝ^S_ik) = ∑_k∈ S_C∥ϝ^T_ik(P̂·A)/∥ϝ^T_ik(P̂·A)∥_2 - ϝ^S_ik(P̂·A)/∥ϝ^S_ik(P̂·A)∥_2∥_2, where S_C denotes the collection of student channels to which we want to transfer feature maps. ϝ^T_i(A) = ∑^Ch_j=1 A_j^2, where Ch is the number of channels in the i-th layer, ϝ is a feature mapping function, which converts the 3D features of size C×H×W into corresponding 2D features, and A_j represents the 2D feature vector on the j-th channel. Unlike AT, DF considers the top K channels of the maximum feature dimension of the teacher, defined as ϝ^T_ik(P̂·A) = ϝ^T_i(P̂_k· A_k^2), where P̂_k represents the position information vector corresponding to the feature of the k-th channel. This vector is used to determine the optimal channel selection for the teacher, allowing for the most efficient use of the available resources. Relation-based Substage In the relation-based substage, the goal is to measure the similarity between the internal structures of the teacher and student networks. To achieve this, we adopt the FSP matrix proposed by <cit.>, which is a matrix that measures the inner product between two feature layers in the neural network. This matrix is computed using the weights of the feature layers of the teacher and student networks and can be formulated as follows: ℓ^Re_KD(ϑ^Re_T,ϑ^Re_S) = ℓ(p_S,y_t) + γ·ℓ(G(W_T),G(W_S)), where p_S and y_t denote the predicted and true labels of the student network, respectively. The FSP matrix G(·) is used to measure the similarity between the feature layers of the teacher and student networks. γ is a hyperparameter that is used to balance the impact of the FSP term on the overall loss. To compute the FSP matrix for the selected teacher-student pair, we denote m as the selected input layer, n as the selected output layer, h as the height, and w as the width. The FSP matrix can be computed as follows: G_(m,n)(·) = ∑_i=1^h∑_j=1^w W_(·)m^(i,j)× W_(·)n^(i,j)/(h × w). Here, W_(·)m and W_(·)n are the weight matrices of the input and output layers of the selected teacher-student pair, respectively. The FSP matrix is normalized by the product of the height and width of the feature map to ensure that the values of the matrix are between 0 and 1. This normalization helps to reduce the impact of the size of the feature map on the FSP matrix. The relation-based substage helps the student network to mimic the internal structure of the teacher network by measuring the similarity between their feature layers. The FSP matrix is a powerful tool for measuring this similarity and can help to improve the performance of the student network. 
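To make the three substage losses concrete, the following minimal PyTorch sketch implements the response-based soft-label term, an AT-style map, a top-K channel variant in the spirit of the DF method, and the FSP matrix. This is our illustrative reading of the equations above rather than the authors' released code: the function names, tensor shapes, and reductions over the batch are assumptions, and only the temperature and weighting defaults (T = 4, λ = 0.9) echo values stated in the experimental settings.

import torch
import torch.nn.functional as F

def kd_response_loss(student_logits, teacher_logits, labels, T=4.0, lam=0.9):
    """Response-based substage: cross-entropy plus temperature-softened KL term."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)   # usual T^2 gradient rescaling
    return ce + lam * kl

def at_map(feat):
    """AT-style map: channel-summed squared activations, L2-normalised per sample."""
    a = feat.pow(2).sum(dim=1).flatten(1)            # (B, H*W)
    return F.normalize(a, dim=1)

def df_topk_loss(f_t, f_s, k=8):
    """DF-style term: compare the teacher's K highest-energy channels, keeping
    their channel positions, against the same channels of the student."""
    b, c, h, w = f_t.shape
    idx = f_t.pow(2).sum(dim=(2, 3)).topk(k, dim=1).indices   # (B, K) teacher positions
    gather_idx = idx.unsqueeze(-1).expand(-1, -1, h * w)
    g_t = torch.gather(f_t.pow(2).reshape(b, c, h * w), 1, gather_idx)
    g_s = torch.gather(f_s.pow(2).reshape(b, c, h * w), 1, gather_idx)
    g_t = F.normalize(g_t.flatten(1), dim=1)
    g_s = F.normalize(g_s.flatten(1), dim=1)
    return (g_t - g_s).norm(p=2, dim=1).mean()

def fsp_matrix(feat_in, feat_out):
    """FSP matrix between an input and an output feature layer with equal H, W."""
    b, c1, h, w = feat_in.shape
    c2 = feat_out.shape[1]
    f1 = feat_in.reshape(b, c1, h * w)
    f2 = feat_out.reshape(b, c2, h * w)
    return torch.bmm(f1, f2.transpose(1, 2)) / (h * w)        # (B, C1, C2)

# Toy usage with random tensors standing in for real logits and feature maps.
B, C, H, W, n_cls = 4, 16, 8, 8, 100
s_logits, t_logits = torch.randn(B, n_cls), torch.randn(B, n_cls)
labels = torch.randint(0, n_cls, (B,))
t_f1, t_f2 = torch.randn(B, C, H, W), torch.randn(B, C, H, W)
s_f1, s_f2 = torch.randn(B, C, H, W), torch.randn(B, C, H, W)

loss_rp = kd_response_loss(s_logits, t_logits, labels)
loss_at = (at_map(t_f1) - at_map(s_f1)).norm(p=2, dim=1).mean()
loss_df = df_topk_loss(t_f1, s_f1, k=8)
loss_re = F.mse_loss(fsp_matrix(s_f1, s_f2), fsp_matrix(t_f1, t_f2))
print(loss_rp.item(), loss_at.item(), loss_df.item(), loss_re.item())

In a full SKD/RSKD loop, these terms would be distributed over the branch and cascade structure introduced in the next subsection and weighted as in the loss definitions above.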
§.§ Combination Stage In the proposed improved (CS) design method, as shown in <ref>, auxiliary branch substages are added in parallel for distillation, with a response-based substage serving as the backbone. Different combination stages are then composed of substages from different branches. This method introduces diversity in the substages used for distillation, which helps students learn more diverse and essential knowledge from teachers, and thus improves overall accuracy. Although the diversity introduced may not be as accurate as the corresponding substages based on the response, it indirectly facilitates the learning of deep semantic features of distillation. This approach changes the style of substages of branches and the style of channels simultaneously, which enhances the generalization ability of students. This is particularly useful when the same substage is used for each distillation branch, which results in high style similarity between adjacent substages within the same branch, hindering the learning process of adjacent substages and diminishing the generalization ability of students, as shown in <ref>. It is worth noting that the combination of CS branches and the backbone network structure significantly affects the final KD. As ensemble models have demonstrated, the diversity of classifiers plays an essential role in boosting performance <cit.>. Similarly, the diversity introduced in the substages used for distillation can significantly improve the performance of KD. The improved CS design method achieves this by introducing diversity in the substages used for distillation. §.§ SKD and RSKD The proposed SKD framework is based on the observation that different stages of KD have a phased impact on the final learning process. The training process of SKD mainly consists of three stages: response-based, feature-based, and relation-based. Each stage has a distinct purpose, allowing for a more comprehensive learning experience. The response-based stage distills the teacher-student final output result, which is similar to the final result in the human learning process. The feature-based stage distills the specific layers that the teacher-student selects to distill, making them as similar as possible, akin to the shorter cycle in the human learning process. Lastly, the relation-based stage distills the similarity between specific layers at different scales, replicating the phased outcomes learned in the long human learning cycle. This multi-stage approach enables the student to learn both high-level and low-level knowledge from the teacher network. The SKD training process is outlined in Algorithm <ref>. By leveraging the distinct advantages of each stage, our SKD framework provides a comprehensive and effective learning experience. However, integrating the output results of the three stages would lead to memory consumption and delay in the deployment model. To address this, the proposed RSKD method forgoes the idea of an ensemble and instead uses a combination stage structure to combine the output results of different branch stages in the distillation process. This approach improves the model's generalization performance and reduces memory consumption and inference delay, achieving a balance between precision and efficiency, making it more conducive to model deployment in small embedded devices. 
As shown in <ref>, the RSKD knowledge distillation process consists of a combination stage where different types of substage designs are used for each branch, with the backbone network using a response-based substage. The cascaded backbone network uses the response-based KD substage, and other auxiliary branch substages are added in parallel for distillation. <Ref> details the RSKD training process. § EXPERIMENTS §.§ Experimental Setup Datasets We perform experiments on two public image classification benchmarks: CIFAR-100 and ImageNet. To synthesize pseudo-labels for the video classification images, we follow the method shown in <ref>, which allows us to leverage the rich information in videos to improve image classification performance. We also used a real video action recognition dataset: UCF101 <cit.>. CIFAR-100 is a widely used image classification dataset that contains 32x32 images of 100 categories. The dataset comprises 50,000 training images, with 500 images for each class, and 10,000 test images, with 100 images for each class. ImageNet is a well-known large-scale image classification dataset that consists of 1.2 million training images from 1,000 classes and 50,000 validation images. Unlike CIFAR-100, ImageNet better represents the real-world diversity and complexity of images. UCF101 is a widely-used dataset for video action recognition, comprising around 13,000 video clips sourced from YouTube. It encompasses 101 distinct action categories, with each video clip having an average duration of approximately 7 seconds and a fixed resolution of 320x240 pixels. Network Structure We evaluate our method on two types of network structures: homogeneous and heterogeneous. Homogeneous networks refer to the network structures of a teacher and student network with the same architecture, allowing us to measure the performance gain of distillation in a like-for-like scenario. We consider three architectures for homogeneous networks: ResNet <cit.>, Wide ResNet <cit.>, and VGG <cit.>. For heterogeneous networks, we test our method on networks with different architectures for the teacher and student networks, which allows us to evaluate the generalization ability of our method across architectures. We consider four architectures for heterogeneous networks: ResNet, Wide ResNet, ShuffleNet <cit.>, and MobileNet <cit.>. Implementation Details To verify our method, we compared our proposed SKD and RSKD with KD<cit.>, FITNET<cit.>, AT<cit.>, AB<cit.>, FSP<cit.>, CRD<cit.> and DKD<cit.> methods. To ensure a fair comparison, we implemented the algorithms of other methods based on their papers and codes. For CIFAR-100 datasets, we expanded the original 32x32 images to 40x40 by filling 4 pixels on each side. We used randomly cropped 32x32 pixel images for training and kept the original 32x32 images for testing. We used pre-trained teachers as the initial teacher network. To maintain consistency, we set the weight factor β that balances the actual loss with the distillation loss to the optimal value specified in the original text if available. Otherwise, we used the value obtained by grid search. The weight factor β of different networks is shown in <ref>. For the distillation of AB<cit.> and FSP<cit.>, we used a separate pre-training phase, so β was set to 0. The α of DKD<cit.> network was set to 1. We set the temperature parameter T = 4 and α = 0.9 for KD following <cit.>. All methods in our experiment were evaluated using SGD. 
Our SKD and RSKD algorithms used three substage cascades per process by default, and we set λ to 0.9, α to 200, β to 300, and γ to 0.9. For CIFAR-100, the initial learning rate of each stage cascade was 0.01 for MobileNetV2<cit.>, ShuffleNetV1<cit.>, and ShuffleNetV2<cit.>, and 0.05 for the other models. The learning rate decayed by a factor of 0.1 every 30 epochs after the first 120 epochs until the last 150 epochs, and the batch size was set to 128. For ImageNet, we followed the standard PyTorch<cit.> practice, with each stage cascade having an initial learning rate of 0.2, a weight decay of 0.001 using the MSRA initialization technique<cit.>, and a batch size of 256. For UCF101, the model was trained from scratch using randomly initialized weights. Initially, it was trained for 200 epochs using SGD with a learning rate of 0.01 and a weight decay of 0.0001. Subsequently, it was trained for an additional 100 epochs using the Adam optimizer with a learning rate of 0.0002 and a weight decay of 0.05. The batch size was set to 16 clips per GPU. All training and validation processes were implemented using the PyTorch framework on an NVIDIA Tesla P100 industrial GPU server. §.§ Details about RSKD Formulation The final distillation loss of our formalized RSKD is given as follows: ℓ^final_KD = ℓ(p_S,y_t) + η·ℓ^Rp-CS_KD + ξ·ℓ^Fe-CS_KD + τ·ℓ^Re-CS_KD, where ℓ(p_S,y_t) is the cross-entropy loss between the student predictions and the ground-truth labels, as before, and η, ξ, and τ are hyperparameters that control the importance of the consistency losses. The ^Rp-CS, ^Fe-CS, and ^Re-CS superscripts refer to the response-based, feature-based, and relation-based consistency losses, respectively. We believe that the effectiveness of RSKD for knowledge transfer may depend on the complexity of the datasets involved. Similar to the learning process of humans, when the knowledge to be mastered is elementary, the learning process is short and resembles the response-based substage. On the other hand, when the knowledge to be learned is complex, the learning process is long and resembles the feature-based and relation-based substages. Therefore, inappropriate ratios of η, ξ, and τ may compromise the accuracy of the student model's predictions. §.§ Comparison of Training Accuracy Video Classification on CIFAR-100 The top-1 validation accuracy of RSKD on CIFAR-100 is shown in <ref> and <ref>. In the absence of an ensemble, <ref> presents the experimental results of teacher-student networks with the same network structure style, while <ref> shows the results of teacher-student networks with different network structure styles. It is worth noting that while RSKD does not always achieve the best value for some teacher-student combinations when the teacher and student network styles are the same, it outperforms other distillation methods in experiments with different network architectures. This phenomenon can be explained by the fact that the small size of CIFAR-100 may not fully demonstrate the advantages of stage learning within the same style of network architecture. Further discussion on this topic can be found in <ref>, <ref>, <ref>, and <ref>. The experimental results confirm that the CS structure in RSKD can uncover the hidden knowledge in different network architectures, leading to better distillation performance. Video Classification on ImageNet We evaluate our SKD, RSKD, and ∙RSKD on ImageNet, and present the top-1 and top-5 validation accuracies in <ref> and <ref>, respectively. 
<ref> contains the results of teachers and students having the same network architecture, while <ref> shows the results for teachers and students from different network architectures. To ensure experimental fairness, we employ ensemble and cascading strategies in all baseline models. Specifically, we use a three-level cascading and three-branch ensemble configuration as the default setting, allowing for a more accurate performance comparison. The experimental results confirm our theoretical hypothesis that our SKD, RSKD, and ∙RSKD achieve significant improvements on ImageNet with the increase of the data set. RSKD improves the accuracy of top-1 and top-5 by about 1% when compared with networks with the same teacher-student network style, while ∙RSKD improves the accuracy of top-1 and top-5 by about 1%-1.5%. For networks with different teacher-student network styles, RSKD improves the accuracy of top-1 and top-5 by about 1%-1.5%, while ∙RSKD improves the accuracy of top-1 and top-5 by about 1.5%-2%. Our methods outperform the most state-of-the-art distillation methods on ImageNet. Video Classification on UCF101 We assessed the performance of our SKD, RSKD, and ∙ RSKD techniques on the UCF101 dataset, and presented the validation accuracies for top-1 and top-5 in <ref> and <ref> respectively. <ref> encompasses the outcomes for both teachers and students with identical network architectures, while <ref> showcases the results obtained from teachers and students with distinct network architectures. To ensure the fairness of our experiments, we employed ensemble and cascading strategies across all baseline models. Specifically, we employed a default setup of a three-stage cascading and three-branch ensemble configuration, allowing for more precise performance comparisons. The experimental results confirm our theoretical hypothesis that with the increase of real video data, our SKD, RSKD, and ∙ RSKD techniques achieve significant improvements on UCF101. In comparison to networks with the same teacher-student network architecture, RSKD enhances the top-1 and top-5 accuracy by approximately 2%, while ∙ RSKD achieves an improvement of about 2% - 3% in both top-1 and top-5 accuracy. For networks with different teacher-student network architectures, RSKD improves the top-1 and top-5 accuracy by approximately 2% - 2.5%, whereas ∙ RSKD yields an enhancement of around 4% - 5% in top-1 and top-5 accuracy. §.§ Comparison of Training Efficiency We evaluate the training costs of state-of-the-art distillation methods and demonstrate that RSKD is highly efficient. As illustrated in <ref> and <ref>, RSKD achieves an optimal balance between model performance and training costs. The training accuracy of RSKD surpasses the most advanced baseline distillation models while maintaining minimal training time and the same model size. Compared to SKD, RSKD is more efficient at only one-third of the size of SKD and saves nearly three times the training time. §.§ Visualization of Correlation We visualize the differences in correlation matrices between teacher and student logits for various distillation methods to demonstrate the effectiveness of RSKD in preserving the teacher's knowledge. In CIFAR-100, we compare four students: SKD∘, SKD⋆, SKD†, and RSKD, as shown in <ref>. The results indicate that RSKD achieves the most consistent correlation between teachers and students using the CS structure, resulting in highly similar logits with minimal discrepancies. 
This suggests that RSKD is a practical approach for preserving the teacher's knowledge in the student. In ImageNet, we further evaluate the correlation between teacher and student logits for different combinations of teacher-student networks. <ref> demonstrate that RSKD minimizes the differences in correlation matrices for different teacher-student networks, indicating that RSKD can preserve the teacher's knowledge across different network architectures. Therefore, our method can generate student logits that closely resemble the teacher's, providing reliable results. These results show that preserving the correlation between teacher and student logits is crucial for effective knowledge transfer in distillation. The preservation of correlation ensures that the student network reproduces the same predictions as the teacher network, even if the student network is smaller or has a different architecture than the teacher network. The correlation visualization provides further evidence that RSKD can effectively preserve the teacher's knowledge in the student network. §.§ Ablation Study We conducted ablation experiments on the RSKD method to evaluate the impact of cascade operation, DF method, CS structure, and K value on the accuracy of CIFAR-100, ImageNet and UCF101 datasets. The results are presented in <ref>, <ref>, and <ref>. Our experiments demonstrate that the response-based structure of the backbone network achieves optimal results in general. For small datasets with the same teacher-student model structure, the cascade operation and DF method have a more significant effect on the model performance. In contrast, reasonable CS structures have the same or slightly improved performance. However, the CS structure is more effective for models with larger datasets or different teacher-student model structures. This is because models with CS structures are less prone to overfitting, especially with a large amount of data or heterogeneous teacher-student networks. The value of K is simultaneously determined using the grid search method. For teacher-student networks with the same structure, in the case of an optimal solution, the value of K tends to be larger. Conversely, for teacher-student networks with different structures, in the case of an optimal solution, the value of K is relatively smaller. This disparity arises from the fact that models with a homogeneous structure are more influenced by the first K channels in each layer, whereas heterogeneous models are less affected by the range of K values. In summary, the RSKD method with the appropriate combination of cascade operation, DF method, and CS structure can significantly improve the model's accuracy, particularly for large or diverse datasets. § CONCLUSION In this paper, we have proposed novel weakly supervised teacher-student architectures, SKD and RSKD, that transform the KD process into a substage learning process, improving the quality of pseudo labels. We extensively investigate the relationship between the type of sub-stage learning process and the teacher-student structure, and demonstrate the validity of our method on the video classification task of CIFAR-100, ImageNet package dataset and UCF101 real dataset. Our analysis of the design methodology of the multi-branch substage composite structure and the optimization of the corresponding loss function has further validated the human-inspired design of the teacher-student network structure and substage learning process, leading to improved performance. 
Importantly, our work is highly relevant to label-efficient learning on video data, which aims to explore new methods for video labeling and analysis. Our proposed SKD and RSKD methods offer promising approaches to improve the quality of pseudo labels in video classification tasks, which is a challenging problem due to the complexity and high dimensionality of video data. Inspired by human staged learning strategies, our approach offers an intuitive framework that is not limited to specific computer vision tasks. We believe that our strategies can be further applied to other computer video tasks such as video segmentation, single object tracking, and multiple object tracking. These tasks have gained considerable attention in the computer vision community, and we believe that our approach can be leveraged to enhance their performance as well. Overall, our work contributes to advancing the field of weakly supervised learning and can have significant practical implications in real-world applications. IEEEtran [ < g r a p h i c s > ]Chao Wang received a B.E. degree in Process Equipment and Control Engineering from Jiangnan University in 2007, and an M.S. degree in Computer Application Technology from Northeast University in 2010. In the same year, he joined the China Academy of Railway Sciences and is currently an Assistant Researcher. Since assuming this position in 2012, he has been in charge of or involved in obtaining 8 invention patents and 3 utility model patents. His published papers cover interdisciplinary research topics in the fields of big data, neural networks, machine learning, and rail transportation. His research interests include intelligent rail transit systems, big data systems, cloud computing, computer vision, and graph neural networks related to machine learning. Chao Wang has led or participated in several key projects in China's rail transportation industry, achieving significant socioeconomic benefits. In 2021, his team won the second prize of the Science and Technology Progress of Beijing Rail Transit Society Award. The project he participated in received the first prize of the China Academy of Railway Sciences Award in 2020. In 2018, the project he led received the Innovation Award of the Communication Signal Research Institute. An invention patent he participated in won the China Excellent Patent Award in 2017. [ < g r a p h i c s > ]Zheng Tang (M’16) earned his B.Sc. (Eng.) degree with Honors from the joint program by Beijing University of Posts and Telecommunications and Queen Mary University of London in 2014. He further obtained his M.S. and Ph.D. degrees in Electrical and Computer Engineering from the University of Washington in 2016 and 2019, respectively. In 2019, Dr. Tang joined Amazon as an Applied Scientist for the Amazon One team, serving there until 2021. Currently, he is a Senior Data Science Engineer at NVIDIA's Metropolis division, a position he has held since 2021. He has already had 3 U.S. patents and 17 peer-reviewed publications. His research interests include intelligent transportation systems, object tracking, re-identification, and other topics in the computer vision and machine learning realm. Dr. Tang has been an Associate Editor for the IEEE Transactions on Circuits and Systems for Video Technology since 2021, and an Organizing Committee Member for the AI City Challenge Workshops in conjunction with CVPR since 2020. He will also serve as the Challenge Chair for AVSS 2023, and previously served as an Area Chair for MLSP 2021. 
He received the Best AE Award of T-CSVT in 2021. A team he led triumphed in the 2nd AI City Challenge Workshop in CVPR 2018, securing the first rank in two tracks. Additionally, his paper was a finalist for two Best Student Paper Awards at ICPR 2016.
http://arxiv.org/abs/2307.04967v1
20230711020037
Detecting Tidal Features using Self-Supervised Representation Learning
[ "Alice Desmons", "Sarah Brough", "Francois Lanusse" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.IM" ]
[ Detecting Tidal Features using Self-Supervised Representation Learning equal* Alice DesmonsUNSW Sarah BroughUNSW Francois LanusseCEA UNSWSchool of Physics, University of New South Wales, NSW 2052, Australia CEAAIM, CEA, CNRS, Université Paris-Saclay, Université Paris Diderot, Sorbonne Paris Cité, F-91191 Gif-sur-Yvette, France Alice [email protected] Machine Learning, ICML 0.3in ] Low surface brightness substructures around galaxies, known as tidal features, are a valuable tool in the detection of past or ongoing galaxy mergers. Their properties can answer questions about the progenitor galaxies involved in the interactions. This paper presents promising results from a self-supervised machine learning model, trained on data from the Ultradeep layer of the Hyper Suprime-Cam Subaru Strategic Program optical imaging survey, designed to automate the detection of tidal features. We find that self-supervised models are capable of detecting tidal features and that our model outperforms previous automated tidal feature detection methods, including a fully supervised model. The previous state of the art method achieved 76% completeness for 22% contamination, while our model achieves considerably higher (96%) completeness for the same level of contamination. § INTRODUCTION The currently accepted model of the Universe, known as the Lambda Cold Dark Matter (ΛCDM) Cosmological Model, postulates that galaxies evolve through a process which is referred to as the `hierarchical merger model’, wherein the growth of the universe's highest-mass galaxies is dominated by merging with lower-mass galaxies (e.g. ). During the merging process, the extreme gravitational forces involved cause stellar material to be pulled out from the galaxies, forming diffuse non-uniform regions of stars in the outskirts of the galaxies, known as tidal features. These tidal features contain information about the merging history of the galaxy, and can thus be used to study the galaxy evolution process. In order to draw accurate and statistically robust conclusions about this evolution process, we require a large sample of galaxies exhibiting tidal features. One thing that makes this difficult is the extremely low surface brightness of tidal features, which can easily reach μ_r≥ 27 mag arcsec^-2. With the next generation of wide-field optical imaging surveys reaching new limiting depths, such as the Vera C Rubin Observatory's Legacy Survey of Space and Time (LSST; ) which is predicted to reach μ_r∼ 30.1 mag arcsec^-2 <cit.>, assembling a statistically significant sample of galaxies with tidal features is becoming more feasible. One challenge associated with surveys like LSST, due to commence in 2024 and run for 10 years, is the amount of data predicted to be released, with LSST predicted to output over 500 petabytes of imaging data including billions of galaxies <cit.>. Current tidal feature detection and classification is primarily achieved through visual identification (e.g. ), but this amount of data is virtually impossible to classify visually by humans, even using large community based projects such as Galaxy Zoo <cit.>, and hence we are in urgent need of a tool that can automate this classification task and isolate galaxies with tidal features. With the promising recent results of machine learning in galaxy classification tasks (e.g. 
), we turn to machine learning to construct a model which can take galaxy images as input, convert them into representations - low-dimensional maps which preserve the important information in the image - and output a classification based on whether the galaxy possesses tidal features. We use a recently developed machine learning method that is essentially a middle-point between supervised and unsupervised learning, known as Self-Supervised machine Learning (SSL; ). Such models do not require labelled data for the training of the encoder, which learns to transform images into meaningful low-dimensional representations, but can perform classification when paired with a linear classifier and a small labelled dataset. Instead of labels, SSL models rely on augmentations to learn under which conditions the output low-dimensional representations should be invariant. These types of models have been successfully used for a variety of astronomical applications (e.g. ) Compared to supervised models, self-supervised models are also much easier to adapt to perform new tasks, and apply to datasets from different astronomical surveys <cit.> making this kind of model perfect for our goal of applying the tool developed using HSC-SSP data to future LSST data. § METHODS §.§ Sample Selection The dataset used for this work is sourced from the Ultradeep (UD) layer of the HSC-SSP Public Data Release 2 (PDR2; ) for deep galaxy images. We use the Ultradeep field, which spans an area of 3.5 deg^2 and reaches a surface brightness depth of μ_r∼ 28.0 mag arcsec^-2 as it reaches depths faint enough to detect tidal features. We assemble an unlabelled dataset of ∼44,000 galaxies by parsing objects in the HSC-SSP PDR2 database using an SQL search and only selecting objects which have at least 3 exposures in each band and have i-band magnitudes 15 < i < 20 mag. We set a faint magnitude limit of 20 mag to ensure that objects are bright enough for tidal features to be visible. We access the HSC-SSP galaxy images using the ‘Unagi’ Python tool <cit.> which, given a galaxy’s right ascension and declination, allows us to create multi-band ‘HSC cutout’ images of size 128 × 128 pixels, or 21 × 21 arcsecs, centred around each galaxy. Each cutout is downloaded in five (g, r, i, z, y) bands. For the training of the linear classifier we require a small labelled dataset of galaxies with and without tidal features. We use the HSC-SSP UD PDR2 dataset assembled by <cit.> composed of 211 galaxies with tidal features and 641 galaxies without tidal features. These galaxies were selected from a volume-limited sample from the cross-over between then Galaxy and Mass Assembly survey <cit.> and HSC-SSP with spectroscopic redshift limits 0.04 ≤ z ≤ 0.2 and stellar mass limits 9.50 ≤ log_10(M_⋆/M_⊙) ≤ 11.00 and have i-band magnitudes in the range 12.8 < i < 21.6 mag. To increase the size of our tidal feature training sample we classified additional galaxies from our HSC-SSP PDR2 unlabelled dataset of ∼ 44,000 objects, according to the classification scheme outlined in <cit.>. Our final labelled sample contains 760 galaxies, 380 with tidal features, labelled 1, and 380 without, labelled 0. We split our labelled dataset set into training, validation, and testing datasets composed of 600, 60, and 100 galaxies respectively. §.§ Image Pre-processing and Augmentations Before the images are augmented and fed through the model we apply a pre-processing function to normalise the images. 
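The exact normalisation function is not reproduced here; purely as an assumed illustration, the sketch below applies a per-channel arcsinh stretch followed by percentile scaling to a five-band cutout. This is a common choice for deep optical imaging, not necessarily the transform used in this work.

```python
import numpy as np

def preprocess(cutout):
    """Assumed per-channel normalisation of a (128, 128, 5) cutout in the g, r, i, z, y bands."""
    out = np.empty_like(cutout, dtype=np.float32)
    for c in range(cutout.shape[-1]):
        band = np.arcsinh(cutout[..., c])          # compress bright cores, keep faint features visible
        lo, hi = np.percentile(band, [1, 99])      # robust per-channel scaling range
        out[..., c] = np.clip((band - lo) / (hi - lo + 1e-8), 0.0, 1.0)
    return out
```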
The augmentations we use for this project are: * Orientation: We randomly flip the image across each axis (x and y) with 50% probability. * Gaussian Noise: We sample a scalar from 𝒰(1,3) and multiply it with the median absolute deviation of each channel (calculated over 1000 training examples) to get a per-channel noise σ_c. We then introduce Gaussian noise sampled from σ_c × 𝒩(0,1) for each channel. * Jitter and Crop: For HSC-SSP images we crop the 128 × 128 pixel image to the central 109 × 109 pixels before randomly cropping the image to 96 × 96 pixel. Random cropping means the image center is translated, or `jittered', along each respective axis by i, j pixels where i, j ∼ 𝒰(-13,13) before cropping to the central 96 × 96 pixels. §.§ Model Architecture The model we utilise to perform classification of tidal feature candidates consists of two components; a self-supervised model used for pre-training, and a `fine-tuned' model used for classification. All models described below are built using the TensorFlow framework <cit.>. §.§.§ The Self-Supervised Architecture For our task of classifying tidal feature candidates we use a type of self-supervised learning known as Nearest Neighbour Contrastive Learning of visual Representations (NNCLR; ). We closely follow <cit.> in designing the architecture and training process for our model. The model was compiled using the Adam optimiser <cit.> and trained for 25 epochs on our unlabelled dataset of ∼ 44,000 HSC-SSP PDR2 galaxies. §.§.§ The Fine-tuned Architecture The fine-tuned model is a simple linear classifier which takes galaxy images as input and converts them to representations using the pre-trained self-supervised encoder. These representations are passed through a `Dense' layer with a sigmoid activation, which outputs a single number between 0 and 1. This fine-tuned model was compiled using the Adam optimiser <cit.> and a binary cross entropy loss. It was trained for 50 epochs using the labelled training set of 600 HSC-SSP galaxies. Training was completed within ∼ 1 minute using a single GPU. §.§.§ The Supervised Architecture To draw conclusions about the suitability of self-supervised models for the detection and classification of tidal features, we compare our results with those of a fully supervised model. We do not construct this model from scratch, but instead use the published model designed by <cit.> to classify merging galaxies. The output layer was changed from two neurons with softmax activation, to a single neuron with sigmoid activation. The network was compiled using the Adam optimiser <cit.> with the default learning rate and loss of the network was determined using binary cross entropy. We additionally changed the input image dimension from 64 × 64 pixels with three colour channels to 96 × 96 pixels with five colour channels to ensure extended tidal features remain visible. We train this fully supervised network from scratch using the labelled training set of 600 HSC-SSP galaxies. §.§ Model Evaluation To evaluate our model performance we use the true positive rate (also known as recall or completeness) and false positive rate (also known as fall-out or contamination). The true positive rate (TPR) ranges from 0 to 1 and is defined as the fraction of galaxies correctly classified by the model as having tidal features with respect to the total number of galaxies with tidal features. 
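Together with the false positive rate and the ROC AUC introduced below, this quantity can be computed directly from the labels and the classifier's sigmoid outputs. The scikit-learn-based sketch below is illustrative; y_true and y_score are assumed arrays of binary labels and classifier scores.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def tpr_at_fpr(y_true, y_score, target_fpr=0.2):
    """Completeness (TPR) at a fixed contamination (FPR), interpolated along the ROC curve."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    return np.interp(target_fpr, fpr, tpr)   # fpr is monotonically non-decreasing

def evaluate(y_true, y_score):
    return {"roc_auc": roc_auc_score(y_true, y_score),
            "tpr_at_fpr_0.2": tpr_at_fpr(y_true, y_score, 0.2)}
```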
The false positive rate (FPR) also ranges from 0 to 1 and is defined as the fraction of galaxies incorrectly classified by the model as having tidal features with respect to the total number of galaxies without tidal features. In addition to using the TPR for a given FPR to evaluate our model, we also use the area under the receiver operating characteristic (ROC) curve, or ROC AUC, to evaluate performance. § RESULTS §.§ Self-Supervised vs. Supervised Performance Figure <ref> illustrates the testing set ROC AUC for a supervised and self-supervised network as a function of the number of labels used in training for our HSC-SSP dataset. Each point represents the ROC AUC averaged over ten runs using the same training, validation, and testing sets for each run. We average the ROC AUC over the 10 runs and remove outliers further than 3σ from the mean. Our SSL model maintains high performance across all amounts of labels used for training, having ROC AUC = 0.911 ± 0.002 when training on the maximum number of labels and only dropping to ROC AUC = 0.89 ± 0.01 when using only 50 labels for training. The supervised model also maintains its performance regardless of label number, but only reaches ROC AUC = 0.867 ± 0.004 when training on the maximum number and ROC AUC = 0.83 ± 0.01 when using 50 labels for training. This figure not only shows that an SSL model can be used for the detection of tidal features with good performance, but also that it performs consistently better than the supervised network regardless of the number of training labels. We also calculated the average TPR reached by the self-supervised model on the testing set for a given FPR = 0.2, averaging over 10 runs and removing outliers. When training using 600 labels, the model reaches TPR = 0.94 ± 0.01, and this only drops to TPR = 0.90 ± 0.01 when using a mere 50 labels for training. §.§ Detection of Tidal Features One advantage of self-supervised models over supervised models is the ability to use just one labelled example to find examples of similar galaxies from the full dataset. By using just one image from our labelled tidal feature dataset as a query image, and the encoded 128-dimensional representations from the self-supervised encoder, we can perform a similarity search that assigns high similarity scores to images which have similar representations to the query image. This is demonstrated in Figure <ref> where we select a random galaxy with tidal features from our training sample and perform a similarity search with the 44,000 unlabelled HSC-SSP galaxies. In Figure <ref> the query image is shown on the right alongside the 24 galaxies which received the highest similarity scores. This figure shows the power of self-supervised learning, where using only a single labelled example, we can find a multitude of other tidal feature candidates. We can also visualise how the model organises the galaxy images in representation space, by using Uniform Manifold Approximation and Projection (UMAP; ) which reduces the encoded representations to an easier to visualise 2 dimensional projection. Figure <ref> illustrates this 2D projection, created by binning the space into 100 × 100 cells and randomly selecting a sample from that cell to plot in the corresponding cell location. 
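Both the similarity search and the UMAP projection described above operate on the encoder's 128-dimensional representations alone. The sketch below is schematic: the trained encoder (assumed here to be a Keras model) and the image arrays are passed in as inputs, and the umap-learn and scikit-learn packages are assumed to be available.

```python
import numpy as np
import umap
from sklearn.metrics.pairwise import cosine_similarity

def rank_by_similarity(encoder, query_image, unlabelled_images, k=24):
    """Return indices of the k unlabelled galaxies most similar to a single query image."""
    reps = encoder.predict(unlabelled_images)        # (N, 128) representations
    query_rep = encoder.predict(query_image[None])   # (1, 128)
    scores = cosine_similarity(query_rep, reps)[0]
    return np.argsort(scores)[::-1][:k], scores

def project_2d(encoder, images):
    """2D UMAP projection of the encoded representations for visualisation."""
    return umap.UMAP(n_components=2).fit_transform(encoder.predict(images))
```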
We also enquire whether the scores given to galaxies by the linear classifier are related to the galaxies' positions in the UMAP projection, by colouring the UMAP plot according to the scores given to each galaxy by the linear classifier, as shown in the right panel of Figure <ref>. We find that the majority of galaxies which were assigned a high classifier score, indicating a high likelihood of tidal features, are located on the left side of the UMAP projection plot. This reinforces the idea that the encoded representations contain meaningful information about tidal features. § DISCUSSION AND CONCLUSIONS In this work, we have shown that SSL models composed of a self-supervised encoder and a linear classifier can not only be used to detect galaxies with tidal features, but can do so reaching both high completeness (TPR = 0.94 ± 0.01) for low contamination (FPR = 0.20) and a high area under the ROC curve (ROC AUC = 0.911 ± 0.002). This means that such models can be used to isolate the majority of galaxies with tidal features from a large sample of galaxies, thus drastically reducing the amount of visual classification needed to assemble a large sample of tidal features. One major advantage of this model over other automated classification methods is that this level of performance can be reached using only 600 labelled training examples, and it only drops mildly when using a mere 50 labels for training, maintaining ROC AUC = 0.89 ± 0.01 and TPR = 0.90 ± 0.01 for FPR = 0.2. This makes SSL models easy to re-train on data from different surveys with minimal visual classification needed. Following <cit.>, we emphasise the usefulness of being able to perform a similarity search using just the self-supervised encoder and one example of a galaxy with tidal features to find other galaxies with tidal features from a dataset of tens of thousands of galaxies. The level of comparison that can be carried out between the results obtained here and other works is limited due to the scarcity of similar works. There is only one study focusing on the detection of tidal features using machine learning, namely the work of <cit.>, who used a supervised network to identify galaxies with tidal features from the Wide layer of the Canada-France-Hawaii Telescope Legacy Survey <cit.>. <cit.> found that their method outperformed other automated methods of tidal feature detection, reaching 76% completeness (or TPR) for 22% contamination (or FPR). Our SSL model, trained on 600 galaxies, performs considerably better, reaching a completeness of 96% for the same contamination percentage. Most importantly, our model consistently outperforms a fully supervised model trained on the same data, reaching ROC AUC = 0.911 ± 0.002 while the fully supervised model only reaches a maximum ROC AUC of 0.864 ± 0.004. The code used to create, train, validate, and test the SSL model, along with instructions on loading and using the pre-trained model as well as training the model on different data, can be downloaded from GitHub[<https://github.com/LSSTISSC/Tidalsaurus>]. apalike
http://arxiv.org/abs/2307.05087v1
20230711073756
SAR-NeRF: Neural Radiance Fields for Synthetic Aperture Radar Multi-View Representation
[ "Zhengxin Lei", "Feng Xu", "Jiangtao Wei", "Feng Cai", "Feng Wang", "Ya-Qiu Jin" ]
cs.CV
[ "cs.CV", "eess.IV" ]
SAR-NeRF: Neural Radiance Fields for Synthetic Aperture Radar Multi-View Representation Zhengxin Lei, Graduate Student Member, IEEE, Feng Xu, Senior Member, IEEE, Jiangtao Wei, Graduate Student Member, IEEE, Feng Cai, Graduate Student Member, IEEE, Feng Wang, Member, IEEE, and Ya-Qiu Jin, Life Fellow, IEEE This work was supported by the Natural Science Foundation of China under Grant U2130202. (Corresponding author: Feng Xu.) The authors are with the Key Laboratory for Information Science of Electromagnetic Waves (MoE), Fudan University, Shanghai 200433, China (e-mail: [email protected]). ========================================================================== SAR images are highly sensitive to observation configurations, and they exhibit significant variations across different viewing angles, making it challenging to represent and learn their anisotropic features. As a result, deep learning methods often generalize poorly across different view angles. Inspired by the concept of neural radiance fields (NeRF), this study combines SAR imaging mechanisms with neural networks to propose a novel NeRF model for SAR image generation. Following the mapping and projection principles, a set of SAR images is modeled implicitly as a function of attenuation coefficients and scattering intensities in the 3D imaging space through a differentiable rendering equation. SAR-NeRF is then constructed to learn the distribution of attenuation coefficients and scattering intensities of voxels, where the vectorized form of the 3D voxel SAR rendering equation and the sampling relationship between the 3D space voxels and the 2D view ray grids are analytically derived. Through quantitative experiments on various datasets, we thoroughly assess the multi-view representation and generalization capabilities of SAR-NeRF. Additionally, it is found that a SAR-NeRF-augmented dataset can significantly improve SAR target classification performance under a few-shot learning setup, where a 10-type classification accuracy of 91.6% can be achieved by using only 12 images per class. synthetic aperture radar, neural radiance field, deep learning, few-shot learning, image representation. § INTRODUCTION Synthetic Aperture Radar (SAR) has been widely utilized in the field of Earth remote sensing due to its capability for all-weather and all-day observation. However, the complexity of SAR imagery interpretation poses challenges for deep learning-based methods employed in SAR image classification and target recognition. The performance of these methods is often limited by the diversity and scale of training samples. Additionally, SAR imagery exhibits high sensitivity to observation configurations, resulting in substantial variations between images acquired under different conditions.
Particularly, SAR images undergo significant changes with varying viewing angles, making it challenging to characterize and learn their multi-view features effectively. In particular, deep learning approaches are significantly impacted by variations in viewing angles, leading to weaker generalization capabilities across different viewing angles. This further highlights the issue of limited sample availability for SAR-based deep-learning interpretation methods. To address the challenges of few-shot learning and cross-view generalization in SAR imagery, there are currently two main approaches: transfer learning and new-view generation methods. The main idea behind transfer learning is to pre-train the network using other readily available data that share similar semantic features with SAR imagery and then transfer it to the SAR dataset. Common types of pretraining data include data from other sensors and simulated data. However, the differences between data from different modalities or sources can pose challenges for feature transfer in the network. For instance, pretraining and transferring optical data to SAR imagery may introduce errors due to the fundamental inconsistencies between optical and SAR features<cit.>. Another approach is the new-view sample generation method, which utilizes generative models to train multi-view representation models from existing perspective images. These models can generate new SAR images, thereby augmenting the training dataset with samples from different angles. The first type of method involves training the network through physics-based simulation, known as the EM-simulation method <cit.>, which is often limited by the variety and realism of the simulated data. The other type is pure generative models including Generative Adversarial Networks (GAN) or AutoEncoders, such as adversarial autoencoder<cit.>. However, these methods are still primarily data-driven and do not fully incorporate the physical principles of SAR imaging into the network. As a result, they can only learn the interpolation capability between adjacent angles, achieving a smooth transition effect as the viewing angle changes. The effectiveness of these methods in addressing the above-mentioned challenges is limited. In the field of optical imagery, the NeRF (Neural Radiance Fields)<cit.> model introduced a method based on implicit representation. By incorporating a physical model of rendering optical 3D voxels into a neural network, the NeRF model achieved dense reconstruction of 3D voxels from multi-view observed images. Furthermore, by utilizing imaging model projection, the NeRF model can successfully generate images of new perspectives, effectively addressing the problem of multi-view image generation in the optical image domain. The NeRF model has demonstrated impressive results in this regard. This paper presents a new SAR-NeRF model, which is based on the fundamental scattering and imaging mechanism of SAR. It constructs a neural network model for SAR image voxel rendering using mapping and projection algorithms (MPA)<cit.>. The imaging space is divided into voxels which are then sampled by ray grids from different view angles, enabling the learning of multi-view SAR image representations. Extensive experiments are conducted using rendered synthetic data and measured MSTAR data, accompanied by quantitative evaluations. 
In summary, the main contributions of this paper can be summarised as follows: * Constructing SAR neural radiance fields: The SAR neural radiance field method is developed using voxel rendering techniques and the viewpoint-sampling point transformation equation. This method enables the learning of the distribution of attenuation coefficients and scattering intensities in the sampling space through the utilization of multi-view SAR observation data. * Introducing SAR image voxel rendering and viewpoint-sampling point transformation equation: The paper proposes a SAR image voxel rendering method that is easily integrated with neural networks. Generating voxel distributions based on observed viewpoints effectively partitions the imaging space into voxels. * We achieved multi-view representation and generation of SAR images and reconstructs target geometry models based on multi-view SAR images. Extensive demonstration and evaluation are conducted, involving a wide range of datasets and numerous experiments. The validation of multi-view SAR image generation is accomplished, showcasing the capability of extracting three-dimensional models based on rendered data. The remaining sections of this paper are organized as follows: In Section 2, a brief introduction is provided on SAR image simulation methods, generative adversarial networks, and neural radiance fields of related works. In Section 3, a neural network model for SAR image voxel rendering is constructed based on the mapping and projection principles of SAR imaging. The voxel partitioning method is also presented based on observed viewpoints in the imaging space. In Section 4, experiments are conducted to generate multi-view images using various datasets. Quantitative evaluation metrics are designed, and the extraction of geometric models from multi-view SAR observation images is achieved. Section 5 summarizes the paper, and concluding remarks are provided. § RELATED WORKS §.§ Physics-Based SAR image simulation methods Physics-based SAR image simulation methods are commonly used to simulate real-world environments and can address the challenges encountered in actual scenarios, thereby mitigating the issue of limited SAR image observation samples. These methods can be broadly classified into two categories: coherent echo simulation methods and incoherent image generation methods. Regarding coherent echo simulation methods, Xu et al. proposed the Bidirectional Ray Tracing (BART) technique for computing the Radar Cross Section (RCS) of large-sized three-dimensional targets with rough surfaces. This method efficiently calculates the RCS of complex scattering scenarios involving large-scale 3D ships on rough sea surfaces, enabling numerical calculations of RCS for both monostatic and Bistatic configurations<cit.>. Yue et al. presented an improved Generalized Gaussian Correlation (GGCS) coherence model to generate coherent SAR images. They introduced adjustments to the scatterer number restrictions and flexible selection of Gaussian scattering distribution parameters, providing a more general and realistic approach for SAR image representation<cit.>. Zhang et al. proposed a method based on the Fast Beamforming Algorithm (FBAM) and the Gaussian Optics Physical Optics (GO-PO) technique. This method computes the complex scattering of complex ship targets on rough sea surfaces and compares the results with real ship targets, validating the effectiveness of the approach<cit.>. For non-coherent image generation methods, Xu et al. 
proposed the Mapping Projection method for simulating polarimetric SAR imaging of complex terrain scenes. The expression for SAR imaging of polarimetric scattering in complex scenes is derived and successfully simulated SAR imaging under various configurations<cit.>. Fu et al. introduced a differentiable renderer that enables forward rendering from 3D models to 2D images and inverse reconstruction from 2D images to 3D models. They demonstrated the feasibility of inverse imaging methods for SAR image generation<cit.>. Balz et al. developed a real-time SAR simulation system based on GPU processing. They utilized a rasterization approach for real-time single-bounce simulation, significantly improving the speed of SAR image simulation<cit.>. Note that there is a substantial amount of work in this field, while only a few examples are introduced. §.§ Generating Adversarial Networks In the domain of SAR image generation, generative adversarial networks (GANs) have been extensively employed to address the issue of generating new azimuth angles in SAR images. Ding et al. proposed a pose generation method that utilizes azimuth interpolation to generate linearly synthesized SAR images with specific azimuth angles<cit.>. Subsequently, many studies have utilized the generative capabilities of GANs for data augmentation in SAR images<cit.>. Liu et al. used CycleGAN to complete the angle of SAR aircraft targets<cit.>. At the same time, Zhang et al. incorporated an azimuth discrimination model into an improved DCGAN to linearly synthesize SAR images with different azimuth angles<cit.>. Oh et al. proposed PeaceGAN to estimate the attitude angles and target class information of SAR target images<cit.>]. Although these methods can improve the classification accuracy in target recognition through synthetic samples, significant discrepancies exist between the generated and true images. Additionally, some works have employed other deep neural networks to simulate SAR images. Guo et al. utilized a deep feature transformation method based on differential vectors to generate realistic samples considering labels, azimuth angles, and target characteristics<cit.>. Song et al. introduced the AAE for image generation network, which significantly enhances the recognition accuracy under limited sample conditions<cit.>. Dong et al. employed an improved recurrent neural network to model sequence azimuth target images for predicting missing azimuth angle SAR images<cit.>. However, the methods above only approach SAR azimuth angle generation from the perspective of image representation using neural networks without considering the actual scattering mechanisms and the image projection geometry of SAR systems. Tushar et al. proposed a pose synthesis method based on sparse modeling of available images in the training data, utilizing the anisotropic scattering behavior of the scattering center of interest related to the viewing angle to simulate nearby attitudes<cit.>. Nevertheless, this method cannot generate SAR images under different pitch angles and requires significant manual annotation costs. §.§ Neural Radiance Fields The integration of deep learning with relevant data priors to solve related problems has recently sparked a research trend in Implicit Neural Representation (INR) <cit.>. Neural Radiance Fields (NeRF) applies INR to the task of novel view synthesis in optical images and represents a data-driven approach centered around neural volume rendering<cit.>. 
The training process of NeRF involves two main steps: scene encoding and rendering. During the scene encoding phase, NeRF learns the positions and colors of each point in the scene using a set of input images and corresponding camera parameters. It represents each point as a latent vector and employs neural networks to map the input images and camera parameters to these vectors. In the rendering phase, NeRF utilizes the trained model to generate images from new viewpoints. It samples points along each ray and calculates the color and density of each point using the scene encoding network, ultimately producing the final image. A key advantage of NeRF is its ability to generate realistic synthesized images, including rich geometric details and lighting effects. By incorporating the physical principles of the sensor observation into neural networks, NeRF achieves the presentation and re-editing of 3D content from naturally observed scenes, providing a new direction for few-shot methods. § SAR NEURAL RADIANCE FIELDS Based on the principles of SAR imaging and mapping projection<cit.>, this paper first constructs a forward-generation model called SAR Neural Radiance Fields (SAR-NeRF). The model converts the viewpoint information of SAR images, such as radar altitude, azimuth, and pitch angle, into 3D voxel sampling point information. This sampling point information is then input into an MLP encoder to estimate the attenuation coefficient and scattering intensity of the corresponding voxels. Subsequently, the MPA-SAR voxel rendering equation is employed to generate the final SAR image. The flowchart of SAR Neural Radiance Fields is illustrated in Fig. 1. In this section, we will discuss four aspects: the mapping and projection principles of SAR imaging, the construction of the sampling space mapping relationship, SAR image voxel rendering, and learning of the radiance field. §.§ SAR 3D voxel rendering equation based on MPA SAR achieves two-dimensional high-resolution imaging through pulse compression and synthetic aperture techniques, as illustrated in Fig.2. The platform flies at an altitude H along the x-axis direction, continuously emitting signals toward the ground and receiving the scattered echoes from ground targets. Throughout this process, the radar antenna maintains a fixed viewpoint (typically a broadside view). Let O denote the center point of the illuminated area, and R represent the slant range between point O and the radar flight path. θ_r and θ_a denote the vertical and azimuth beamwidths, respectively, while W represents the swath of the imaging area, and L_s represents the effective synthetic aperture length. From Fig. 2, it can be observed that the illuminated area of the SAR image is determined by the actual aperture and radiation pattern of the radar antenna. As the radar moves along the azimuth direction, it continuously receives echoes from the targets and scenes and performs imaging through signal processing. The geometric relationship of the strip map imaging in broadside view is determined by parameters such as the orbit altitude, incidence angle, and azimuth angle. Since the radar is far from the targets, the incidence angle θ variation for the same target can be neglected as the radar moves along the azimuth direction. Therefore, we assume that the scattering contribution from the same target collected at different radar positions is the same. Hence, in the imaging simulation, we calculate the contribution of each row separately within each azimuth resolution interval. 
The schematic diagram of the mapping and projection is shown in Fig. 3. Considering one single cross-section in the azimuth direction as the incident plane, we establish a polar coordinate system (r,θ) with the radar position as the origin, where θ represents the radar's incidence angle and r represents the slant range. By determining the sampling range of the radar's received pulse echoes and the range of incidence angle variation, we can define the imaging space of the radar as r∈[r_0,r_1] and, θ∈[θ_0,θ_1]. Now, let's assume that a single grid unit in the imaging space (x,r,θ) corresponds to a voxel d v, with dimensions dx,dr,r dθ, respectively. According to the radiation transfer theory, when the incident wave I_i passes through a single voxel, the backscattered intensity per unit area, denoted as I_scan be expressed as follows<cit.>: I_s(x,r,θ)=E^+(x,r,θ)P(x,r,θ)E^-(x,r,θ)I_i dr E^+(x,r,θ)=exp[-∫_r_0^r dr^' k_e^+(x,r^',θ)] E^-(x,r,θ)=exp[-∫_r^r_0 dr^' k_e^-(x,r^',θ)] E^+ and E^- represent the accumulated attenuation coefficients in the forward and backward directions. The phase function P denotes the scattering coefficient of the voxel, while k_e^+(x,r,θ) and k_e^-(x,r,θ) represent the extinction coefficients in the forward and backward directions, respectively. The product of the scattering intensity of the scattering element and the effective penetrating area gives the contribution of scattering energy, as shown in the following equation: S(x,r) dx=∫_θ_0^θ_1I_s(x,r,θ)r dx dθ By substituting Eqs. (1), (2), and (3) into Eqs. (4) we can obtain the scattering energy of a single pixel in the SAR image, as shown in the following equation: S_i,j =∬ S(x,r) dx dr =∫_x_i^x_i+1∫_r_j^r_j+1∫_θ_0^θ_1exp[-∫_r_0^rdr^' k_e^+(x,r^',θ)]· P(x,r,θ) exp[-∫_r^r_0dr^' k_e^-(x,r^',θ)]r dθ dr dx In natural environments, the random distribution of objects can be quite intricate, making it challenging to derive analytical solutions for the phase function and extinction coefficient in Eqs. (5). Therefore, it is necessary to discretize Eqs. (5) to facilitate its computational treatment. Given the great distance between the radar and the targets, we can use the variable s as a substitute for the pitch angle, where ds=r dθ. Consequently, we can partition the imaging space (x,r,s) into a grid by defining x_m=mΔ x,r_p=pΔ r,s_q=qΔ s. This discretization process yields the discrete form of Eqs. (5), as expressed in Eqs. (6). Each grid point in the coordinate system corresponds to a voxel within the imaging space. S_i,j=Δ xΔ rΔ s∑_q^'=0^N_θ - 1∏_p^'=p_0^p_jexp [ -Δ rk_e^+ ( m_i,p^',q ) ]· P ( m_i,p_j,q^')∏_p^'=p_j^p_0exp [ -Δ rk_e^- ( m_i,p^',q ) ] The variables i and j correspond to the grid indices of the pixels in the SAR image, while N_θ represents the number of samples in the scanning angle, with Eqs. (6), we established the model of a three-dimensional voxel rendering method based on the principles of mapping and projection. §.§ SAR image 3D voxel rendering equation vectorization In the previous section, we derived the three-dimensional voxel rendering equation based on MPA<cit.>. However, this equation is established in the grid coordinate system, where each voxel has variations in shape and size, making it challenging to integrate with neural networks. Therefore, in this section, we further optimize the voxel rendering equation (equation (6)) by using the center coordinates of the voxels to represent them. 
Let's assume that the SAR image has dimensions of N_a· N_r, and the number of samples in the scanning angle is N_θ. Using the voxel partitioning method from the previous section, we divide the sampling space into N_a· N_r· N_θ voxels, where the center coordinates of the (i,j,k)-th voxel are denoted as (m_i,p_j,q_k), the attenuation coefficient at that point is represented as σ_i,j,k, and the scattered intensity along the projection direction is denoted as S_i,j,k. Based on this, we can obtain the simplified SAR three-dimensional voxel rendering equation, where the scattered intensity of the (i,j)-th pixel unit can be expressed as I_i,j=∑_k=1^N_θ(∏_j^'=1^je^jkσ_i,j^',k)S_i,j,k(∏_j^'=j^1e^jkσ_i,j^',k) Here for simpling, we choose to ignore the polarimetric characteristics (i.e., assuming forward loss is equivalent to backward loss), Eqs. (7) can be written as follows: I_i,j=∑_k=1^N_θS_i,j,k(∏_j^'=1^je^σ_i,j^',k) For brevity, this paper represents the cumulative multiplication operation in Eqs. (8) using matrix operation. Let's assume k=k^', and I(k^') represents the matrix collection of I_i,j when k=k^'. We can write I(k^') in matrix form as follows: I ( k^' ) = [ [ S_1,1,k^'∏_j^'=1 ^1exp(σ_1,j^',k^') ⋯ S_1,N_r,k^'∏_j^'=1 ^N_rexp(σ_1,j^',k^'); ⋮ ⋱ ⋮; S_N_a,1,k^'∏_j^'=1 ^1exp(σ_N_a,j^',k^') ⋯ S_N_a,N_r,k^'∏_j^'=1 ^N_rexp(σ_N_a,j^',k^') ] ] In this case, we can separate I(k^') into the product of the scattering intensity matrix and the extinction coefficient matrix: I(k^')=S(k^')E(k^'), where S(k^') and E(k^') are defined as follows: S(k^')=[ S_1,1,k^' ⋯ S_1,j,k^' ⋯ S_1,N_r,k^' ⋯ ⋯ ⋯ S_i,1,k^' ⋯ S_i,j,k^' ⋯ S_i,N_r,k^' ⋯ ⋯ ⋯ S_N_a,1,k^' ⋯ S_N_a,j,k^' ⋯ S_N_a,N_r,k^' ] E(k^')=𝐞𝐱𝐩[ σ(k^')· TRI(k^')] The function 𝐞𝐱𝐩(∙) is used to perform an exponential operation on every element in the matrix, TRI(k^') represents an upper triangular matrix with the same size as σ(k^'), and σ(k^') are defined as follows: σ(k^')=[σ_1,1,k^' ⋯ σ_1,j,k^' ⋯ σ_1,N_r,k^' ⋯ ⋯ ⋯ σ_i,1,k^' ⋯ σ_i,j,k^' ⋯ σ_i,N_r,k^' ⋯ ⋯ ⋯ σ_N_a,1,k^' ⋯ σ_N_a,j,k^' ⋯ σ_N_a,N_r,k^' ] §.§ Sampling of 3D voxels by 2D ray array In the previous section, we derived the three-dimensional voxel rendering equation that depends on the voxel's center coordinates and the projection direction. Therefore, we can calculate the SAR image using the rendering equation by obtaining the center coordinates of all voxels and the projection direction in the imaging space. In this section, we will utilize SAR imaging parameters such as azimuth angle, orbit height, and pitch angle to design a method of sampling the three-dimensional voxels using two-dimensional rays. In the SAR neural radiance fields, considering that the relative positions between the radar and the targets are different in each SAR image, it is necessary to place the three-dimensional voxels at different observation angles in the same coordinate system. In this study, we define the center of the imaging target as the origin O in the world coordinate system O-XYZ. The radar's position, denoted as O^', is known and its coordinates are given as P_r =(x_r,y_r,z_r). The radar's motion direction is O^' H, the slant range direction is O^' K, and the slant off-nadir direction is O^' V. Thus, a local coordinate system for radar observation, denoted as O^'-KHV, can be established, where k, h, and v represent the unit vectors along the O^' K, O^' H, and O^' V axes, respectively. 
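Before completing the sampling geometry illustrated in Fig. 4, we note that the vectorized rendering of Eq. (8) reduces to a cumulative sum along the range axis, since the product of per-voxel attenuations becomes a sum in the exponent. The PyTorch sketch below is a minimal illustration under this simplification; the tensor names are ours for exposition only.

```python
import torch

def render_sar(sigma, S):
    """
    sigma : (N_a, N_r, N_theta) per-voxel attenuation coefficients (expected <= 0)
    S     : (N_a, N_r, N_theta) per-voxel scattering intensities along each ray
    Returns the (N_a, N_r) image with
        I[i, j] = sum_k S[i, j, k] * prod_{j' <= j} exp(sigma[i, j', k])   (Eq. (8))
    """
    transmittance = torch.exp(torch.cumsum(sigma, dim=1))  # cumulative attenuation along range
    return (S * transmittance).sum(dim=-1)                 # integrate over the scan-angle samples
```

The cumulative sum plays the same role as the upper-triangular matrix TRI(k^') above, and the whole operation remains differentiable, which is what allows the field to be trained by backpropagation.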
Fig.4 illustrates the definition of the coordinate system, where θ and φ represent the incident angle and azimuth angle, respectively. Consistent with the previous section, let's assume that the number of sampling points in the radar's incident angle, azimuth direction, and slant range direction is denoted as N_θ, N_a, and N_r, respectively. The corresponding sampling intervals are Δθ, Δ a, and δ r. Taking the i-th sampling point unit along the O^' H direction as the source point of the ray, its coordinates can be written as follows: v_i=(-N_a+1/2+i)Δ ah The cell will emit N_θ rays, and the unit vector k_1 corresponding to θ_1 is expressed as follows: k_1=kR(θ_1-θ) In Eqs.(<ref>), θ represents the radar's pitch angle, and R(θ^') denotes the affine transformation corresponding to rotating around OH by an angle θ^', given by the equation: R(θ^')=[1 0 0 0 cosθ^' sinθ^' 0 -sinθ^' cosθ^' ] The direction of the (i,k)-th ray can be represented as: d_k=k_1 R [ (k - 1/2) Δθ] Combining Eqs. (<ref>) and (<ref>), we can obtain the representation of the (i,j,k)-th sampling point, with its center coordinates given by: v_i,j,k=v_i+d_k [r + (-N_r+1/2+j )Δ r ] By utilizing affine transformations, we can transform the radar coordinate systems at different observation angles into the same world coordinate system. The transformation equations between the world coordinate system and the radar coordinate system are as follows: v=R_r v_r + P_r R_r=[ -cosφ -cosθsinφ -sinθsinφ 0 sinθ -cosθ sinφ -cosθcosφ -sinθcosφ ] Where v and v_r represent the coordinates of the same point in the world coordinate system and the radar coordinate system, respectively. P_r represents the coordinates of the origin of the radar coordinate system in the world coordinate system. Based on this, the correspondence between the radar grid coordinate index and the radar coordinate system is derived. Let's assume that the middle position of the azimuth corresponds to the origin O in the grid coordinate system. Therefore, the spatial position of each grid coordinate (m_i,p_j,q_k) can be represented as follows: m_i=-(N_a/2)+i p_j = r_ min/Δ + j -1 q_k=N_θθ_1/θ_2-θ_1+k-1 By using Eqs. (<ref>), (<ref>), and (<ref>), we can obtain the coordinates corresponding to the grid index (m_i,p_j,q_k) as follows: v_i,j,k=m_iΔ xh + k_1 𝐑(kΔθ)(r_ min + jΔ r) Where r_ min represents the minimum value in the range direction. By substituting Δ x=Δ a, Δ s=rΔθ, and equation (22) into equation (6), we can obtain the expression for the scattering energy of a single pixel in the SAR image in the coordinate system. S_i,j =rΔ aΔ rΔθ∑_k=0^N_θ - 1∏_j^' = 0^jexp[-Δ r k_e^+(v_i,j^',k) ]· P(v_i,j,k)∏_j^' = j^0exp[-Δ r k_e^-(v_i,j^',k) ] §.§ SAR Neural Radiance Fields In the field of optical imaging, the NeRF model introduced an implicit representation approach, which integrates the physical model of volumetric rendering into a neural network. This enables dense reconstruction of 3D voxels from multi-view observed images and further synthesis of new view images using the imaging model. The SAR-NeRF novel proposed in this paper draws inspiration from this idea. It uses a neural network to establish an implicit representation of SAR 3D voxels, encoding the distribution of attenuation coefficient σ_i,j,k and scattering intensity S_i,j,k in the space. The SAR-NeRF are depicted in Fig.5. The attenuation coefficient σ_i,j,k is related to the voxel's position. 
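The sampling points v_i,j,k at which this field is queried are generated from the radar geometry derived above (the ray origins v_i, the ray directions d_k, the voxel centres v_i,j,k, and the world transform v = R_r v_r + P_r). The sketch below is a schematic NumPy implementation; the indexing conventions, argument names, and the default unit vectors for k and h are illustrative assumptions rather than our exact code.

```python
import numpy as np

def rot_about_h(t):
    """Rotation by angle t about the O'H (azimuth) axis, as in the R(theta') matrix above."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,   s],
                     [0.0,  -s,   c]])

def sample_points(N_a, N_r, N_t, da, dr, dt, r0, theta, theta1,
                  k=np.array([1.0, 0.0, 0.0]), h=np.array([0.0, 1.0, 0.0])):
    """Centres v[i, j, q] of the N_a x N_r x N_t sampling voxels in the radar (K, H, V) frame."""
    pts = np.zeros((N_a, N_r, N_t, 3))
    k1 = k @ rot_about_h(theta1 - theta)                # boresight of the first scan angle
    for i in range(N_a):
        v_i = (-(N_a + 1) / 2 + i) * da * h             # ray origin on the azimuth (H) axis
        for q in range(N_t):
            d_q = k1 @ rot_about_h((q + 0.5) * dt)      # ray direction for scan-angle sample q
            for j in range(N_r):
                rng = r0 + (-(N_r + 1) / 2 + j) * dr    # slant-range offset of the j-th gate
                pts[i, j, q] = v_i + d_q * rng
    return pts

def to_world(v_radar, R_r, P_r):
    """Map radar-frame points to the world frame: v = R_r v_r + P_r."""
    return v_radar @ R_r.T + P_r
```

These sample points, together with the ray directions d_k, are the inputs from which the network predicts σ_i,j,k and S_i,j,k.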
If the voxel is located inside the target,σ_i,j,k is relatively small, while if the voxel is located outside the target, σ_i,j,k tends to approach 0. On the other hand, the scattering intensity S_i,j,k depends not only on the voxel's position but also on the ray's direction. If the voxel has a smaller angle with the ray, it will have a higher scattering intensity. The representations of σ_i,j,k and S_i,j,k are given as follows: σ_i,j,k=F(v_i,j,k) S_i,j,k=G(v_i,j,k,d_i,j,k) where v and d represent the position and direction of the sampling point and γ( p ) is as follows: γ( p )=[ sin( 2^0π p ),cos( 2^0π p )⋯sin( 2^L-1π p ),cos( 2^L-1π p ) ] Furthermore, this paper has also devised a tailored design for the activation function of the F(v_i,j,k) structure. Let us consider two sampling points placed inside and outside the target, denoted as v_1 and v_2, respectively. It is evident that v_1 is located within the target, implying an inevitable attenuation with σ_1<0. Similarly, v_2 resides outside the target, indicating an absence of attenuation with σ_2=0. Consequently, it can be easily deduced that σ≤0. Likewise, concerning the scattering intensity S, since v_1 exists within the target, it undoubtedly contributes to the scattered energy. Conversely, v_2, situated outside the target, inevitably lacks any contribution to the scattered energy. Let us assume that the output of the SAR-NeRF, prior to the activation function, is denoted as σ_r and S_r. With this in mind, we can provide the following expression. f(x)=exp(∑_i=0^jσ_i )S_r if v∈ T 0 if v∉ T Here, v represents the coordinates of a sampling point in space, and T represents the region occupied by the target. Based on Eqs. (<ref>), we can derive the final output of the SAR-NeRF as follows: σ=- ReLU(σ_r) Î=-tanh(σ_r)exp(∑_i=0^jσ_i)S_r Since SAR images are functions of voxel density and scattering strength, we can fit the MLP by minimizing the error between the predicted image Î_i,j(σ,S) and the ground truth image I_i,j, which can be expressed as: min_σ,S1/N_a N_r∑_i = 1^N_a∑_j = 1^N_rÎ_i,j(σ,S)-I_i,j_2^2 By optimizing the activation function of the network, the convergence speed of the network can be significantly increased. This optimization also addresses the challenge of training SAR-NeRF when the background scattering energy is zero. Moreover, this activation function enhances the differentiation between attenuation coefficients inside and outside the target, facilitating the extraction of geometric models. §.§ Reconstruction of 3D geometric model SAR-NeRF accomplishes the prediction of attenuation coefficient and scattering intensity distributions in the sampling space. However, there exists a strong correlation between the attenuation coefficient and the voxel distribution of the target in space. As described in the previous section, if a voxel is located inside the target, its attenuation coefficient σ_i,j,k is greater than zero, while if it is outside the target, σ_i,j,k equals zero. By leveraging this concept, we can reconstruct the three-dimensional geometric model of the target using SAR-NeRF. Firstly, based on prior knowledge, the neural radiance range of SAR-NeRF, referred to as the neural radiance space, is determined. The voxel distribution in this space is obtained by averaging the samples taken within the neural radiance space. Finally, the voxel information is input into the SAR-NeRF to obtain the distribution of attenuation coefficients in this space. 
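A hedged sketch of the field networks F and G queried in this procedure is given below, using the positional encoding γ(p) and the sign-constrained attenuation head σ = -ReLU(σ_r) defined above; the layer widths, number of encoding frequencies, and head layout are illustrative and not our exact architecture.

```python
import math
import torch
import torch.nn as nn

def positional_encoding(p, L=10):
    """gamma(p) = [sin(2^0 pi p), cos(2^0 pi p), ..., sin(2^{L-1} pi p), cos(2^{L-1} pi p)]."""
    freqs = (2.0 ** torch.arange(L, device=p.device)) * math.pi   # (L,)
    angles = p[..., None] * freqs                                 # (..., dim, L)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1).flatten(start_dim=-2)

class SARNeRF(nn.Module):
    def __init__(self, L_pos=10, L_dir=4, width=256):
        super().__init__()
        self.L_pos, self.L_dir = L_pos, L_dir
        self.trunk = nn.Sequential(nn.Linear(3 * 2 * L_pos, width), nn.ReLU(),
                                   nn.Linear(width, width), nn.ReLU())
        self.sigma_head = nn.Linear(width, 1)   # raw sigma_r before the sign constraint
        self.scatter_head = nn.Sequential(nn.Linear(width + 3 * 2 * L_dir, width // 2),
                                          nn.ReLU(), nn.Linear(width // 2, 1))

    def forward(self, v, d):
        """v: (B, 3) sample-point coordinates, d: (B, 3) ray directions."""
        h = self.trunk(positional_encoding(v, self.L_pos))
        sigma_r = self.sigma_head(h)
        sigma = -torch.relu(sigma_r)            # enforce sigma <= 0 (no gain, only attenuation)
        s_raw = self.scatter_head(torch.cat([h, positional_encoding(d, self.L_dir)], dim=-1))
        return sigma.squeeze(-1), s_raw.squeeze(-1)
```

Evaluating this module on the dense grid of sample points yields the attenuation coefficients used in the next step.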
Points with σ_i,j,k equal to zero are removed, resulting in the three-dimensional voxel model of the target. The specific process is illustrated in Fig.6. § EXPERIMENTS The experiments in this study are divided into three main parts: forward rendering experiments, multi-view image generation from rendered images, and multi-view image generation from real SAR images. In the forward rendering tests, the effectiveness of the SAR voxel rendering method is validated. The experiments on multi-view image generation from rendered images demonstrate the reliability of SAR Neural Radiative Field (SAR-NeRF) in the task of generating multi-view images, as well as its ability to learn geometric information of the target. The experiments on multi-view image generation from real SAR images primarily verify the effectiveness of SAR-NeRF on real data. §.§ Validation with forward rendering experiment We first validated the effectiveness of SAR image voxel rendering. A typical building was chosen as our validation target model, which is simplified as a cuboid structure. Through the mapping and projection mechanism of SAR imaging, we know that the representation of the building in SAR images can be composed of ground scattering (SG), wall scattering (SW), roof scattering (SR), and shadows (S). The distribution of scattering components varies with the changes in wall height (h) and roof width (w). The model shown in Fig. 7(a) satisfies this condition. w/h>α At this point, we increase the height of the building, causing SW and SR to completely overlap. The model shown in Figure 7(c) satisfies this condition. w/h=α Upon further increasing the height of the building based on Fig. 7(c), the distribution of SW exceeds that of SR. The model depicted in Fig. 7(e) satisfies this condition. w/h<α This experiment provides preliminary evidence that supports the rationality and effectiveness of SAR voxel rendering. §.§ Multi-view image generation based on rendered image §.§.§ Dataset This section demonstrates the experimental results of multi-view image generation on simulated rendered images. To obtain realistic scattering textures of the models and eliminate background scattering effects, the simulated rendered image dataset includes rendering images of three types of target models: cuboid, upright four-sided platform (with a smaller top and a larger base), and four-sided platform. The images have a size of 128×128 and a spatial resolution of 0.3m/pixel. As shown in Figure 8, the pitch angle of the three instances' rendered images is set to 45^∘, with a radar altitude of 10,000 meters. §.§.§ Evaluation metrics For evaluating the performance of multi-view image generation using SAR-NeRF, besides visual comparisons between the generated images and the ground truth datasets, we utilize the peak signal-to-noise ratio (PSNR) and the Learned Perceptual Image Patch Similarity (LPIPS) as quantitative evaluation metrics. PSNR is a distortion-based metric that tends to favor smooth or blurry reconstruction results. As a complement, LPIPS calculates the perceptual distance between two images, which is more aligned with human perception. In our case, we use AlexNet to extract features for LPIPS calculation. §.§.§ Experiment set up Subsequently, we conducted tests on the simulated rendered SAR images of the three model categories. 
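Before detailing the experimental protocol, we note that both metrics can be computed as in the sketch below. The lpips package with its AlexNet backbone is assumed, inputs are assumed to be scaled to [-1, 1], and single-channel SAR images are assumed to be replicated to three channels for the LPIPS computation; none of this is meant as an exact record of our evaluation scripts.

```python
import torch
import lpips

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

lpips_fn = lpips.LPIPS(net="alex")   # perceptual distance using AlexNet features

def evaluate_pair(pred, target):
    """pred, target: (1, 3, H, W) tensors in [-1, 1]; single-band images replicated to 3 channels."""
    return {"psnr": psnr((pred + 1) / 2, (target + 1) / 2).item(),
            "lpips": lpips_fn(pred, target).item()}
```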
For each model category, under the observation conditions of a radar height of 10,000 meters (default setting in this experiment) and a pitch angle of 45^∘, we selected 36 images from the azimuth angle range of 0^∘,359^∘ with a 10^∘ interval as the training set, while the remaining 324 images were used as the test set. Firstly, we examined the similarity of the generated samples when applied to the test set. Figure 9 showcases the visual comparison between the SAR-NeRF generated synthesized perspective images (lower row of each panel) and the corresponding original rendered ground truth images from the test set (upper row of each panel). In order to assess the SAR-NeRF's generalization ability to arbitrary angles, the leftmost and rightmost images in each panel, marked with red boxes, represent the SAR rendered images at azimuth angles of 0^∘ and 90^∘ (which belong to the training set), respectively, while the intermediate angles have an interval of 10^∘. It can be observed that as the radar observation angle varies, SAR-NeRF is capable of inferring the corresponding changes in the synthesized rendered SAR images and reasonably outputs the distribution of scattered energy in the synthesized images. To quantitatively evaluate the generation performance, we varied the angle interval/training set size in the aforementioned experiments. The quantitative comparisons are presented in Table 1. Overall, our model achieves excellent performance with a reasonable amount of data, and even with a reduced data size, SAR-NeRF still demonstrates commendable performance. Based on the aforementioned experiments, this study further validates the learning capability of SAR-NeRF for voxel distribution in the sampling space by simultaneously varying the azimuth and pitch angles. We extended the pitch angle range from the original 45^∘ to [35^∘, 55^∘]. For the training set, we selected all odd pitch angles in the range [35^∘, 55^∘] and varied the azimuth angle from 0^∘ to 360^∘ with a 10^∘ interval. For the interpolation experiment, we used all even pitch angles in the range [35^∘, 55^∘] and varied the azimuth angle, which served as the unseen test set. Additionally, we generated images with pitch angles of 30^∘ and 60^∘ as the extrapolation experiment's unseen test set. Figure 10 and Figure 11 respectively demonstrate the results of the interpolation and extrapolation experiments on the cuboid model. Building upon the interpolation and extrapolation experiments with the pitch angle, this study proceeded with the extraction of three-dimensional voxels. The selected sampling space had dimensions of 20×20×20m³, and it was uniformly sampled to obtain 256×256×256 sample points. These sample points were then fed into the SAR-NeRF model trained in the previous step for prediction. Figure 12 illustrates the results of the voxel extraction process, showcasing the effectiveness of the approach in generating three-dimensional models. Building upon the previous experiments, this study further conducted experiments on the upright pyramid and inverted pyramid models. Figure 13 presents the three-dimensional voxel extraction results for the pyramid models, demonstrating the effectiveness of the approach. §.§ Multi-view image generation on MSTAR dataset §.§.§ dataset In this section, the effectiveness of SAR Neural Radiance Fields (SAR-NeRF) on real SAR image datasets was further validated. 
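A small helper in the spirit of the angular train/test protocol used in the experiments above might look as follows; the integer-degree indexing of the rendered views is an assumption made purely for illustration.

def split_by_azimuth(angles, interval_deg=10):
    # angles: azimuth angle (in degrees) of each available image.
    # Views at multiples of interval_deg form the training set, the rest the test set.
    train_idx = [i for i, a in enumerate(angles) if a % interval_deg == 0]
    test_idx = [i for i, a in enumerate(angles) if a % interval_deg != 0]
    return train_idx, test_idx

# With integer azimuths 0..359 and a 10-degree interval this reproduces the
# 36-image training set and 324-image test set used above.
train, test = split_by_azimuth(list(range(360)), interval_deg=10)
assert len(train) == 36 and len(test) == 324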
The dataset used in this section is the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset, which provides high-resolution real SAR data. The 2S1 vehicle in the dataset was selected as the experimental target. The SAR-NeRF model was evaluated for multi-view image generation from a single pitch angle and multiple azimuth angles, as well as from multiple pitch angles and multiple azimuth angles. It is important to note that the original amplitude values of the MSTAR data were used in this study, and the image size was set to 128×128. Therefore, the generated images by SAR-NeRF retained the statistical characteristics of the real SAR images. §.§.§ technical details In the scenario of a single pitch angle, we have selected the MSTAR dataset with an pitch angle of 17 degrees. From this dataset, we have chosen 299 images as our samples. Specifically, we have taken 24 images with azimuth angles spaced 15 degrees apart as the training set, while the remaining 275 images form the testing set. Figure 14 showcases the results of SAR Neural Radiance Field (NeRF) generation on the MSTAR dataset. The observation from Figure 14 reveals that the SAR Neural Radiance Field (NeRF) not only accomplishes a fine fit to the target details but also reconstructs the shadow regions of the objects. This further validates the effectiveness and practicality of our SAR-NeRF voxel rendering approach based on mapping projection principles. Building upon this experiment, we further explored azimuth angle intervals of 5^∘, 10^∘, and 15^∘. The experimental results substantiate that SAR-NeRF maintains impressive performance even when the sample quantity undergoes variations, as illustrated in Table 2. Building upon the previous experiments, this study further investigates the multi-view image generation capabilities of the SAR Neural Radiance Field (NeRF) under varying pitch angles. Specifically, we selected the 2S1 vehicle and utilized data at pitch angles of 15^∘, 17^∘, and 30^∘, with azimuth angle intervals of 5^∘, as the training set for SAR-NeRF. The remaining data, which were not used during training, were employed for azimuth angle interpolation experiments. The results of these experiments are depicted in Figure 15. Additionally, a set of data was generated at pitch angles of 12^∘,20^∘,35^∘, where corresponding real SAR images were not available. Therefore, only the generated results are showcased in Figure 16. The aforementioned experiments provide evidence that the SAR Neural Radiance Field (NeRF) is capable of learning the spatial geometric information of the target scene, rather than simply interpolating between adjacent angles for smooth effects. In addition, this study also explored the extraction of three-dimensional models from the MSTAR dataset. However, progress in this aspect was limited due to the challenges posed by background noise, clutter, and variations in the background within the dataset, as well as the limited coverage of pitch angles. §.§ Data augumentation using SAR-NeRF under FSL conditions To validate the performance improvement of using NeRF-generated data for FSL classification tasks, we conducted experiments using ten classes of MSTAR data as the dataset for the classification network, with ResNet50 as the classification network. First, we selected 36, 24, 12, 8, 4 SAR images from each class of MSTAR data to create five sets of test samples. 
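A minimal sketch of this few-shot setup is shown below. The k-shot subset selection and the mapping from classes to image paths are placeholders, and only the ten-class ResNet50 head follows the description in the text.

import random
import torch.nn as nn
from torchvision import models

def build_classifier(num_classes=10):
    # ResNet50 backbone with a new ten-way head for the MSTAR classes.
    model = models.resnet50()
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def sample_k_shot(paths_per_class, k):
    # paths_per_class: dict mapping class name -> list of image paths (assumed layout).
    return {cls: random.sample(paths, k) for cls, paths in paths_per_class.items()}

# The five few-shot regimes evaluated below, with and without SAR-NeRF augmentation.
for k in (36, 24, 12, 8, 4):
    model = build_classifier()
    # train on sample_k_shot(...), optionally extended with SAR-NeRF generated views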
We first trained the classification network on these samples alone to obtain the baseline classification accuracy without SAR-NeRF-enhanced data. Next, we used the same five sample sets as training data for SAR-NeRF, generated images of each target from additional viewing angles, and used the augmented data as the training set for the classification network to obtain the classification accuracy with SAR-NeRF-enhanced data. Table 4 presents the impact of SAR-NeRF-enhanced data on classification accuracy under FSL conditions. When the training sample size is 36 or 24 shots, the accuracy without augmentation already exceeds 95%, leaving only a small gap to the accuracy obtained with SAR-NeRF-enhanced data. As the training sample size decreases to 12 and 8 shots, the accuracy with SAR-NeRF-enhanced data shows a significant improvement over the accuracy without augmentation. When the training sample size drops to 4 shots, SAR-NeRF-enhanced data still improves accuracy, but the margin of improvement shrinks. In summary, SAR-NeRF can substantially enhance classification performance when the training sample size is moderate. When the training sample size is already sufficient, the original classification network alone meets the performance requirements; conversely, when the training sample size is extremely limited, the generation quality of SAR-NeRF itself degrades, which diminishes its contribution to the classification task.
http://arxiv.org/abs/2307.07629v1
20230714205949
Contracting with Heterogeneous Researchers
[ "Han Wang" ]
econ.TH
[ "econ.TH" ]
Optical Studies of Seven Bright Southern Cataclysmic Variable Stars David C. Katz August 12, 2023 =================================================================== We study the design of contracts that incentivize a researcher to conduct a costly experiment, extending the work of <cit.> from binary states to a general state space. The cost is private information of the researcher. When the experiment is observable, we find the optimal contract and show that higher types choose more costly experiments, but not necessarily more Blackwell informative ones. When only the experiment result is observable, the principal can still achieve the same optimal outcome if and only if a certain monotonicity condition with respect to types holds. Our analysis demonstrates that the general case is qualitatively different than the binary one, but that the contracting problem remains tractable. Keywords: Adverse Selection, Bayesian Persuasion, Information Acquisition. JEL Classification: D82, D83. § INTRODUCTION Acquiring information through contracting is a prevalent practice when facing significant decisions. Governments may engage experts to evaluate vaccine efficacy, while business owners may consult with economists to design investment plans. The principal relies on the agent to gather information in the hopes of making more informed decisions. For instance, in vaccine testing, the agent has a significant degree of flexibility in selecting experimental designs but must comply with legal requirements, such as pre-registering trials and publicly disclosing their results. Despite having hard information from the researcher's experiment, two major concerns remain regarding the agent's incentives: Firstly, the agent may have more information about the cost of acquiring information than the principal. Secondly, the principal may lack the ability to observe or validate the research method based solely on the observed result. We study a principal-agent model of information acquisition, motivated by the example of vaccine testing. The principal seeks to hire a researcher to gather information about the state of the world. Although the researcher has no intrinsic preferences over the decision being made, the principal can incentivize the researcher to design an experiment by offering payments before the true state is revealed. We assume that the researcher can choose any experiment at a private cost. The timing of the model is as follows: the principal offers a contract, and the researcher decides whether to accept or reject the offer. If the offer is accepted, the researcher commits to a stochastic information structure, which will generate a public signal. Finally, the principal chooses an action based on the information revealed by the signal. This paper builds on the work of <cit.>, who analyzed the optimal contract with a stylized structure of two states and binary signals. We broaden this work by allowing for more states, which is necessary for many real-world decision problems. Our analysis shows that including more states can significantly affect the results, but the contracting problem remains tractable. Our findings are particularly relevant in many applications, as numerous decision problems involve more than two states. For example, in vaccine testing, the goal is often to determine the possible side effects, which can involve multiple states. Similarly, companies purchasing consumer data from a platform may need to segment the population based on demographics, location, and browsing history. 
Each segment can represent a different state in such cases, which can significantly impact the optimal contract. This paper assumes that research results are observable and contractible. We distinguish between two environments based on the observability of research methods: the methods-based contracting environment, where research methods are observable to the principal, and the results-based contracting environment, where research methods are not observable. This distinction helps address the hidden-action problem and facilitates effective comparison. We will consider the optimal contract within the most general class under methods-based contracting and explore whether the induced outcome can be achieved using the more restricted class of results-based contracts. Under methods-based contracting, our goal is to determine the specific experiment choice function that the principal wants to implement. When there are only two states, <cit.> establishes a condition for the optimal experiment choice function, called “Blackwell monotonicity”. This means that higher types choose more informative experiments in the Blackwell order, thus simplifying the principal's contracting problem. However, in scenarios involving more than two states, we present an example that shows the optimal experiment choice function may not exhibit Blackwell monotonicity. The intuition is that lower costs make it better to choose more extreme posteriors. With two states, posteriors move in one dimension, but with more than two states, posteriors can “rotate” towards the simplex's corner in a way that potentially violates Blackwell monotonicity. Nevertheless, we establish a weaker monotonicity condition, asserting that higher types opt for more costly experiments, which can still streamline the principal's contracting problem. In the context of results-based contracting, we first explore results-based contracts that incorporate type revelation, departing from <cit.>'s emphasis on payments that depend solely on research results. The principal gains greater flexibility in designing incentives by permitting payments to be conditional on the reported type. This provides an explanation for budget proposals in grant applications. Our finding connects 's work with <cit.> and demonstrates that we can construct an outcome-equivalent direct contract with results-based payments for any incentive-compatible methods-based contract. We then study results-based contracts without type revelation. Analyzing this specific class of contracts is relevant for many applications as real-world contracts may lack a screening mechanism. When there are only two states, <cit.> shows that any binary, Blackwell-monotone experiment choice function could be implemented through results-based contracts and hence that the outcome of the optimal methods-based contract can be achieved. Blackwell monotonicity is also necessary for implementation. To accommodate more than two states, we define a novel monotonicity concept, which depends on the cost function. We show that this concept characterizes the existence of a results-based contract outcome-equivalent to the optimal methods-based contract. As a result, we can identify when the principal can focus on results-based contracts without any loss, which is helpful in many applications that involve multiple states. An intriguing example arises when the principal wants to implement a “symmetric” experiment, that is, one where the cost of generating each piece of evidence is the same. 
Moreover, we highlight the connections between different notions of monotonicity and illustrate the construction of such contract for scenarios involving two types. Our construction remains applicable even when the optimal experiment choice function does not exhibit Blackwell monotonicity. Overall, our work extends the literature on learning incentives and offers insights for devising effective and flexible contracts in various settings. Related Literature. — This paper belongs to the literature on contracting for acquiring information, which has been extensively studied in various settings. Early work includes <cit.>, where the agent incurs a cost for each observation he draws while the principal is unaware of this cost and the number of observations made. <cit.> builds upon this setting by modeling the agent as an information designer and examining the principal's contracting problem with adverse selection and moral hazard. In a related setting with only adverse selection, <cit.> examines cases where the agent has private information about the set of feasible experiments, and the principal cannot make payments. <cit.> and <cit.> study a model with only moral hazard, where the principal cannot observe the agent's choice of information structure. Both of them address the impacts of risk aversion and limited liability. Our paper shares the assumption with <cit.> and <cit.> that research results are observable and contractible. These two papers provide valuable insights into how to incentivize the researcher in this setting. Nevertheless, incentivizing information acquisition can be challenging when research results are not verifiable. Several papers have proposed solutions to this problem. One approach is to provide incentives based on the ex-post evaluation of the researcher's advice, such as the realized state or the ex-post payoff of the principal. Examples of such work include <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. Another approach is to consider peer monitoring with multiple experts. For instance, <cit.> examine this case by letting payments contingent on the entire vector of reports. In our model, it is the decision maker who designs contracts for procuring information. Recent research has also explored situations where a data broker offers contracts in order to sell information, as surveyed by <cit.>. For example, <cit.> examines a model in which a data broker sells experiments to a decision-maker who has private information about the prior belief. <cit.> investigates a setting where a data broker sells market segmentations to a firm with private cost. <cit.> explores the data broker's optimal selling mechanism when the decision maker has the option to conduct an experiment at an additional cost. Methodologically, this paper is related to mechanism design and information design. See, for instance, <cit.>, <cit.>, <cit.>, <cit.> and, <cit.>. Our model differs from Bayesian persuasion in assuming the agent has no stake in the principal's decision problem. As a result, the agent designs the information to maximize the payment net cost rather than persuades the decision-maker to choose a preferred action. Furthermore, specifying experiments for different types of the researcher can be viewed as a comparative statics question within the literature of rational inattention, as examined by <cit.> and <cit.>. § MODEL SETTING §.§ Notations We write ℝ_+ for the set of non-negative real numbers and ℝ_++ for the set of positive real numbers. 
The notation conv(S) for a set S⊆ℝ^k denotes the convex hull of S. Let the space of probability distributions over a finite set S be Δ(S)={p∈ℝ_+^|S|: ∑_s∈ Sp(s)=1}, where |S| is the cardinality of S. For a distribution p ∈Δ(S), we write supp(p)={s∈ S: p(s)>0}. A full support distribution p has p(s)>0 for all s∈ S. §.§ Setup We extend the principal-agent model of information acquisition presented in <cit.> to accommodate more than two states. The principal (she) is faced with a Bayesian decision problem and can hire a researcher (he) to learn about the uncertain state of the world. Suppose that the state of the world ω can take values in a finite set Ω. The principal's Bayesian decision problem is a triplet D=(A, u, p_0), including a finite set of actions A, a utility function u: Ω× A→ℝ and a full support prior belief over the states p_0∈Δ(Ω). We let v(p):=max_a∈ A[∑_ω∈Ω p(ω)· u(ω,a)] be the maximal achievable expected utility at belief p∈Δ(Ω). The researcher has the same prior belief p_0. Once hired, he can choose any experiment to learn about the state. After observing the signal realization, everyone updates the belief according to Bayes rule. Under the common prior assumption, it is convenient to represent signal structures as distributions over posteriors that they induce. Given p_0, for any distribution over posteriors τ, there exists a signal structure inducing it if and only if τ satisfies 𝔼_p∼τ[p]=p_0 <cit.>. The requirement that the expectation over posteriors equals the prior is known as Bayes plausibility. We denote by X(p_0) ⊆Δ(Δ(Ω)) the set of distributions over posteriors that have finite support and satisfy Bayes plausibility. We will write X instead of X(p_0) to simplify the notation. Let an experiment be τ∈ X. We also call an experiment a research method and refer to the realized posterior as its result. The researcher is better informed about the cost of acquiring information. He has a private type θ∈Θ, and the cost of experiment τ is θ· C(τ). Consider Θ:={θ_1,θ_2,…,θ_N}⊆ℝ_++, where θ_1>θ_2>…>θ_N. A larger subscript is associated with a more efficient type that has a lower cost of running experiments. Type θ is drawn from a commonly known distribution with full support. Let F denote the cumulative distribution function and f denote the corresponding probability mass function. We assume the cost C(τ) is posterior-separable <cit.>, i.e., there is a continuous and strictly convex function c: Δ(Ω)→ℝ_+ with c(p_0)=0 such that C(τ)=𝔼_p∼τ[c(p)]. In this paper, we will take c as a primitive of the model, while conveniently representing 𝔼_p∼τ[c(p)] as C(τ). This paper considers a setting with quasi-linear expected-utility preferences. An outcome is a pair (τ, ϕ)∈ X ×ℝ that specifies an experiment and a transfer. The principal's payoff is 𝔼_p∼τ[v(p)]-ϕ, where 𝔼_p∼τ[v(p)] is the expected utility from decision problem D under experiment τ. The researcher runs an experiment at some cost. A type-θ researcher's payoff is given by ϕ-θ· C(τ). We assume that the researcher has no stake in the decision problem D; he only cares about the payment from the principal and the experiment's cost. The principal maximizes her expected payoff by making a take-it-or-leave-it offer to the researcher. We take the principal's perspective and study the design of optimal contracts. In line with <cit.>, we assume the experiment's results are observable and contractible. 
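To fix ideas, the short Python sketch below computes the distribution over posteriors induced by a finite signal structure, checks Bayes plausibility, and evaluates a posterior-separable cost. The likelihood matrix is arbitrary, and the entropy-reduction specification of c anticipates Example 1 and is only one admissible choice.

import numpy as np

def posteriors_from_signal(prior, likelihood):
    # likelihood[s, w] = probability of signal s in state w.
    joint = likelihood * prior                   # P(s, w)
    signal_prob = joint.sum(axis=1)
    posteriors = joint / signal_prob[:, None]    # Bayes rule, one posterior per signal
    return signal_prob, posteriors

def entropy(p):
    q = p[p > 0]
    return -np.sum(q * np.log(q))

def posterior_separable_cost(prior, signal_prob, posteriors):
    # C(tau) = E_tau[c(p)] with c(p) = H(prior) - H(p), so that c(prior) = 0.
    return np.sum(signal_prob * (entropy(prior) - np.array([entropy(p) for p in posteriors])))

prior = np.array([1/3, 1/3, 1/3])
likelihood = np.array([[0.8, 0.1, 0.1],
                       [0.1, 0.8, 0.1],
                       [0.1, 0.1, 0.8]])
tau_prob, tau_post = posteriors_from_signal(prior, likelihood)
assert np.allclose(tau_prob @ tau_post, prior)   # Bayes plausibility: E_tau[p] = prior
print(posterior_separable_cost(prior, tau_prob, tau_post))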
We consider two distinct environments: methods-based contracting, where the principal is capable of observing the choice of experiments, and results-based contracting, where such observation is not feasible. § METHODS-BASED CONTRACTING In this section, we study a benchmark model assuming that research methods are observable and contractible. By the revelation principle, we can restrict attention to the class of direct revelation contracts, namely “methods-based contracts”. We focus on how to design incentives for the researcher to report his type truthfully. We generalize the analysis in <cit.> to accommodate more complicated D, particularly when the world has more than two states. We define a methods-based contract as a pair of functions (𝒳,T), where 𝒳: Θ→ X is an experiment choice function and T: Θ→ℝ is a payment function. If the researcher reports truthfully, the principal's payoff is U_P(𝒳,T)=𝔼_θ∼ F[𝔼_p∼𝒳(θ)[v(p)]-T(θ)]. The payoff of a type-θ researcher reporting θ^' is U_θ(θ^'|𝒳,T)=T(θ^')-θ· C(𝒳(θ^')). We say that a methods-based contract (𝒳,T) is incentive-compatible (IC) if every type of researcher prefers truthful reporting, i.e., ∀θ∈Θ, θ∈_θ^'∈Θ U_θ(θ^'|𝒳,T). We say that (𝒳,T) is individually rational (IR) if every type of researcher prefers to accept the contract upon truthful reporting, i.e., ∀θ∈Θ, U_θ(θ|𝒳,T)≥ 0. The principal searches over the set of IC and IR methods-based contracts to maximize her ex-ante payoff. The methods-based contracting problem is defined as follows. max_𝒳,T U_P(𝒳,T) s.t. (𝒳,T) is IC and IRMethods Before the analysis, let us introduce two definitions of monotone choice functions. Consider an experiment choice function 𝒳:Θ→ X. * 𝒳 is c-monotone, if C(𝒳(θ_j))≥ C(𝒳(θ_i)), ∀ i,j with j>i. * 𝒳 is Blackwell-monotone, if it is c-monotone for every convex and continuous function c: Δ(Ω)→ℝ. <cit.> These two monotonicity concepts rely on different orderings of experiments. It is known that the definition of Blackwell monotonicity is equivalent to saying that 𝒳(θ_j) is more informative than 𝒳(θ_i) for all i,j with j>i. Note that c-monotonicity is a weakening of Blackwell monotonicity. While c-monotonicity depends on the function c, Blackwell monotonicity does not. As shown in <cit.>, any incentive-compatible contract leads to c-monotonicity and the principal's methods-based contracting problem has much in common with the classic adverse selection problem <cit.>. Define a virtual type function g(θ) by g(θ_N)= θ_N and g(θ_k) =θ_k+F(θ_k+1)/f(θ_k)(θ_k-θ_k+1) for k≤ N-1. We can rewrite the principal's methods-based contracting problem as follows. (𝒳^*, T^*) solves the principal's problem (<ref>) if and only if the following conditions hold: 𝒳^*∈𝒳 𝔼_θ[𝔼_p∼𝒳(θ)[v(p)-g(θ)c(p)]] s.t. 𝒳 is c-monotone T^*(θ_1) = θ_1 C(𝒳^*(θ_1)) T^*(θ_k) = θ_k C(𝒳^*(θ_k))+∑_i=1^k-1(θ_i-θ_i+1) C(𝒳^*(θ_i)) for k≥ 2 Moreover, if T^* is derived from 𝒳^* according to (<ref>) and (<ref>), then among all IC and IR methods-based contracts (𝒳^*,T), T^*(θ)≤ T(θ), ∀θ∈Θ. See Lemma 4 in <cit.>. Lemma <ref> provides a tractable way of finding an optimal methods-based contract. We can solve 𝒳^* from the constrained problem (<ref>) and derive the corresponding T^* by applying (<ref>) and (<ref>) to 𝒳^*. Notably, T^* is the cheapest way to implement 𝒳^*. The constrained problem (<ref>) features an additively separable objective in θ, while the c-monotonicity constraint imposes requirements on experiments across different types. 
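The virtual-type formula and the payment rule (<ref>)-(<ref>) translate directly into code. In the sketch below the experiment costs C(𝒳^*(θ_k)) are taken as given inputs, and the cdf and pmf values are supplied per type; the helper names are ours.

def virtual_types(theta, F, f):
    # theta[0] > theta[1] > ... > theta[N-1]; F[k] = cdf at theta[k], f[k] = pmf at theta[k].
    N = len(theta)
    g = [0.0] * N
    g[N - 1] = theta[N - 1]
    for k in range(N - 1):
        g[k] = theta[k] + F[k + 1] / f[k] * (theta[k] - theta[k + 1])
    return g

def optimal_payments(theta, costs):
    # costs[k] = C(X*(theta_k)); returns T*(theta_k) as in the lemma.
    T = [theta[0] * costs[0]]
    for k in range(1, len(theta)):
        info_rent = sum((theta[i] - theta[i + 1]) * costs[i] for i in range(k))
        T.append(theta[k] * costs[k] + info_rent)
    return T

# The two-type example introduced below has theta = (9/4, 2) with equal weights,
# which gives g = (5/2, 2).
print(virtual_types([2.25, 2.0], F=[1.0, 0.5], f=[0.5, 0.5]))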
Our approach begins with considering a relaxed program that omits the c-monotonicity constraint, making it easier to solve. Subsequently, we will verify that the solution derived from the relaxed program satisfies the c-monotonicity constraint. We define the relaxed problem as follows. It can be written as a set of type-specific problems, i.e., choosing an experiment for each θ. 𝒳^*∈𝒳 𝔼_θ[𝔼_p∼𝒳(θ)[v(p)-g(θ)c(p)]] 𝒳^*(θ)∈τ∈ X 𝔼_p∼τ[v(p)-g(θ)c(p)] ∀θ∈Θ The following proposition shows that we can often safely ignore the c-monotonicity constraint. Any solution to the relaxed program must satisfy c-monotonicity, then it also solves the constrained problem (<ref>). If the virtual type g(θ) is strictly increasing in θ, then any experiment choice function that solves the relaxed program must be c-monotone. Consider θ,θ^'∈Θ such that θ< θ^'. If virtual type g is strictly increasing, then g(θ)<g(θ^'). Recall that we interpret θ as the more efficient type in {θ,θ^'}. To show that 𝒳^* is c-monotone, we need C(𝒳^*(θ))≥ C(𝒳^*(θ^')). Suppose that C(𝒳^*(θ))<C(𝒳^*(θ^')). By the optimality of 𝒳^*, experiment 𝒳^*(θ^') solves the type-specific problem for θ^'. This implies that 𝔼_p∼𝒳^*(θ^')[v(p)-g(θ^')c(p)]≥𝔼_p∼𝒳^*(θ)[v(p)-g(θ^')c(p)]. Rearranging terms and using g(θ)<g(θ^'), we get 𝔼_p∼𝒳^*(θ^')[v(p)]-𝔼_p∼𝒳^*(θ)[v(p)] ≥ g(θ^')[C(𝒳^*(θ^'))- C(𝒳^*(θ))] > g(θ)[C(𝒳^*(θ^'))- C(𝒳^*(θ))] This can be written as 𝔼_p∼𝒳^*(θ^')[v(p)-g(θ)c(p)]> 𝔼_p∼𝒳^*(θ)[v(p)-g(θ)c(p)], meaning that 𝒳^*(θ) does not solve the type-specific problem at θ. This contradicts the optimality of 𝒳^*. Proposition <ref> verifies that the candidate experiment choice function is indeed optimal. It is based on the intuition that people will demand more information, as the marginal cost goes down. The proof of this proposition holds for any finite state space. It is worth noting the connection to <cit.>: With |Ω|= 2, <cit.> proves a stronger statement, namely, any choice function that solves the relaxed program must be Blackwell-monotone. However, this statement does not hold in the case of |Ω|> 2.[For state spaces with |Ω|≥ 2, the impact of the marginal cost parameter on the choice of experiments has been examined by <cit.> and <cit.>. According to their results, a more efficient type will choose more extreme posteriors. This result is stated formally in Proposition 3 of <cit.> and Proposition 4 of <cit.>. We will discuss how our results relate to theirs in the Appendix <ref>.] Here is an example with three states, where the experiment choice function that solves the relaxed program is not Blackwell-monotone. Suppose that there are three states, Ω={ω_1,ω_2,ω_3}. Let the action space be A={a_1,a_2,a_3} and the prior belief be p_0=(1/3,1/3,1/3). The utility function u(a,ω) is described using the following table. [ ω_1 ω_2 ω_3; a_1 5 4 2; a_2 0 5 3; a_3 5 1 5; ] The researcher has two possible types, Θ={θ_1,θ_2}, where θ_1=9/4, θ_2=2, and f(θ_1)=f(θ_2)=1/2. We can calculate the virtual type g(θ): g(θ_1)=5/2 and g(θ_2)=2. Let the information cost be the reduction in Shannon entropy c(p)=H(p_0)-H(p), where H(p)=-∑_ω∈Ωp(ω)log(p(ω)). We solve the optimal choice function using the characterization in <cit.> and <cit.>. See Figure <ref> for a representation in the belief simplex. We defer the numerical results to the Appendix <ref>. Note that for any finite Ω, given two distributions over posteriors τ, τ^'∈ X, τ is more informative than τ^' in the Blackwell order, only if (τ^')⊆((τ)). 
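This support-inclusion test is easy to run on the numerical solution reported in the Appendix: with three affinely independent posteriors, membership in the convex hull amounts to non-negativity of the (unique) barycentric weights. The Python sketch below hard-codes the posteriors from the Appendix and performs the check; it is an illustration, not part of the formal argument, and the helper names are ours.

import numpy as np

S1 = np.array([[0.3626, 0.4899, 0.1475],    # p_1
               [0.0491, 0.7308, 0.2201],    # p_1'
               [0.3626, 0.1475, 0.4899]])   # p_1''
S2 = np.array([[0.4141, 0.4790, 0.1069],    # p_2
               [0.0340, 0.7898, 0.1762],    # p_2'
               [0.4141, 0.1069, 0.4790]])   # p_2''

def barycentric(p, S):
    # Weights alpha with alpha @ S = p and alpha.sum() = 1 (S affinely independent).
    A = np.vstack([S.T, np.ones(len(S))])
    return np.linalg.lstsq(A, np.append(p, 1.0), rcond=None)[0]

for name, p in zip(["p_1", "p_1'", "p_1''"], S1):
    inside = bool(np.all(barycentric(p, S2) >= -1e-9))
    print(name, "in conv(supp(X*(theta_2)))?", inside)
# Only p_1 lies inside; p_1' and p_1'' do not, as discussed next.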
From Figure <ref>, for 𝒳^* to be Blackwell-monotone, every vertex of the thick-lined triangle should be inside the dashed-lined triangle. However, we find two vertices of the thick-lined triangle that are not (i.e., p_1^' and p_1^''). Therefore, 𝒳^* does not satisfy Blackwell monotonicity. With a higher virtual type, every posterior will move towards a corner of the simplex. The simplex is a one-dimensional line segment when there are two states, limiting posteriors to move in only one dimension. However, with more than two states, the simplex becomes more complex and enables movement in multiple dimensions. § RESULTS-BASED CONTRACTING In Section 3, we assumed that the principal could observe and design contracts based on the researcher's research method. However, due to the research method's stochastic nature, only one posterior can be realized from an experiment. One may wonder what happens if the principal has no way to verify the choice made by the researcher. We will assume that research methods are not observable. The realized posterior is observable and contractible, so payments can be made contingent on it. §.§ Results-based payments with screening We start by analyzing direct contracts in a setting similar to <cit.>, where the agent has both private information and private decisions. In a direct contract, the principal suggests the choice of experiment based on the reported type and pays as a function of both the reported type and the realized posterior. As before, an experiment choice function is a map 𝒳:Θ→ X. A contingent payment function is a map t:Θ×Δ(Ω) →ℝ. If the researcher reports his type to be θ∈Θ and the realized posterior is p∈Δ(Ω); then the researcher will get payment t(θ,p)∈ℝ. We say that (𝒳,t) is incentive-compatible if 𝔼_p∼𝒳(θ)[t(θ,p)]-θ𝒞(𝒳(θ))≥𝔼_p∼τ[t(θ^',p)]-θ𝒞(τ) ∀θ,θ^'∈Θ, ∀τ∈ X. Incentive compatibility involves two constraints: obedience and honesty. Obedience means that the researcher follows the suggested experiment choice function, and honesty requires that the researcher reports truthfully. When research methods are not observable to the principal, the researcher may choose outside the set of suggested experiments, i.e., {𝒳(θ):θ∈Θ}. The incentive compatibility condition ensures that it should not be a profitable deviation for the researcher to choose any τ∈ X. We will show that given certain regularity conditions, achieving the outcome of any incentive-compatible methods-based contract is possible by using results-based payments with screening. Additionally, We will show that an optimal methods-based contract must satisfy this regularity condition. Therefore, the principal can attain the same expected payoff as under methods-based contracting. Experiment choice function 𝒳 is non-redundant if (𝒳(θ))⊆Δ(Ω) is affinely independent, ∀θ∈Θ. Suppose that 𝒳 is non-redundant. For any methods-based contract (𝒳,T) that is incentive-compatible, there is a contingent payment rule t:Θ×Δ(Ω) →ℝ, such that (𝒳,t) is incentive-compatible and 𝔼_p∼𝒳(θ)[t(θ,p)]=T(θ). By affine independence, | (𝒳(θ))|≤|Ω|, ∀θ∈Θ. It follows that for given θ, there exists a hyperplane that contains all points in {(p,c(p)):p∈ (𝒳(θ))}.[We can select one arbitrarily if there are multiple hyperplanes that contain all these points. In addition to affine independence, if | (𝒳(θ))|=|Ω|, then such hyperplane is uniquely pinned down by {(p,c(p)):p∈ (𝒳(θ))}.] We can identify this hyperplane by an affine function H_θ:Δ(Ω)→ℝ. Set T=min_θ∈Θ[min_p∈Δ(Ω)[T(θ)-θ H_θ(p)]]. 
We can define a contingent payment rule as t(θ,p)= T(θ) if p∈(𝒳(θ)) T if p∉(𝒳(θ)) , ∀θ∈Θ. Immediately, 𝔼_p∼𝒳(θ)[t(θ,p)]=T(θ) holds from the definition. We need to show that (𝒳,t) is incentive-compatible. Obedience Upon reporting θ, the researcher's payoff of choosing any experiment τ∈ X is bounded by that of choosing 𝒳(θ): 𝔼_p∼τ[t(θ,p)-θ c(p)] ≤ 𝔼_p∼τ[T(θ)-θ H_θ(p)] = T(θ)-θ H_θ(𝔼_p∼τ[p]) = T(θ)-θ H_θ(𝔼_p∼𝒳(θ)[p]) = 𝔼_p∼𝒳(θ)[t(θ,p)-θ c(p)], where by construction we have t(θ,p)-θ c(p)≤ T(θ)-θ H_θ(p), ∀ p∈Δ(Ω), the first equality holds by H_θ being affine, the second equality holds by τ and 𝒳(θ) both being Bayes-plausible, and the last equality holds because H_θ(p)=c(p), ∀ p∈ (𝒳(θ)). Honesty Truthful reporting is guaranteed by the incentive compatibility of (𝒳,T). Conditional on obedience, the inequality in (<ref>) reduces to one of the incentive-compatible constraint for (𝒳,T), i.e., T(θ)-θ·𝒞(𝒳(θ))≥ T(θ^')-θ·𝒞(𝒳(θ^')). Non-redundancy is required for this contingent payment function to work. If 𝒳(θ) has affinely dependent support, the researcher reporting θ can deviate to an experiment whose support is a proper subset (𝒳(θ)). Doing so will preserve the same payment but strictly lower the cost. Thus, it is profitable for the researcher. Our analysis complements <cit.> by showing that it is possible to induce any desired experiment-payment pair using contingent payments on both the reported type and the realized posterior. Because type revelation gives more flexibility in designing incentives, Theorem <ref> speaks not just to the optimal methods-based contract but to all incentive-compatible methods-based contracts. The intuition is closely related to Proposition 1 in <cit.>: when the researcher has unlimited liability, a “forcing” contract can solve the hidden-action problem. The following lemma allows us to apply Theorem <ref> to an optimal methods-based contract. Therefore, results-based contracting can achieve the same expected payoff for the principal as methods-based contracting. There always exists an optimal methods-based contract (𝒳^*,T^*) such that 𝒳^* is non-redundant. See the Appendix <ref>. In concluding this section, we will discuss the applications of results-based payments with screening. The main takeaway is that screening helps provide learning incentives. A type report can be interpreted as a cost estimate. To put this in context, consider the process of applying for a grant. Applicants are typically required to submit not just a proposal that outlines their projects but also a budget proposal that breaks down all the anticipated expenses. The practice of screening, particularly in terms of cost assessment, has been gaining increasing traction within the field of medical research. While there is a recognized shortfall in transparency concerning the cost reporting of clinical trials, ongoing efforts are being made to enhance this practice. A notable example of these initiatives is led by Doctors Without Borders, also known as Médecins Sans Frontières <cit.>. §.§ Results-based payments without screening In this subsection, the principal is restricted to using payments that can only depend on the realized posterior. Following the terminology in <cit.>, we define a results-based contract as a contingent payment function t:Δ(Ω)→ℝ. We say that (𝒳,t) is incentive-compatible if 𝔼_p∼𝒳(θ)[t(p)]-θ𝒞(𝒳(θ))≥𝔼_p∼τ[t(p)]-θ𝒞(τ) ∀θ∈Θ, ∀τ∈ X. 
Because the principal cannot price discriminate according to different reports, a results-based contract must provide incentives for all researcher types. We aim to investigate whether (𝒳^*,T^*) can be implemented using a results-based contract. In the case of two states, <cit.> demonstrates that the optimal choice function 𝒳^* must be Blackwell-monotone. In such a case, a binary, Blackwell-monotone choice function is implementable by a results-based contract that induces the payment rule T^*. However, as shown in Example <ref>, this may not be the case when there are more than two states, as the optimal choice function need not be Blackwell-monotone. Thus, it is important to understand how Blackwell monotonicity relates to implementability in a broader context. We will introduce an additional assumption on 𝒳^* going forward to simplify the analysis. This assumption requires that every experiment has the same number of posteriors as |Ω|. While this assumption is restrictive, it allows for a clear characterization of implementable experiment choice functions.[It rules out cases where the number of posteriors in the support of the desired experiment is strictly less than the number of states. Without this assumption, comprehending the outcome-equivalence result becomes challenging due to freedom in separating the adjacent types.] Experiment choice function 𝒳 has full dimension, if it is non-redundant and satisfies | (𝒳(θ))|=|Ω|, ∀θ∈Θ. In particular, the optimal experiment choice function in Example <ref> has full dimension. The number of posteriors included in the support generally depends on the function v(p)-g(θ) c(p) and the prior. With Shannon entropy cost, <cit.> provides a test for whether an action should be chosen at optimum, which can help determine the number of posteriors. While this assumption may not be applicable in all cases, it can be innocuous in some applications, such as when the principal faces a "matching-the-state" decision problem with a uniform prior. Before the main results, it will be convenient to introduce the following notations. We denote the support of 𝒳^*(θ) by S_θ. Slightly abusing notations, we will write S_θ_k as S_k when we refer to an indexed type space. Under full dimensionality, S_k is a basis for the affine subspace that contains Δ(Ω). For any posterior in the simplex, there is a unique way to represent it as an affine combination of the elements in S_k. Write S_k={p_1^k, p_2^k, …, p_|Ω|^k}. Formally, ∀ p∈Δ(Ω), ∃ unique α^k(p)≡ (α_1^k,α_2^k,…,α_|Ω|^k)∈ℝ^|Ω| such that p=∑_i=1^|Ω|α_i^kp_i^k and ∑_i=1^|Ω|α_i^k=1. Define H_k(p)=∑_i=1^|Ω|α_i^k(p)c(p_i^k).[The definition here is identical to that of H_θ in the proof of Theorem <ref>. To simplify notation, we write H_k instead of H_θ_k.] This function H_k is affine over Δ(Ω) by definition and will play an important role in the following analysis. We introduce a new property of choice functions named strong c-monotonicity. The definition involves an average of affine functions H_k weighted by the relative distances between types. It requires that certain inequalities hold when we compare the cost function to this affine function over a particular set of posteriors. An experiment choice function 𝒳 is strongly c-monotone if the following inequalities hold for all i,j∈{1,2,…, N} such that i<j. c(p)≤∑_k=i^j-1θ_k-θ_k+1/θ_i-θ_jH_k(p) , ∀ p∈ S_i and c(p) ≥∑_k=i^j-1θ_k-θ_k+1/θ_i-θ_jH_k(p) , ∀ p∈ S_j The theorem below establishes an outcome equivalence between results-based contracting and methods-based contracting. 
If the principal wants to implement an experiment choice function with full dimension, the equivalence between two contracting regimes requires exactly strong c-monotonicity. Let (𝒳^*,T^*) be an optimal methods-based contract and suppose that 𝒳^* is fully dimensional. There exists a contingent payment rule t: Δ(Ω) →ℝ such that (𝒳^*,t) is incentive-compatible and 𝔼_p∼𝒳^*(θ)[t(p)]= T^*(θ), if and only if 𝒳^* is strongly c-monotone. See the Appendix <ref>. With two states, it is known that the outcome-equivalence result relies on Blackwell monotonicity, a notion independent of the specific cost function. Theorem <ref> generalizes 's insights to encompass settings with more than two states, pointing out that this result does depend on the cost function in general settings. Furthermore, we demonstrate that payments based solely on the realized posterior can be applied more broadly. In Example <ref>, we can establish that the optimal choice function is strongly c-monotone, even though it is not Blackwell-monotone. As per Theorem <ref>, results-based contracting attains the same optimal value for the principal as methods-based contracting. Later we will explain how to construct the optimal results-based contract. To help readers understand strong c-monotonicity, we have a detailed discussion in Section <ref>. In particular, symmetric settings often guarantee strong c-monotonicity, which carries significant implications for designing incentives. §.§.§ More on strong c-monotonicity We first introduce two conditions that are closely related to strong c-monotonicity. These conditions can be useful in determining whether an experiment choice function is strongly c-monotone. * A sufficient condition for strong c-monotonicity is for all k∈{1,2,…, N-1}, c(p)≤ H_k(p) , if p∈∪_i<k S_i; and c(p) ≥ H_k(p) , if p∈∪_i>k S_i. * A necessary condition for strong c-monotonicity is for all k∈{1,2,…, N-1}, c(p)≤ H_k(p) , if  p∈ S_k-1; and c(p) ≥ H_k(p) , if  p∈ S_k+1. (Put S_0=∅. ) See the Appendix <ref>. To interpret, the idea behind these two conditions is that 𝒳 has a nested structure with respect to c. Namely, for k∈{1,2,…,N-1}, we can identify two sets in the simplex: U_k={p∈Δ(Ω)|c(p)≥ H_k(p)} and D_k={p∈Δ(Ω)|c(p) ≤ H_k(p)}. Geometrically, D_k is a convex set that inscribes the convex hull of S_k and U_k is the closure of D_k's complement. As shown in Figure <ref>, it is easy to visualize these sets for the case with three states. Condition (<ref>) means for any fixed k, if i<k, then S_i⊆ D_k and if i>k, then S_i⊆ U_k. Condition (<ref>) only has the nested requirement for adjacent types, meaning S_k-1⊆ D_k and S_k+1⊆ U_k. Note that we only need N-1 affine hyperplanes H_k here.[We may define a partial order on X and rewrite condition (<ref>) and (<ref>) in terms of the order. But we have to treat the most efficient type θ_N separately because we don't need c(p)≤ H_N(p) for p∈ S_i such that i<N.] The sufficient condition consists of (N-1)^2 inequalities, while the necessary condition only has 2(N-1)-1. In the special case with two types (N=2), they have the same content, so both are equivalent to strong c-monotonicity. With N=2, conditions (<ref>), (<ref>) and (<ref>) are equivalent. Next, we compare strong c-monotonicity to the previously mentioned concepts: c-monotonicity and Blackwell monotonicity. The following statements hold: * Strong c-monotonicity implies c-monotonicity; the converse is not true. * With |Ω|=2, strong c-monotonicity is equivalent to Blackwell monotonicity. 
* With |Ω|>2, Blackwell monotonicity does not imply strong c-monotonicity; strong c-monotonicity does not imply Blackwell monotonicity. See the Appendix <ref>. The first statement in Proposition <ref> motivates us to use the name of strong c-monotonicity. The relationship between strong c-monotonicity and Blackwell monotonicity crucially depends on the dimension of state space. The reason is that with |Ω|=2, the convex hull of S_k conincides with D_k, but with |Ω|>2, the convex hull of S_k is generally a proper subset of D_k. The next Proposition <ref> strengthens c-monotonicity to strong c-monotonicity. An experiment τ is symmetric if there exists h ∈ℝ such that for every p∈ (τ), c(p)=h. We say that an experiment choice function 𝒳 is symmetric if for every θ∈Θ, 𝒳(θ) is symmetric. If an experiment choice function 𝒳 is c-monotone and symmetric, then it is strongly c-monotone. If 𝒳 is symmetric, then for every k, H_k is a constant function, i.e., ∃ h_k∈ℝ: ∀ p ∈Δ(Ω), H_k(p)=h_k. Thus, D_k is the sublevel set of function c where it takes on the constant value of h_k. Fix k ∈{1,2,…,N-1}. First, suppose i<k. By c-monotonicity, we have h_i<h_k. For any p∈ S_i, c(p)=h_i<h_k, which means p∈ D_k. Next, suppose i>k. c-monotonicity implies h_i>h_k. For any p∈ S_i, c(p)=h_i>h_k, which means p∈ U_k. Condition (<ref>) holds and so does strong c-monotonicity. Recall that the optimal experiment choice function is always c-monotone. If it is symmetric, then the optimal methods-based contract can be replicated by a results-based contract. We also know that the challenge with providing learning incentives must come from “asymmetry”. In a pure moral hazard model, <cit.> shows that an experiment is implementable if and only if it is symmetric. One may ask whether symmetry is necessary for strong c-monotonicity in our setting. The answer is no. We can refer back to the optimal experiment choice function in Example <ref>, which satisfies strong c-monotonicity but not symmetry. Therefore, it is possible to obtain the equivalence result even when the principal seeks to implement an experiment choice function that forgoes symmetry. §.§.§ An illustration of our results with Example <ref> We will illustrate an application of Theorem <ref> to Example <ref> by showing that the optimal experiment choice function is strongly c-monotone and then discussing how to construct an outcome-equivalent results-based contract. example-1 (Continued.) With two types, strong c-monotonicity reduces to a simple condition S_2⊆ U_1. As shown in the Figure <ref>, the vertices of the thick-lined triangle represent posteriors in S_1={p_1, p_1^', p_1^''} for the less efficient type θ_1; the vertices of the dashed-lined triangle are posteriors in S_2={p_2, p_2^', p_2^''} for the more efficient type θ_2. The light grey area represents set D_1; the dark grey area represents set U_1. We find S_2⊆ U_1. We can pick a sufficiently small t∈ℝ and construct a results-based contract: t(p)=θ_1c(p), if p∈ S_1 θ_2c(p)+(θ_1-θ_2)H_1(p), if p∈ S_2 t, otherwise It is easy to check that (𝒳^*,t) is incentive-compatible and 𝔼_p∼𝒳^*(θ)[t(p)]= T^*(θ). For each type of researcher, we can draw his payoff as a function of the realized posterior. Under strong c-monotonicity, it follows that there is a hyperplane weakly above all the points. We can identify this hyperplane by an affine function s_k,t:Δ(Ω)→ℝ. The type-θ_k researcher's highest expected payoff is s_k,t(p_0), achieved by choosing 𝒳^*(θ_k). 
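The claim S_2 ⊆ U_1 can be verified numerically from the posteriors reported in the Appendix. The Python sketch below evaluates the entropy-reduction cost and the affine function H_1 and checks that c(p) ≥ H_1(p) on supp(𝒳^*(θ_2)); it is an illustrative computation rather than part of the proof, and the helper names are ours.

import numpy as np

S1 = np.array([[0.3626, 0.4899, 0.1475],
               [0.0491, 0.7308, 0.2201],
               [0.3626, 0.1475, 0.4899]])
S2 = np.array([[0.4141, 0.4790, 0.1069],
               [0.0340, 0.7898, 0.1762],
               [0.4141, 0.1069, 0.4790]])
p0 = np.full(3, 1.0 / 3.0)

def c(p):
    # Entropy-reduction cost of Example 1: c(p) = H(p0) - H(p).
    q = p[p > 0]
    return -np.sum(p0 * np.log(p0)) + np.sum(q * np.log(q))

def H1(p):
    # Affine extension of c restricted to S_1, via barycentric weights.
    A = np.vstack([S1.T, np.ones(3)])
    alpha = np.linalg.lstsq(A, np.append(p, 1.0), rcond=None)[0]
    return alpha @ np.array([c(q) for q in S1])

for p in S2:
    assert c(p) >= H1(p)     # S_2 is contained in U_1, so strong c-monotonicity holds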
Moreover, according to (<ref>) and (<ref>), simple calculation can confirm that s_k,t(p_0)=T^*(θ_k)-θ_k𝒞(𝒳^*(θ_k)). Strong c-monotonicity is important for the type-θ_1 researcher not to mimic type-θ_2. Suppose that it is violated. The contract t we constructed won't be incentive-compatible. For example, if c(p_2)<H_1(p_2), then under contract t, the type-θ_1 researcher's payoff at p_2 will be above s_1(p_2). The researcher can get a strictly higher expected payoff by choosing an experiment with support {p_2,p_1,p_1^',p_1^''} than that of choosing 𝒳^*(θ_1). Interestingly, we can strengthen this result, as the failure of strong c-monotonicity will prevent us from getting any incentive-compatible results-based contract that induces (𝒳^*,T^*). In other words, strong c-monotonicity is necessary for implementing the optimal methods-based contract. * § PROOFS OMITTED FROM MAIN TEXT Define six posterior beliefs as follows. p_1=(0.3626,0.4899,0.1475) p_1^'=(0.0491,0.7308,0.2201) p_1^''=(0.3626,0.1475,0.4899) p_2=(0.4141,0.4790,0.1069) p_2^'=(0.0340,0.7898,0.1762) p_2^''=(0.4141,0.1069,0.4790) We solved the optimal experiment choice function following Proposition 2 of <cit.>. 𝒳^*(θ_1) is a distribution over posteriors such that p_1 occurs with probability 0.3838, p_1^' occurs with probability 0.0933 and p_1^'' occurs with probability 0.5229. 𝒳^*(θ_2) is a distribution over posteriors such that p_2 occurs with probability 0.2186, p_2^' occurs with probability 0.2125 and p_2^'' occurs with probability 0.5689. Note that p_1^'∈(𝒳^*(θ_1)) and p_1^'∉((𝒳^*(θ_2))), which means 𝒳^*(θ_2) is not Blackwell more informative than 𝒳^*(θ_1). We will show that for every θ, there exists an experiment 𝒳^*(θ)∈ X that solves the problem (<ref>) and has affinely independent support. Notice that each type-specific problem in (<ref>) has the same structure as the sender's problem in a Bayesian persuasion model. It is known that solutions exist and can be found using the concavification approach <cit.>. Suppose that a solution to problem (<ref>) has affinely dependent support. We can form a new experiment by dropping some redundant posterior that still preserves the optimality. The proof follows directly from <cit.>. Characterizing incentive-compatible results-based contracts Before the proof of Theorem <ref>, we want to introduce a necessary and sufficient secant hyperplane condition for a results-based contract to be incentive-compatible. This is a generalization of Proposition 2 in <cit.>. Suppose that 𝒳^* is fully dimensional. Recall that we denote the barycentric coordinates of p over S_k as α^k(p). Let u_k,t(p)= t(p)-θ_k c(p) be the type-θ_k researcher's payoff when holding belief p. For the following proofs, define s_k,t(p)=∑_i=1^|Ω|α_i^k (p)[u_k, t(p_i^k)]. By definition, s_k,t represents the affine hyperplane uniquely determined by |Ω| points from {(p,u_k, t(p)): p∈ S_k} and s_k,t equals u_k,t at every p∈ S_k. Suppose that 𝒳^* has full dimension. A results-based contract (𝒳^*,t) is incentive-compatible if and only if the secant hyperplane condition holds: s_k,t(p) ≥ u_k,t(p) ∀ p∈Δ(Ω) and ∀ k ∈{1,2,…, N} () We will prove this direction by contradiction. Suppose that the secant hyperplane condition (<ref>) is not true. There must exist q∈Δ(Ω) such that s_k,t(q) < u_k,t(q) for some k. Let |Ω|=n. Under full dimensionality, write S_k={p^1,p^2,…, p^n}. We want to show that replacing a posterior in S_k by q can form a Bayes-plausible information structure τ, then τ is a profitable deviation for the type-θ_k researcher. 
By Bayes plausibility of 𝒳^*(θ_k), we can find λ=(λ_1,…, λ_n)∈ℝ_++^n with ∑_i=1^nλ_i=1 such that p_0=∑_i=1^nλ_ip^i. Because S_k is affinely independent, we can find μ=(μ_1,…, μ_n)∈ℝ^n with ∑_i=1^nμ_i=1 such that q=∑_i=1^nμ_ip^i. Note that there is μ_i>0 for some i, otherwise we cannot have ∑_i=1^nμ_i=1. Denote j∈min{λ_i/μ_i: μ_i>0}. Because μ_j>0, we can write p^j as an affine combination using q and other elements in S_k. Thus, p^j=1/μ_j(q-∑_i≠ jμ_ip^i). Substituting p^j, we write p_0=∑_i=1^nλ_ip^i as p_0=λ_j/μ_jq+∑_i≠ j(λ_i-μ_i/μ_jλ_j)p^i. We will verify that p_0 is written as a convex combination of points in {q}∪(S_k∖{p^j}). Recall that λ_i>0, ∀ i and μ_j>0. It is trivial that λ_j/μ_j>0. For all i with μ_i<0, λ_i-μ_i/μ_jλ_j>0; for all i≠ j with μ_i>0, λ_i-μ_i/μ_jλ_j≥λ_i-μ_i/μ_iλ_i=0. Next, summing them up, we have λ_j/μ_j+∑_i≠ j(λ_i-μ_i/μ_jλ_j)= λ_j+∑_i≠ jλ_i=1. Define a distribution over posteriors τ such that q occurs with probability λ_j/μ_j and for every i≠ j, p^i occurs with probability λ_i-μ_i/μ_jλ_j. We show that the type-θ researcher gets strictly higher expected payoff by choosing τ instead of 𝒳^*(θ_k): 𝔼_p∼τ[u_k,t(p)] =τ(q) u_k,t(q)+∑_i≠ jτ(p^i) s_k,t(p^i) >τ(q) s_k,t(q)+∑_i≠ jτ(p^i) s_k,t(p^i) =𝔼_p∼τ[s_k,t(p)] =s_k,t(𝔼_p∼τ[p]) =s_k,t(𝔼_p∼𝒳^*(θ_k)[p]) =𝔼_p∼𝒳^*(θ_k)[s_k,t(p)] =𝔼_p∼𝒳^*(θ_k)[u_k,t(p)], where the strict inequality holds by assumption that s_k,t(q) < u_k,t(q); the first and last equalities follow from the definition of s_θ,t, which satisfies s_k,t(p)=u_k,t(p) for all p∈ S_k; the third equality is true because s_k,t(p) is an affine function; and the fourth equality follows from Bayes plausibility, which means 𝔼_p∼τ[p]=𝔼_p∼𝒳^*(θ_k)[p]=p_0. This contradicts with incentive compatibility for the type-θ researcher. () Fix θ_k and take any experiment τ. The researcher's payoff of choosing τ is bounded from the above by s_k,t(p_0), because 𝔼_p∼τ[u_k,t(p)]≤𝔼_p∼τ[s_k,t(p)]=s_k,t(𝔼_p∼τ[p])=s_k,t(p_0), where the inequality follows from condition (<ref>), the first equality holds due to s_k,t being affine and the second equality uses Bayes plausibility. Notice that 𝒳^*(θ_k) attains the payoff s_k,t(p_0), so it is optimal for the type-θ_k researcher. This proves condition (<ref>), so (𝒳^*,t) is incentive-compatible. () Fix a sufficiently small t∈ℝ. We can define a contingent payment rule t^*:Δ(Ω)→ℝ as follows. t^*(p)=θ_1c(p) for p∈ S_1 θ_ic(p)+∑_k=1^i-1(θ_k-θ_k+1)H_k(p) for p∈ S_i and i ∈{2,3,…,N} t otherwise For every k∈{1,2,…,N}, we solve the affine function s_k,t^*(p) associated with t^*: s_1,t^*(p) =0 s_j,t^*(p) =∑_k=1^j-1(θ_k-θ_k+1)H_k(p) ∀ j ∈{2,3,…,N} Notice that by definition, every s_k,t^* is continuous and does not depend on t. Set t=min_q∈Δ(Ω)min_k∈ℕ, 1≤ k≤ N{s_k,t^*(q)+θ_kc(q)}. t is well-defined, because the minimum of a finite number of continuous functions is continuous and Δ(Ω) is compact. Suppose that 𝒳^* is strongly c-monotone. We need to verify two statements: (i) the results-based contract (𝒳^*,t^*) is incentive-compatible, and (ii) t^* induces T^*. (i) In order to show that (𝒳^*,t^*) is incentive-compatible, we will prove that (𝒳^*,t^*) satisfies the secant hyperplane condition (<ref>) in Lemma <ref>. We will verify that at every belief p∈Δ(Ω), s_k,t^*(p)≥ u_k,t^*(p) for all k. ∙ Suppose p∉∪_k=1^NS_k. In this case, t^*(p)=t. By definition, t≤ s_k,t^*(p) +θ_k c(p) holds for all k. We have s_k,t^*(p)≥ t^*(p)-θ_k c(p)≡ u_k,t^*(p), ∀ k. ∙ Suppose p∈∪_k=1^NS_k. 
We first calculate the difference: s_1,t^*(p) - u_1,t^*(p) = 0 for p∈ S_1 (θ_1-θ_j)c(p)-∑_k=1^j-1(θ_k-θ_k+1)H_k(p) for p∈ S_j and j ≥ 2 which is non-negative because by strong c-monotonicity, c(p)≥∑_k=1^j-1θ_k-θ_k+1/θ_1-θ_j H_k(p) holds for any p∈ S_j and j ≥ 2. Thus, we have s_1,t^*(p) ≥ u_1,t^*(p). For j≥ 2, we also find the difference is non-negative: s_j,t^*(p) - u_j,t^*(p) = ∑_k=i^j-1(θ_k-θ_k+1)H_k(p)-(θ_i-θ_j)c(p) for p∈ S_i and all i s.t. 1 ≤ i < j-1 0 for p∈ S_j-1 0 for p ∈ S_j (θ_j-θ_i)c(p)-∑_k=j^i-1(θ_k-θ_k+1)H_k(p) for p∈ S_i and all i > j By strong c-monotonicity, ∑_k=i^j-1(θ_k-θ_k+1)H_k(p)-(θ_i-θ_j)c(p), ∀ p∈ S_i and ∀ i s.t. 1 ≤ i< j-1; (θ_j-θ_i)c(p)-∑_k=j^i-1(θ_k-θ_k+1)H_k(p) and ∀ i s.t. i> j-1. Therefore, we have s_j,t^*(p) ≥ u_j,t^*(p), ∀ j≥ 2. (ii) Next, we want to show that t^* induces the payment function T^*. For i=1: 𝔼_p∼𝒳^*(θ_1)[t^*(p)]=𝔼_p∼𝒳^*(θ_1)[θ_1c(p)]=T^*(θ_1) For i≥ 2: 𝔼_p∼𝒳^*(θ_i)[t^*(p)] =𝔼_p∼𝒳^*(θ_i)[ θ_ic(p)+∑_k=1^i-1(θ_k-θ_k+1)H_k(p)] =θ_i𝔼_p∼𝒳^*(θ_i)[c(p)]+∑_k=1^i-1(θ_k-θ_k+1)𝔼_p∼𝒳^*(θ_i)[H_k(p)] =θ_i𝔼_p∼𝒳^*(θ_i)[c(p)]+∑_k=1^i-1(θ_k-θ_k+1)𝔼_p∼𝒳^*(θ_k)[H_k(p)] =θ_i𝔼_p∼𝒳^*(θ_i)[c(p)]+∑_k=1^i-1(θ_k-θ_k+1)𝔼_p∼𝒳^*(θ_k)[c(p)] =T^*(θ_i) () Suppose that a results-based contract (𝒳^*,t) is incentive-compatible and t induces T^*. We need to show that 𝒳^* must be strongly c-monotone. We claim that such results-based contract must satisfy certain conditions. Next, using the claim, we rewrite the secant hyperplane condition (<ref>). s_k+1,t(p)-s_k,t(p)=(θ_k-θ_k+1)H_k(p), ∀ p∈Δ(Ω) and ∀ k∈{1,2,…,N-1} Firstly, let us consider the type-θ_j researcher does not have incentive to choose posteriors from S_i: s_j,t(p) ≥ u_j,t(p):=t(p)-θ_j c(p), ∀ p∈ S_i By definition, s_i,t(p)=t(p)-θ_ic(p), ∀ p∈ S_i. We can substitute t(p)=s_i,t(p)+θ_ic(p) into the above inequality and get s_j,t(p) - s_i,t(p) ≥(θ_i-θ_j) c(p), ∀ p∈ S_i. Then we express s_j,t(p) - s_i,t(p)=∑_k=i^j-1[s_k+1,t(p) - s_k,t(p)] as the sum of differences between adjacent types and use (<ref>) to rewrite each term. Because θ_i-θ_j>0, dividing it on both sides does not change the sign of the inequality. ∑_k=i^j-1θ_k-θ_k+1/θ_i-θ_jH_k(p)≥ c(p), ∀ p∈ S_i Secondly, consider the type-θ_i researcher does not have incentive to choose posteriors from S_j: s_i,t(p)≥ t(p)-θ_ic(p), ∀ p∈ S_j We can plug in t(p)=s_j,t(p)+θ_j c(p) and write it as (θ_i-θ_j) c(p) ≥ s_j,t(p)-s_i,t(p), ∀ p∈ S_j. Similarly, we get c(p) ≥∑_k=i^j-1θ_k-θ_k+1/θ_i-θ_jH_k(p), ∀ p∈ S_j Step 1. We want to show that achieving the lowest payment implies 𝔼_𝒳^*(θ_k)[s_k+1,t(p)-s_k,t(p)]=(θ_k-θ_k+1)𝔼_𝒳^*(θ_k)[ c(p)]. By assumption that t attains the payment rule T^*, we have 𝔼_𝒳^*(θ)[t(p)]=T^*(θ), ∀θ∈Θ. Because T^* is given by (<ref>) and (<ref>), we have for k ∈{1,2,…,N-1}, 𝔼_𝒳^*(θ_k+1)[t(p)]-θ_k+1𝔼_𝒳^*(θ_k+1)[c(p)]=𝔼_𝒳^*(θ_k)[t(p)]-θ_k+1𝔼_𝒳^*(θ_k)[c(p)] By the way s_k,t is defined, s_k,t(p)=t(p)-θ_kc(p) at p ∈ S_k, ∀ k∈{1,2,…,N}. Therefore, we can write 𝔼_𝒳^*(θ_k+1)[t(p)]=𝔼_𝒳^*(θ_k+1)[s_k+1,t(p)]+θ_k+1𝔼_𝒳^*(θ_k+1)[c(p)] and 𝔼_𝒳^*(θ_k)[t(p)]=𝔼_𝒳^*(θ_k)[s_k,t(p)]+θ_k𝔼_𝒳^*(θ_k)[c(p)]. Plugging these back and simplifying, we get 𝔼_𝒳^*(θ_k+1)[s_k+1,t(p)]=𝔼_𝒳^*(θ_k)[s_k,t(p)]+θ_k𝔼_𝒳^*(θ_k)[c(p)]-θ_k+1𝔼_𝒳^*(θ_k)[c(p)] By s_k+1,t being affine and 𝔼_𝒳^*(θ_k+1)[p]=𝔼_𝒳^*(θ_k)[p]=p_0, 𝔼_𝒳^*(θ_k+1)[s_k+1,t(p)]=𝔼_𝒳^*(θ_k)[s_k+1,t(p)]. Therefore, we can write 𝔼_𝒳^*(θ_k)[s_k+1,t(p)-s_k,t(p)]=𝔼_𝒳^*(θ_k)[(θ_k-θ_k+1) c(p)] Step 2. Next, we will show that the following condition holds: s_k+1,t(p)-s_k,t(p)=(θ_k-θ_k+1) c(p), ∀ p∈ S_k We prove this by contradiction. 
Suppose that it is not true. Then, because the two sides have equal expectations under 𝒳^*(θ_k) by Step 1 and every point of S_k receives strictly positive probability, there exists some q∈ S_k such that s_k+1,t(q)-s_k,t(q)<(θ_k-θ_k+1)c(q). Because s_k,t(p)=t(p)-θ_kc(p), ∀ p∈ S_k, we can write the inequality s_k+1,t(q)-[t(q)-θ_kc(q)]<(θ_k-θ_k+1)c(q) equivalently as s_k+1,t(q)<u_k+1,t(q), which contradicts incentive compatibility for the type-θ_k+1 researcher in (<ref>). Step 3. Following Step 2, the affine function s_k+1,t(p) is also determined by the |Ω| points in {(p,u_k+1,t(p)): p∈ S_k}, so s_k+1,t(p)=∑_i=1^|Ω|α_i^k[u_k+1,t(p_i^k)]. By definition, s_k,t(p)=∑_i=1^|Ω|α_i^k[u_k,t(p_i^k)]. Moreover, u_k+1,t(p)-u_k,t(p)=[t(p)-θ_k+1c(p)]-[t(p)-θ_kc(p)]=(θ_k-θ_k+1)c(p). We can therefore rewrite the difference as s_k+1,t(p)-s_k,t(p)=(θ_k-θ_k+1)∑_i=1^|Ω|α_i^k[c(p_i^k)]=(θ_k-θ_k+1) H_k(p), which holds for all p∈Δ(Ω). Statement 1. A sufficient condition for strong c-monotonicity. Consider i, j∈{1,2,…,N} with i<j. Because θ_1>θ_2>…>θ_N>0, for k∈{i,i+1,…,j-1}, θ_k-θ_k+1/θ_i-θ_j is positive and satisfies ∑_k=i^j-1θ_k-θ_k+1/θ_i-θ_j=1. Fix p∈ S_i. By definition of H_i, we have c(p)= H_i(p). By condition (<ref>), for k∈{i+1,i+2,…,j-1}, we have c(p)≤ H_k(p). Notice that multiplying both sides by θ_k-θ_k+1/θ_i-θ_j does not change the direction of the inequality. Summing over k, we obtain the inequality c(p)≤∑_k=i^j-1θ_k-θ_k+1/θ_i-θ_jH_k(p). Similarly, fix p∈ S_j. By condition (<ref>), for k∈{i+1,i+2,…,j-1}, we have c(p)≥ H_k(p). Summing over k, we obtain c(p)≥∑_k=i^j-1θ_k-θ_k+1/θ_i-θ_jH_k(p). Statement 2. A necessary condition for strong c-monotonicity. Fix k∈{1,2,…, N-1}. Setting i=k and j=k+1 in the definition of strong c-monotonicity, we have c(p)≥ H_k(p), ∀ p∈ S_k+1. Next, we will show that c(p)≤ H_k(p), ∀ p∈ S_k-1. * If k>1, we let i=k-1 and j=k+1. Strong c-monotonicity yields c(p)≤θ_k-1-θ_k/θ_k-1-θ_k+1H_k-1(p)+θ_k-θ_k+1/θ_k-1-θ_k+1H_k(p), ∀ p∈ S_k-1. By definition, H_k-1(p)=c(p), ∀ p∈ S_k-1. Replacing H_k-1(p) with c(p) on the right-hand side and simplifying, we get c(p)≤ H_k(p), ∀ p∈ S_k-1. * If k=1, c(p)≤ H_k(p), ∀ p∈ S_k-1 holds vacuously, because we put S_0=∅. We will restrict our attention to experiment choice functions that have full dimension for our comparison, as strong c-monotonicity is only defined for this class of functions. Statement 1. Firstly, we want to show that strong c-monotonicity implies c-monotonicity. By Proposition <ref>, strong c-monotonicity implies condition (<ref>). It follows that for all k∈{1,2,…,N-1}, c(p)≥ H_k(p), ∀ p∈ S_k+1. For every k, we have 𝔼_p∼𝒳^*(θ_k+1)[c(p)]≥ 𝔼_p∼𝒳^*(θ_k+1)[H_k(p)] = H_k(𝔼_p∼𝒳^*(θ_k+1)[p]) = H_k(𝔼_p∼𝒳^*(θ_k)[p]) = 𝔼_p∼𝒳^*(θ_k)[H_k(p)] = 𝔼_p∼𝒳^*(θ_k)[c(p)], where the first and third equalities follow from H_k being affine, the second equality holds because 𝔼_p∼𝒳^*(θ_k+1)[p]=𝔼_p∼𝒳^*(θ_k)[p]=p_0, and the last equality is true because, by definition, H_k(p)=c(p), ∀ p ∈ S_k. Recall that we write 𝔼_p∼𝒳^*(θ_k)[c(p)] succinctly as C(𝒳^*(θ_k)). Combining all inequalities, we get C(𝒳^*(θ_N))≥ C(𝒳^*(θ_N-1))≥⋯≥ C(𝒳^*(θ_1)), meaning that 𝒳^* is c-monotone. Next, we want to show that c-monotonicity does not imply strong c-monotonicity. An example will suffice. In the following, we discuss the cases of |Ω|=2 and |Ω|>2. ∙ For |Ω|=2, we claim that strong c-monotonicity is equivalent to Blackwell monotonicity (see the proof of Statement 2 below). Any experiment choice function that is c-monotone but involves Blackwell-incomparable experiments would work. ∙ For |Ω|>2, we provide a specific example of an experiment choice function that is c-monotone but not strongly c-monotone. Define six posterior beliefs as follows.
q_1=(0.5770,0.0000,0.4230), q_1^'=(0.0001,0.9998,0.0001), q_1^''=(0.0002,0.0008,0.9990), q_2=(0.6799, 0.0001, 0.3200), q_2^'=(0.0005, 0.9993, 0.0002), q_2^''=(0.0004, 0.0170, 0.9827). 𝒳(θ_1) is a distribution over posteriors such that q_1 occurs with probability 0.58, q_1^' occurs with probability 0.33, and q_1^'' occurs with probability 0.09. 𝒳(θ_2) is a distribution over posteriors such that q_2 occurs with probability 0.49, q_2^' occurs with probability 0.33, and q_2^'' occurs with probability 0.18. 𝒳 has full dimension. Consider a quadratic cost function c(p)=(p(ω_1)-1/3)^2+(p(ω_2)-1/3)^2+(p(ω_3)-1/3)^2. Firstly, 𝒳 is c-monotone, because C(𝒳(θ_1))=0.3832<0.4468=C(𝒳(θ_2)). Secondly, 𝒳 is not strongly c-monotone. With two types, strong c-monotonicity is equivalent to c(p)≥ H_1(p), ∀ p∈supp(𝒳(θ_2)). We find q_2^''∈supp(𝒳(θ_2)) but c(q_2^'')< H_1(q_2^''), so 𝒳 is not strongly c-monotone (a numerical verification is given below). As preparation for proving Statements 2 and 3, we state a geometric version of Blackwell's theorem in <cit.> that holds for any finite state space. (<cit.>) For τ, τ^'∈ X such that τ^' has affinely independent support, τ^' is a mean-preserving spread of τ if and only if supp(τ)⊆conv(supp(τ^')). Statement 2. For |Ω|=2: We will first show D_k=conv(S_k) and then use Proposition <ref> and Lemma <ref> to connect strong c-monotonicity with Blackwell monotonicity. Note that by strict convexity of c, conv(S_k)⊆ D_k. We will prove D_k⊆conv(S_k) by contradiction. Suppose there is q∈Δ(Ω) such that q∈ D_k and q∉conv(S_k). Specifically, write S_k={p_k,p_k^'}. By affine independence of S_k, there exists λ∈ℝ such that q=λ p_k+(1-λ)p_k^'. Because q∉conv(S_k), λ∉[0,1]. * If λ<0, then 1-λ>0. From q=λ p_k+(1-λ)p_k^' we get (1-λ)p_k^'=q-λ p_k, so p_k^'=1/1-λ q + -λ/1-λ p_k, and hence p_k^'∈conv({q,p_k}). We reach a contradiction, since c(p_k^')<1/1-λ c(q)+-λ/1-λ c(p_k)≤1/1-λ H_k(q)+-λ/1-λ c(p_k) =1/1-λ H_k(q)+-λ/1-λ H_k(p_k)= H_k(p_k^')=c(p_k^'), where the strict inequality follows from the strict convexity of c, the weak inequality is due to q∈ D_k, the first and third equalities hold because H_k(p)=c(p) at every p∈ S_k, and the second equality holds because H_k is affine. * If λ>1, then 1-λ<0. From q=λ p_k+(1-λ)p_k^' we get λ p_k=q-(1-λ)p_k^', so p_k=1/λ q + -(1-λ)/λ p_k^', and hence p_k∈conv({q,p_k^'}). Similarly, we reach a contradiction because c(p_k)<1/λ c(q) + -(1-λ)/λ c(p_k^') ≤1/λ H_k(q) + -(1-λ)/λ c(p_k^')=c(p_k). Next, we will verify that (i) Blackwell monotonicity is implied by the necessary condition for strong c-monotonicity and (ii) Blackwell monotonicity implies the sufficient condition for strong c-monotonicity. Thus, all of these are equivalent. (i) By condition (<ref>) and D_k=conv(S_k), S_k-1⊆conv(S_k), ∀ k∈{1,2,…, N-1}. This implies Blackwell monotonicity, because the Blackwell order is transitive. (ii) By Blackwell monotonicity, S_j⊆conv(S_i), ∀ i,j∈{1,2,…, N} with j<i. Because D_k=conv(S_k), we immediately have S_i⊆ D_k, ∀ i<k. The rest of the proof is to show S_i⊆ U_k, ∀ i>k. We will prove this by contradiction. Write S_i={p_i,p_i^'}. Suppose that p_i∉U_k. It follows that p_i∈ D_k=conv(S_k) and p_i∉S_k. Thus, there is λ∈(0,1) such that p_i=λ p_k+(1-λ) p_k^'. By affine independence of S_k, there is μ∈ℝ with μ≠λ such that p_i^'=μ p_k+(1-μ) p_k^'. We can write p_k and p_k^' as affine combinations of p_i and p_i^': [ p_i; p_i^' ] = [ λ 1-λ; μ 1-μ ][ p_k; p_k^' ], and hence [ p_k; p_k^' ] = [ 1-μ/λ-μ λ-1/λ-μ; -μ/λ-μ λ/λ-μ ][ p_i; p_i^' ]. By affine independence of S_i, the representation of an affine combination is unique. Because λ(λ-1)<0, we can never have both p_k and p_k^' belong to conv(S_i). However, this contradicts the fact that S_k⊆conv(S_i) for i>k.
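For completeness, the claims made in the six-posterior example above can be checked numerically. The short Python sketch below is our own illustration (the helper names do not appear in the paper): it recomputes C(𝒳(θ_1)) and C(𝒳(θ_2)) under the quadratic cost and compares c(q_2^'') with H_1(q_2^''), where H_1 is the affine function agreeing with c on the support of 𝒳(θ_1).

```python
import numpy as np

# Posteriors supporting X(theta_1) and X(theta_2) in the example above.
S1 = np.array([[0.5770, 0.0000, 0.4230],
               [0.0001, 0.9998, 0.0001],
               [0.0002, 0.0008, 0.9990]])
w1 = np.array([0.58, 0.33, 0.09])
S2 = np.array([[0.6799, 0.0001, 0.3200],
               [0.0005, 0.9993, 0.0002],
               [0.0004, 0.0170, 0.9827]])
w2 = np.array([0.49, 0.33, 0.18])

def c(p):
    """Quadratic cost c(p) = sum_w (p(w) - 1/3)^2."""
    return np.sum((p - 1.0 / 3.0) ** 2, axis=-1)

# c-monotonicity: the more efficient type theta_2 chooses the costlier experiment.
C1, C2 = w1 @ c(S1), w2 @ c(S2)
print(f"C(X(theta_1)) = {C1:.4f},  C(X(theta_2)) = {C2:.4f}")   # ~0.383 < ~0.447

# H_1 is the affine function agreeing with c on the affinely independent set S1;
# writing H_1(p) = a . p (affine on the simplex), solve for a from the 3 support points.
a = np.linalg.solve(S1, c(S1))
H1 = lambda p: p @ a

# Strong c-monotonicity (two types) requires c(p) >= H_1(p) on supp(X(theta_2)).
q2pp = S2[2]
print(f"c(q_2'') = {c(q2pp):.4f} < H_1(q_2'') = {H1(q2pp):.4f}")  # violated, so not strongly c-monotone
```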
Statement 3. For |Ω|>2: Example <ref> shows that for a fixed c, strong c-monotonicity does not guarantee Blackwell monotonicity. In the other direction, it is possible to find a Blackwell-monotone choice function that violates strong c-monotonicity for some properly chosen function c. (The idea is to draw a convex set D_1 that includes one of the posteriors in S_2.) § RELATION TO A GENERAL RESULT IN YODER (2022) Proposition 4 in <cit.> contains a result that holds for any finite state space. Firstly, it provides a necessary condition for incentive compatibility in (<ref>). Secondly, it can be applied to strengthen our Proposition <ref>. Recall that the optimal experiment choice function can be obtained from solving a set of type-specific problems (<ref>). The objective associated with a more efficient type is more convex than the one associated with a less efficient type, in the sense that their difference is strictly convex. It follows from <cit.> that ∀ i<j:  S_j∩conv(S_i) ⊆ext(conv(S_i)), where ext(conv(S_i)) represents the set of extreme points of conv(S_i). In words, all of the results produced by a more efficient researcher (with type θ_j) must be at least as extreme as any result produced by a less efficient researcher (with type θ_i). Also, it is known that this ordering is equivalent to the Blackwell ordering when |Ω|=2, but in general it is neither stronger nor weaker. Under the full dimension assumption, we have a result showing that strong c-monotonicity is strictly stronger than condition (<ref>). Suppose that |Θ|=2. Strong c-monotonicity implies (<ref>), while the converse is not true. Assuming that strong c-monotonicity holds, i.e., S_2⊆ U_1, we have S_2∩conv(S_1) ⊆ U_1∩conv(S_1)=S_1=ext(conv(S_1)). Conversely, Example <ref> shows that, given the quadratic function c, the experiment choice function is not strongly c-monotone. However, it satisfies condition (<ref>), because S_2∩conv(S_1)=∅. To emphasize, strong c-monotonicity depends on the cost function c while the ordering of experiments in (<ref>) does not. In particular, an experiment choice function satisfying condition (<ref>) alone need not deliver the outcome equivalence between results-based contracting and methods-based contracting.
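To illustrate the sufficiency construction in the proof above, the following self-contained Python sketch builds the contingent payment rule t^* for a two-state, two-type example, checks the secant hyperplane condition s_k,t^*(p)≥ u_k,t^*(p) on a grid, and verifies the induced expected payments. The specific numbers (θ_1=2, θ_2=1, the supports, and the quadratic cost) are our own illustrative choices and are not taken from the paper.

```python
import numpy as np

# Two types theta_1 > theta_2 > 0 (theta_2 is the more efficient type).
theta = [2.0, 1.0]

# Strictly convex posterior-separable cost; posterior = Pr(state 1), prior = 0.5.
c = lambda p: (p - 0.5) ** 2

# Target experiment choice: supports and weights (both Bayes-plausible for prior 0.5).
S1, w1 = np.array([0.3, 0.8]), np.array([0.6, 0.4])   # X*(theta_1)
S2, w2 = np.array([0.1, 0.9]), np.array([0.5, 0.5])   # X*(theta_2), Blackwell more informative

# H_1: affine function agreeing with c on S1 (a line through the two support points).
slope = (c(S1[1]) - c(S1[0])) / (S1[1] - S1[0])
H1 = lambda p: c(S1[0]) + slope * (p - S1[0])

# s_{k,t*} as solved in the proof: s_1 = 0, s_2 = (theta_1 - theta_2) * H_1.
s = [lambda p: 0.0 * p, lambda p: (theta[0] - theta[1]) * H1(p)]

# Off-support payment (the "t" in the proof): min over beliefs and types of s_k + theta_k c.
grid = np.linspace(0.0, 1.0, 1001)
t_low = min(np.min(s[k](grid) + theta[k] * c(grid)) for k in range(2))

def t_star(p):
    """Results-based payment rule t* from the proof, specialized to two types."""
    if np.any(np.isclose(p, S1)):
        return theta[0] * c(p)
    if np.any(np.isclose(p, S2)):
        return theta[1] * c(p) + (theta[0] - theta[1]) * H1(p)
    return t_low

# Secant hyperplane condition: s_k(p) >= u_k(p) = t*(p) - theta_k c(p) at every belief checked.
check_points = np.concatenate([grid, S1, S2])
ok = all(s[k](p) + 1e-12 >= t_star(p) - theta[k] * c(p)
         for k in range(2) for p in check_points)
print("secant hyperplane condition holds:", ok)

# Induced expected payments: T*(theta_1) = theta_1 C(X*(theta_1)) and
# T*(theta_2) = theta_2 C(X*(theta_2)) + (theta_1 - theta_2) C(X*(theta_1)).
T1 = w1 @ np.array([t_star(p) for p in S1])
T2 = w2 @ np.array([t_star(p) for p in S2])
print(T1, theta[0] * (w1 @ c(S1)))
print(T2, theta[1] * (w2 @ c(S2)) + (theta[0] - theta[1]) * (w1 @ c(S1)))
```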
http://arxiv.org/abs/2307.04234v1
20230709172453
Extreme N-emitters at high-redshift: signatures of supermassive stars and globular cluster or black hole formation in action?
[ "R. Marques-Chaves", "D. Schaerer", "A. Kuruvanthodi", "D. Korber", "N. Prantzos", "C. Charbonnel", "A. Weibel", "Y. I. Izotov", "M. Messa", "G. Brammer", "M. Dessauges-Zavadsky", "P. Oesch" ]
astro-ph.GA
[ "astro-ph.GA" ]
Observatoire de Genève, Université de Genève, Chemin Pegasi 51, 1290 Versoix, Switzerland CNRS, IRAP, 14 Avenue E. Belin, 31400 Toulouse, France Institut d'Astrophysique de Paris, UMR 7095 CNRS, Sorbonne Université, 98bis, Bd Arago, 75014 Paris, France Bogolyubov Institute for Theoretical Physics, National Academy of Sciences of Ukraine, 14-b Metrolohichna str., Kyiv, 03143, Ukraine The Oskar Klein Centre, Department of Astronomy, Stockholm University, AlbaNova, SE-10691 Stockholm, Sweden Cosmic Dawn Center (DAWN), Niels Bohr Institute, University of Copenhagen, Jagtvej 128, København N, DK-2200, Denmark Marques-Chaves et al. Extreme N-emitters at high-redshift Recent JWST spectroscopic observations of the z=10.6 galaxy GN-z11 have revealed a very peculiar UV spectrum showing intense emission lines of nitrogen, which are generally not detected in galaxy spectra. This observation indicates a super-solar N/O abundance ratio at low metallicity, resembling only the abundances seen in globular cluster (GC) stars. This discovery suggests that we might be seeing proto-GCs in formation or possibly even signatures of supermassive stars. To examine if other objects with strong N iv and/or Niii emission lines (N-emitters, hereafter) exist and to better understand their origin and nature, we have examined available JWST spectra and data from the literature. Using the NIRSpec/JWST observations from CEERS we found an extreme N-emitter, at z=8.6782 showing intense and emission. From the observed rest-UV and optical lines we conclude that it is compatible with photoionization from stars and we determine accurate abundances for C, N, O, and Ne, relative to H. We also (re-)analyze other N-emitters from the literature, including three lensed objects at z=2.3-3.5 (the Sunburst cluster, SMACS2031, and Lynx arc) and a low-redshift compact galaxy, Mrk 996. We compare the observed abundance ratios to observations from normal star-forming galaxies, predicted wind yields from massive stars and predictions from supermassive stars (SMS with ∼ 10^4-10^5 ). For we find a highly supersolar ratio log( N/O)=-0.18 ± 0.11, and abundances of log( C/O)= -0.75 ± 0.11 and log( Ne/O)= -0.63 ± 0.07, which are normal compared to other galaxies at the low metallicity (= 7.70 ± 0.18) of this galaxy. The three lensed N-emitters also show strongly enhanced N/O ratios and two of them normal C/O. The high N/O abundances can be reproduced by massive star winds assuming a special timing and essentially no dilution with the ambient ISM. Alternatively, these N/O ratios can be explained by mixing the ejecta of SMS with comparable amounts of unenriched ISM. Massive star ejecta (from WR stars) are needed to explain the galaxies with enhanced C/O (Lynx arc, Mrk 996). On the other hand, SMS in the “conveyer-belt model” put forward to explain globular clusters, predict a high N/O and small changes in C/O, compatible with , the Sunburst cluster, SMACS2031, and GN-z11. Based on the chemical abundances, possible enrichment scenarios and other properties, such as their compactness and high ISM density, we discuss which objects could contain proto-GCs. We suggest that this is the case for , SMACS2031, and the Sunburst cluster. Enrichment in the Lynx arc and Mrk 996 is likely due to normal massive stars (WR), which implies that the star-forming regions in these objects cannot become GCs. Finally, we propose that some N-emitters enriched by SMS could also have formed intermediate mass black holes, and we suggest that this might be the case for GN-z11. 
Our observations and analysis reinforce the suggested link between some N-emitters and proto-GC formation, which is supported both by empirical evidence and quantitative models. Furthermore, the observations provide possible evidence for the presence of supermassive stars in the early Universe (z>8) and at z ∼ 2-3. Our analysis also suggests that the origin and nature of the N-emitters is diverse, including also objects like GN-z11 which possibly host an AGN. Extreme N-emitters at high-redshift: signatures of supermassive stars and globular cluster or black hole formation in action ? R. Marques-Chaves1, D. Schaerer1,2, A. Kuruvanthodi1, D. Korber1, N. Prantzos3, C. Charbonnel1,2, A. Weibel1, Y. I. Izotov4, M. Messa1,5, G. Brammer6, M. Dessauges-Zavadsky1, P. Oesch1,6 Received date; accepted date ======================================================================================================================================================================================================== § INTRODUCTION Long known as the most distant spectroscopically-confirmed galaxy <cit.>, GN-z11 has recently lead to new exciting and intriguing results, after the first spectra of this galaxy were obtained with the JWST. Indeed, the JWST/NIRSpec observations of <cit.> allowed to confirm a very high redshift of this source (z=10.60) and showed the presence of hydrogen, carbon, oxygen, magnesium, and neon emission lines in the rest-UV and rest-optical spectrum, often seen in star-forming galaxies at low-redshift and detected at z ∼ 4-8 in other JWST spectra <cit.>. Most surprisingly, however, the spectrum of GN-z11 revealed the presence of strong and lines <cit.>, which are very rarely detected in galaxies <cit.>. Furthermore, the object is found to be very compact <cit.>, which could indicate the presence of massive compact star clusters or point to an active galactic nucleus (AGN) <cit.>. The discovery of the peculiar emission line spectrum has triggered a series of papers discussing in particular their origin and the nature of GN-z11. <cit.> first suggested that the strong N emission lines may imply an unusually high N/O abundance. They also discussed whether the emission would be powered by star formation or photoionization from an AGN, without reaching clear conclusions on this issue. The quantitative analysis of the emission line spectrum of GN-z11 by <cit.> confirmed the high N/O abundance, with a lower limit of four times solar, finding also possibly a less extreme C/O ratio, and a metallicity (O/H), which is sub-solar, although not well constrained. Using a suite of photoionization models, <cit.> inferred the N/O abundance with a lower uncertainty and constrained the metallicity to = 7.84^+0.06_-0.05, confirming in particular a large overabundance of N/O ≈ 3 × solar. The finding of an exceptionally high N/O abundance at low metallicity (typically ten times the normal N/O value at this O/H) has triggered different speculations about the sources and processes explaining this enrichment. The scenarii discussed include enrichment from massive stars winds (WR stars) or AGB stars, i.e. relatively “classical scenarii”, or more “exotic” options such as pollution from PopIII star-formation, tidal disruption of stars from encounters with black holes, ejecta from very massive stars formed through collisions in dense clusters, and supermassive stars <cit.>. 
Supermassive stars, for example, have been invoked by <cit.> and <cit.> since very strong enrichment of N and low metallicity is difficult to explain and requires fairly fined-tuned conditions with classical scenarios <cit.>. Furthermore, such stars (with masses 1000 ) have been proposed to form by runaway collisions in very dense stellar clusters, and they could explain the long-standing problem of multiple stellar populations and peculiar abundance patterns observed in globular clusters (GC), as discussed by <cit.> and <cit.>. If correct, this would probably represent the first observational evidence of supermassive stars, which are also of great interest, for example for understanding the seeds of supermassive black holes <cit.> . Not only the abundance ratios observed in GN-z11 resemble those of GC stars. Its compactness and high ISM density also indicate conditions expected in young very massive clusters, which could be proto-GCs <cit.>. GN-z11 might thus also be the first high-redshift object where the long sought-for peculiar abundance patterns characterizing GCs are observed <cit.>. These exciting and surprising findings obviously pose the question of the uniqueness of GN-z11, beg for more examples, and call for a better understanding of similar objects, if they exist. Indeed, although very rare, other galaxies showing emission lines of or in the UV (referred to as N-emitters subsequently) are known, as pointed out by <cit.> and found in the compilation of <cit.>. Apart from objects clearly identified as AGN, the Lynx arc, a lensed z=3.36 galaxy identified for and emission is probably the first N-emitter studied in detail <cit.>. From photoionization modeling <cit.> derive a high N/O ratio and sub-solar metallicity. Another strongly lensed object at z=2.37, the multiply-imaged compact star cluster in the Sunburst arc which has extensively been studied in recent years <cit.>, shows emission, as shown in the high S/N spectrum of <cit.>. <cit.> have shown that N/O is also elevated (∼ 4 × solar) at a metallicity ∼ 1/5 solar. Finally, in the low-redshift Universe, Mrk 996 uniquely stands out as the only galaxy showing strong emission in the UV <cit.>, and this blue compact dwarf galaxy has long been known as very peculiar, showing e.g. a high electron density, the presence of strong emission lines from WR stars in the optical, and a high N/O abundance, at least in its core <cit.>. Here we present a detailed analysis of the z=8.68 galaxy observed with NIRSpec/JWST by the public CEERS survey <cit.>. This object has previously been studied by several authors <cit.>, but none of these have analysed the carbon and nitrogen abundance and its rest-UV spectrum. Only very recently, <cit.> have analyzed the UV spectrum in detail. Similarly to GN-z11, this galaxy exhibits a very peculiar rest-UV spectrum, making it clearly an N-emitter. Showing numerous emission lines of H, C, N, O, Ne, and the auroral line, it allows us to accurately determine the chemical abundances of these elements and offers thus a unique opportunity to study the second N-emitter in the early Universe and to enlarge the sample of these rare objects. We also analyze the other known N-emitters and compare their properties to those of and GN-z11. Finally, we confront the observed abundance patterns with predictions from normal massive stars and with predicted enrichment patterns from supermassive stars. The paper is structured as follows. In Sect. 
<ref> we describe the observational data, reduction, and measurements used in this work. We then discuss the nature of the ionizing source of (Sect. <ref>). The chemical abundances and other physical properties of are derived in Sect. <ref>. In Sect. <ref> we compare the abundance ratios of to other N-emitters and normal star-forming galaxies, and we present different chemical enrichment scenarios to explain them. We also discuss the possible link between and proto-GCs. The main results of our work are summarized in Sect. <ref>. Throughout this work, we assume concordance cosmology with Ω_ m = 0.274, Ω_Λ = 0.726, and H_0 = 70 Mpc^-1. § CEERS-1019: A NEW STRONG N EMITTER AT HIGH REDSHIFT CEERS-1019 (α, δ [J2000] = 215.0354^∘, 52.8907^∘) was initially identified as a z_ phot≃ 8.6 dropout galaxy by <cit.> and spectroscopically confirmed at z_ spec = 8.683 by <cit.> through strong Lyα emission (see also and ). It is one of the most distant Lyα emitter known and is thought to reside in an over-dense region and ionized bubble boosting substantially its Lyα transmission <cit.>. <cit.> also report a tentative detection of N v λ 1240 (4.6σ), suggesting a hard ionizing spectrum of this source. Recently, much deeper spectroscopy of CEERS-1019 was reported and analyzed by <cit.>, <cit.>, and <cit.> using NIRSpec, along with NIRCam and MIRI imaging. Although with some discrepancies, these works derived important physical properties of CEERS-1019 such as its stellar mass (log(M_⋆/M_⊙≃ 8.7-10.1), gas-phase metallicities (≃ 7.6-8.0), ionizing indicators (e.g., O32 ≃ 13-18), among others. Interestingly, <cit.> reported a tentative (2.5σ) detection of a broad component in Hβ that could be related to AGN activity (the presence of an AGN will be further discussed in Section <ref>). Here, we re-analyze the available JWST data of CEERS-1019. §.§ JWST NIRSpec and NIRCam observations JWST/NIRSpec spectra are available for CEERS-1019 as part of the Cosmic Evolution Early Release Science (CEERS[<https://ceers.github.io/>]; ) program. These observations include both low-resolution PRISM and medium-resolution grating (G140M/F100LP, G235M/F170LP, and G395M/F290LP), providing spectral resolution of R≃ 100 and R≃ 1000, respectively, and a spectral coverage ≃ 1-5μm. Standard 3-shutter slits and a 3-point nodding pattern were used. The total exposure time for each medium-resolution grating was 3107 seconds, split into three individual exposures of 14 groups each. Deeper observations were obtained with the low-resolution PRISM, with a total exposure time of 6214 seconds. Both PRISM and medium-resolution observations were obtained with an aperture position angle PA ≃ 89.32 deg (see Figure <ref>). Data reduction was performed using the official JWST pipeline [<https://jwst-pipeline.readthedocs.io/>] for Level 1 data products and msaexp[<https://github.com/gbrammer/msaexp>] for Levels 2 and 3. Bias and dark current are subtracted followed by the correction of the 1/f noise and the “snowball” events. We use the calibration reference data system (CRDS) context jwst_1063.pmap to correct spectra for flat-field and implement the wavelength and photometric calibrations. 2D spectra of each slitlet are then drizzle-combined and the background is subtracted following the three-shutter dither pattern. Finally, 1D spectra are extracted using the inverse-variance weighted kernel following <cit.>. Figure <ref> shows the NIRSpec spectra of CEERS-1019. 
CEERS-1019 was also observed with JWST/NIRCam with the F115W, F150W, F200W, F277W, F356W, F410M, and F444W filters with exposure times of ∼ 3000 seconds <cit.>. NIRCam images were reduced using the grizli reduction pipeline <cit.>, which includes procedures for masking the “snowball” artifacts and minimizing the impact of 1/f noise. Photometry of is performed using SExtractor <cit.> in dual mode. For each NIRCam filter, we use the point-spread functions (PSFs) provided by G. Brammer within the grizli PSF library,[ <https://github.com/gbrammer/grizli-psf-library> ] which are based on models from webbpsf <cit.>. Images are then PSF-matched to F444W, which has the largest PSF within the NIRCam filters. We measure the flux of in each filter using a circular aperture of 0.16^'' radius (4 pix) and apply an aperture correction derived in F444W using the "FLUX_AUTO" measured in a Kron-like aperture with default Kron parameters. Then, we scale all fluxes to total fluxes based on the encircled energy of the circularized Kron aperture on the F444W PSF from webbpsf (see Weibel et al. in prep. for more details) As shown in the bottom left panel of Figure <ref>, shows a complex morphology with three compact clumps. §.§ Emission line measurements As shown in Figure <ref>, presents intense nebular emission in the rest-frame UV and optical. As a first step, we determine the systemic redshift of using well-detected (≥ 10σ) and uncontaminated (i.e., not blended) emission lines detected in the G395M spectrum. Using the centroids of , , , and we derive the mean value and scatter of z_ sys=8.6782 ± 0.0006. Several rest-frame UV lines are detected with high significance (≥ 5σ) in the deep PRISM spectrum, such as [Unless stated otherwise, refers to the sum of the forbidden [N iv] λ 1483 and the semi-forbidden N iv] λ 1486 lines, which are not resolved in the Prism spectrum.], , , and . This contrasts with the shallower medium-resolution G140M spectrum that shows only Lyα and N iv] at ≥ 3σ. Thus we use the much higher signal-to-noise ratio (S/N) PRISM spectrum to measure the fluxes of the rest-frame UV lines. We fit simultaneously several Gaussian profiles to account for the emission of N iv], C iv, O iii], N iii], and C iii], and a power-law in the form of f_λ∝λ^β to fit the continuum level between 1.3-2.1 μm (λ_0≃ 1300-2200Å). Since these lines are not resolved in the PRISM spectrum,[N iv] λ 1486 presents an observed line width FWHM = 394 ± 95 km s^-1 in the medium-resolution G140M spectrum.] we fixed the line widths of each line to the expected instrumental resolution at their corresponding wavelengths (R≃ 30-45)[<https://jwst-docs.stsci.edu/jwst-near-infrared-spectrograph/nirspec-instrumentation/nirspec-dispersers-and-filters>]. We repeat the fit 500 times while bootstrapping the spectrum according to its 1σ error, and consider the standard deviation of each parameter as its 1σ uncertainty. Table <ref> summarizes our flux measurements. Along with Lyα, N iv] is found to be the strongest emission line in the rest-UV, stronger than C iv and C iii] by a factor ≃ 1.8 and ≃ 1.5, respectively. We also infer a steep UV slope of β_ UV^ spec = -2.11 ± 0.09 from the spectrum, which is consistent with the photometric one (β_ UV^ phot = -2.11 ± 0.15) using the apparent magnitudes in the F150W and F200W filters (F150W=25.25± 0.08 and F200W=25.29 ± 0.07). Flux measurements of rest-optical lines are obtained using the G395M spectrum, which presents a similar depth as the PRISM spectrum but with a much higher resolution. 
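The fitting strategy just described — Gaussian line profiles on top of a power-law continuum, with parameter uncertainties estimated by repeatedly refitting noise-perturbed (bootstrapped) versions of the spectrum — can be sketched as follows. This is only a schematic, single-line illustration with synthetic arrays (the array values, the model function, and the helper names are ours), not the pipeline used for the CEERS data, which fits several lines simultaneously and fixes the PRISM line widths to the instrumental resolution.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def model(lam, norm, beta, amp, mu, sigma):
    """Power-law continuum f_lambda ~ lambda^beta plus a single Gaussian emission line."""
    cont = norm * (lam / 1.5e4) ** beta
    line = amp * np.exp(-0.5 * ((lam - mu) / sigma) ** 2)
    return cont + line

# Placeholder arrays standing in for the extracted 1D spectrum and its 1-sigma error
# (observed wavelength in Angstrom, f_lambda in arbitrary units).
lam = np.linspace(1.40e4, 1.48e4, 200)
err = np.full_like(lam, 0.05)
flux = model(lam, 1.0, -2.1, 0.8, 1.437e4, 30.0) + rng.normal(0.0, err)

p0 = [1.0, -2.0, 0.5, 1.437e4, 30.0]                  # initial guesses
best, _ = curve_fit(model, lam, flux, p0=p0, sigma=err)

# Bootstrap: perturb the spectrum within its 1-sigma error, refit, and take the
# scatter of each quantity as its uncertainty (done 500 times, as for the PRISM fits).
samples = []
for _ in range(500):
    perturbed = flux + rng.normal(0.0, err)
    try:
        popt, _ = curve_fit(model, lam, perturbed, p0=best, sigma=err)
        samples.append(popt)
    except RuntimeError:                              # skip rare non-converged realizations
        continue
samples = np.array(samples)

# Integrated flux of a Gaussian line: sqrt(2 pi) * amplitude * sigma.
line_flux = np.sqrt(2.0 * np.pi) * samples[:, 2] * samples[:, 4]
print(f"line flux = {line_flux.mean():.3f} +/- {line_flux.std():.3f} (arbitrary units)")
print(f"UV continuum slope beta = {samples[:, 1].mean():.2f} +/- {samples[:, 1].std():.2f}")
```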
Optical lines are fitted separately over relatively narrow spectral windows (100 Å, rest-frame) and a constant is assumed for the continuum level. The width of the lines is set as a free parameter. In total, we detect up to ten optical emission lines with high significance (Table <ref>), including Balmer lines that are useful for the determination of dust attenuation. To account for wavelength-dependent slit losses and absolute flux calibration, we derive the synthetic photometry of NIRSpec spectra (PRISM and gratings) through each NIRCam filter bandpass and matched it to that obtained from observed photometry. In this process, we use a wavelength-dependent polynomial function yielding scaling factors for the slit-loss correction ranging from approximately 2.0 (F150W) to 3.6 (F444W). Using fluxes and equivalent widths of the detected Balmer lines , , and Hδ, we iteratively derive the dust attenuation E(B-V) = 0.12±0.11 using the <cit.> attenuation curve and following the methodology of <cit.>, which accounts for the internal extinction and underlying hydrogen stellar absorption. Other important lines, such as those that are sensitive to the electron temperature (T_ e, ) and density (n_e, N iv] λλ 1483,1486 and λλ 3727,3729) are also detected and are analyzed in more detail in Section <ref>. For the N iv] and doublets we fit two Gaussian profiles with similar widths and use the expected separation between the two transitions. We find line ratios of F_1483/F_1486 = 0.50±0.22 and F_3727/F_3729 = 0.98±0.27 for the N iv] and doublets, respectively. We also check for the presence of spectral features that are usually associated with Wolf-Rayet (WR) stars. The so-called blue bump around 4600–4700Å, encompassing the emission from N iii λ 4640, C iii λ 4650, and , is detected neither in the G395M nor the PRISM spectra. We derive a 3σ upper limit relative to Hβ of He ii/Hβ≤ 0.26. Similarly, the rest-UV line is not detected. Despite its low resolution, the PRISM spectrum clearly suggests no emission at the expected position of He ii, while the close O iii] emission is well detected (see Figure <ref>). § THE NATURE OF THE IONIZING SOURCE: STAR FORMATION VERSUS AGN We now discuss the nature of the ionizing source of , building upon the recent findings by <cit.> and <cit.>, who suggest a possible AGN activity. In their study, <cit.> reported the detection of N v λ 1242 emission with an integrated flux of (2.8±0.6)× 10^-18 erg s^-1 cm^-2 with a narrow profile FWHM <90 km s^-1 (unresolved in the MOSFIRE spectrum). However, the G140M spectrum does not exhibit any significant emission around the expected position of N v λλ 1238,1242 (Figure <ref>, top left). By considering the flux uncertainty around 1.2μm from the G140M error spectrum and assuming an unresolved line width of FWHM = 352 km s^-1, we infer a 3σ limit of 1.44×10^-18 erg s^-1 cm^-2. This limit stands well below the reported value of <cit.>. Furthermore, according to <cit.>, N v λ 1238 is expected to be twice as strong as N v λ 1242 under standard conditions. Hence, considering the reported flux of <cit.> for N v λ 1242, we would expect 11.6σ and 5.8σ detections for N v λ 1238 and λ 1242, respectively. These limits, however, are incompatible with our observations. <cit.> reported a 2.5σ detection of a broad (≃ 1200 km s^-1) component in Hβ using the medium-resolution NIRSpec G395M spectrum. 
This broad component is not seen in stronger, forbidden lines like [O iii] λλ 4960,5008, from which they suggest conditions similar to the broad line region (BLR) of an AGN. Using our own reduction of the G395M spectrum and a dual-component Gaussian profile to Hβ, we find a 2.2σ detection for the broad component (Figure <ref>, top middle). Clearly, deeper observations of Hβ (or Hα with MIRI) are needed to unambiguously confirm the presence and nature of the broad component in Hβ, as already discussed and suggested by <cit.>. Indeed, if a single Gaussian profile is used to fit the Hβ profile, a good fit is also found without penalizing significantly the residuals (Figure <ref>, top right). In this case, we find FWHM (Hβ) = 452±68 km s^-1 which differs only by 1.2σ from the nominal FWHM = 369±16 km s^-1 obtained for the much brighter [O iii] λ 5008 line. If the existence of this broad component can be confirmed and attributed to the BRL, it would be expected that high-ionization semi-forbidden lines such as N iv], C iv, or C iii], which probe high-density regimes (n_ crit≳ 10^9 cm^-3), would display similar broad Doppler widths as observed in type-1 AGNs <cit.>. However, these lines appear narrow in , especially N iv] which exhibits a high-significance detection and an intrinsic FWHM ≃ 160 km s^-1 after correcting for instrumental broadening. Thus, our results suggest that the aforementioned semi-forbidden lines unlikely originate from the broad line region. Instead, the properties of these lines, such as the narrow widths and the N iv] line ratio F_1483/F_1486 = 0.50±0.22 (implying densities n_e ≈ 10^4-5 , see Section <ref>), are consistent with narrow line regions of AGN or H ii regions. In the following, we discuss these two scenarios. The lower panels of Figure <ref> present several diagnostic diagrams using different UV nebular lines: C iii]/He ii versus C iv/He ii, [O iii]/He ii, and N v/N iv]. Photoionization models of star-forming galaxies from <cit.> and narrow-line regions of AGN from <cit.> are shown in blue and red, respectively. In the right panel of Figure <ref> we show models of star-forming galaxies from the updated BOND grid using Cloudy <cit.>, which also includes N iv] and is available from the 3MdB[<https://sites.google.com/site/mexicanmillionmodels/>] <cit.>. These models encompass a wide range of parameters, including the ionizing parameter (-4.0 ≤log U ≤ -1.0), hydrogen number density (10^2≤ n_ H / cm^3≤ 10^4), and the power law index of the ionizing spectrum (-2.0 ≤α≤ -1.2). We have selected models with metallicities within the range 0.05 ≤ Z/Z_⊙≤ 0.20, which corresponds to the inferred metallicity for (12+log(O/H) =7.70±0.18, as indicated in Table <ref>). As illustrated in this figure, the position of (indicated by the blue circle) aligns with the predictions of star-forming models in all diagnostic diagrams. Clearly, the absence of He ii and N v, which probe energies >54 eV and >77 eV, respectively, places far away from the region occupied by AGN models. It is worth noting that <cit.> suggested recently that the high N iv]/N iii] ratio observed in is hardly reproduced by star formation models, pointing to an AGN contribution. However, the 3MdB photoionization models used here do predict very high ratios even well above the observed N iv]/N iii] =5.1±2.2, although requiring fairly high ionization parameters (log(U)≲ -2). 
Other spectral features observed in , such as the intense N iv] emission compared to other UV lines (N iv]/C iv ≃ 1.8, N iv]/C iii] ≃ 1.5, N iv]/N v ≥ 2.6), and narrow profiles (FWHM ≃ 160 km s^-1 for N iv]) differ from those observed in AGNs, even those showing unusually strong Nitrogen lines <cit.>. The so-called Nitrogen-loud QSOs exhibit much weaker N iv] compared to other lines (e.g., N iv]/C iv≃ 0.02-0.38, , ) and, as expected, they present very broad Doppler widths (FWHM ≃ 1500-6000 km s^-1, ). Similarly, some type-2 AGNs also present N iv] emission <cit.>, but notably weaker compared to other high-ionization lines (N iv]/C iv ≃ 0.15, N iv]/C iii] ≃ 0.34, or N iv]/N v ≃ 0.30; ). An exception may be GS-14, a type 1.8 AGN at z≃ 5.55 recently analyzed by <cit.>. GS-14 exhibits broad components in Hydrogen and Helium lines (FWHM ≃ 3400 km s^-1, ) as well as narrow N iv] emission (FWHM ≃ 430 km s^-1, , ), but it also shows clear nebular emission in N v λ 1240 and O vi λ 1033 <cit.> which are not detected in . In contrast, the spectrum of resembles those of other, yet also rare star-forming galaxies with intense emission in Nitrogen lines. Examples such as the Lynx arc <cit.>, SMACS-2031 <cit.>, Mrk 996 <cit.>, and the Sunburst cluster show narrow and prominent N iv] and/or [N iii] lines suggestive of high electron temperatures and densities like (see Section <ref>) and without any hint of AGN activity. The bottom panels of Figure <ref> also show the location of these strong N-emitters, all consistent with star-forming models like . The case of GN-z11, another strong N-emitter reported by <cit.>, appears to be ambiguous, consistent with both models of AGN and star formation, as already discussed in <cit.> and <cit.>. In conclusion, our results suggest that, regardless of the presence of an AGN whose confirmation awaits deeper data, the high-ionization lines observed in are consistent with stellar photoionization. § OBSERVATIONAL AND DERIVED PHYSICAL PROPERTIES OF §.§ ISM properties and element abundances The rich set of emission lines detected from the rest-frame UV-to-optical spectrum allows us to determine the electron temperature and density in the gas and the detailed abundances of numerous elements including H, C, N, O, and Ne. The derived quantities are summarized in Table <ref>. §.§ Electron temperature To derive physical conditions and element abundances we follow the prescriptions of <cit.>. Briefly, these authors adopt the classical three zone model of the H ii region with electron temperatures T_ e(O iii) for the high-ionization zone, and T_ e(O ii) for the low-ionization zone. The intermediate-ionization zone is not used here, since no such lines are detected. The electron temperature T_ e(O iii) is derived both from the ratio of [O iii] line fluxes λ4363/λ(4959+5007) and from the UV-to-optical line ratio of λ1660/λ5007. The former ratio (rest-optical) is determined from the medium-resolution spectrum, the latter from the PRISM spectrum. In both cases we obtain T_e ≈ 18000 K, consistent within 1 σ, and with uncertainties between 1151 and 3252 K. Subsequently, we adopt the electron temperature from the optical line ratios (T_e=18849 ± 3252 K) with the larger uncertainty, which is primarily due to the low S/N detection of . The electron temperature in the low-ionization region is derived from relations obtained from the photoionization models of <cit.>. §.§ Electron density Several density indicators exist in the observed spectral range, but few can be used here in practice. 
In the UV, the , , and doublets are density estimators. However, the PRISM spectrum is of insufficient resolution to resolve any of these doublet lines. is not detected, and has too low S/N in the medium-resolution spectrum. Although of fairly low S/N, the doublet is detected with a ratio λ1483/λ1487 =0.50 ± 0.22 which indicates a fairly high electron density of n_e ≈ 10^4-5 <cit.>. In the optical, the doublet is clearly detected, but not resolved from the medium-resolution spectra. Our measured line ratio λ3727/λ3729 = 0.98 ± 0.23 is consistent within the uncertainties with that obtained by <cit.> (0.639 ± 0.255), and compatible with n_e > 10^3 cm^-3 <cit.>. The two density estimates could indicate a density gradient between the low and high ionization regions, but are also compatible with a single, relatively high density of n_e ≈ 10^4-5 , whose origin we discuss below. In any case, the most important point to take away from this is that the electron density, although high, is lower than the critical densities of all the relevant emission lines used for the subsequent abundance determinations. This holds for the (semi-)forbidden lines of [O iii] at 1666, 4363, 4959, 5007 (with critical densities n_ crit≥ 6.9 × 10^5 ), the two components of the doublet (n_ crit = 8.7 × 10^4 for 1907 and 10^9 for 1909), (n_ crit = 2 × 10^15 ), (a multiplet whose components have n_ crit≥ 10^9 ), (n_ crit = 3 × 10^9 ), and (n_ crit = 1 × 10^8 ) <cit.>. Only the doublet, whose components have relatively low critical densities of n_ crit = 1 (4) × 10^3 for 3728 (3726), is therefore affected by the high density inferred for , whereas all other lines can safely be used to determine abundances, to which we now proceed. §.§ Ionic and total metal abundances The electron temperature T_ e(O iii) is used to obtain abundances of ions O^2+, N^3+, N^2+, C^3+, C^2+, and Ne^2+; the temperature in the low-ionization region, T_ e(O ii), to derive the ionic abundance of O^+. Ionic abundances are derived following <cit.> for the optical lines, and comparing different methods for the UV lines. For C, N, and O, the observations provide two ionization stages, hence the ionic abundances will be close to the total abundances, and we neglect further ionization corrections. For Ne^2+ we use the ionization correction factor (ICF) following <cit.>. The results are listed in Table <ref>. We derive a total oxygen abundance of = 7.70 ± 0.18, which is dominated by the ionic abundance of O^2+/H^+ (see Table <ref>). Given the high density, could be decreased and hence the O^+/H^+ abundance underestimated. However, in view of the high excitation observed from lines with high critical densities, it is likely that O^2+ is the dominant ionization stage over the majority of the region and hence the determination of O/H close to the correct value. With available line detections the N/O abundance can be determined in different ways. First we use only the UV lines to compute the ionic abundance ratio (N^2++N^3+)/O^2+ using the expressions from <cit.> (V+04) and <cit.> (H+02), assuming the low-density regime. Then we determine N/H from the UV and optical line ratio (N and ) and use O/H determined from the optical lines. Both methods, marked as "UV only" and "UV+opt" respectively, yield values compatible within the errors, and consistent with a high N/O abundance log( N/O)≈ -0.15 ± 0.17. Similarly, for C/O we use the expressions from <cit.>, <cit.> (PM17, and <cit.> (I+23) using either only the rest-UV or a combination of the UV and optical lines. 
As seen from Table <ref> the ionic abundance ratios derived in this manner are compatible within uncertainties. For the total C/O abundance we adopt log( C/O)=-0.75±0.11 as our default value. The C/O ratio is therefore clearly subsolar, and in fact very similar to the average of normal star-forming galaxies at the same O/H (see below). Finally we also derive the Neon abundance from the and lines and applying an ICF from the oxygen lines, following <cit.>. We find an abundance ratio of log( Ne/O) = -0.63 ± 0.07, somewhat higher than the average value of log( Ne/O) = -0.78 ± 0.01 determined for normal star-forming galaxies by <cit.> at the same metallicity. Although the abundances derived here assume low densities they are not altered by density effects at the density derived for , as already discussed above. Most importantly, the critical densities for the , , and lines involved in the (N^2++N^3+)/O^2+ ratio derived from the UV are all very high (n_ crit > 10^9 ), which further shows that this important ionic abundance ratio can be determined accurately. Taken together, the derived abundances of show that this object has a “metallicity” (O/H) of approximately 1/10 solar <cit.>, an exceptionally high N/O abundance, and a normal C/O abundance, when compared to galaxies of similar metallicity (see Fig. <ref>). The interpretation of these abundances and implications will be discussed below (Sect. <ref>). §.§ Comparison with other studies and caveats ISM properties and abundances of have been determined by several other studies, with whom we now briefly compared our results. <cit.> argue that the doublet can be deblended, from which they infer an electron density of n_e = (1.9±0.2) × 10^3 . From inspection of the doublet they suggest that the density could be higher than n_e > 10^4 . The density inferred here from the doublet (n_e ≈ 10^4 - 10^5 ) is compatible with their finding. Most importantly for the abundance determinations, all available density estimates indicate that the main emission lines should not be affected by density effects. From their 3-σ detection of <cit.> inferred T_e=18630 ± 3682 K, in excellent agreement with our determination. Based on the T_e determination they infer =7.664 ± 0.508 from an average relation between T_e and O/H determined empirically by <cit.>. <cit.> determined =7.72^+0.17_-0.14 using the direct method. Within the quoted uncertainties, our results agree with both of these determinations. A slightly higher O/H abundance (=7.97 ± 0.16), but still compatible with the uncertainties, has been derived by <cit.> using a less accurate R23 strong-line calibration. Finally, assuming AGN models, <cit.> have obtained a higher metallicity for , but similar N/O, C/O, and Ne/O ratios as derived here. Note also that the abundance ratios determined here assume a homogeneous medium both in abundance and density. If pockets of high density and enriched gas coexist with lower density gas with say normal abundance ratios, only a relatively small fraction of enriched gas – i.e. relatively low amounts of Nitrogen – might suffice to explain the observed emission line ratios, since the emissivity of the forbidden line depends on the density <cit.>. However, in this case the inferred N/O abundance would also be lower limit of the true N/O ratio in the enriched pocket. 
§.§ Other physical properties §.§.§ Morphology As shown in the left panel of Figure <ref>, CEERS-1019 exhibits a complex morphology in the NIRCam bands consistent with three different clumps/structures separated by ≃ 0.24^'', or ≃ 1.12 kpc at z=8.678 (4.68 kpc arcsec^-1). These clumps, labeled as A, B, and C as indicated in Figure <ref>, are very compact, and are only resolved in the NIRCam bands at short wavelengths. To investigate the morphology of CEERS-1019 in more detail, we model the three galaxy substructures closely following the methodology applied to the study of stellar clumps in <cit.> and <cit.>. Assuming that the clumps have Gaussian profiles, we consider a 15×15 pixel region centered on the galaxy and fit a model consisting of three 2D Gaussian functions, convolved with the NIRCam instrumental PSF in this field from the grizli library. The best fit to their observed profiles (given by least-squares minimization) returns their fluxes and sizes. We assume that the shape of each substructure is the same in all bands. For this reason, the fit is initially performed in F200W, chosen as the reference filter, and then the shape (size, axis ratio, and position angle) of each clump is kept fixed in the other filters, where only the source flux is fitted. Uncertainties are obtained from Monte Carlo sampling. The results of the model analysis are presented in Table <ref>. Our findings indicate that the morphologies of the three clumps in CEERS-1019 are compact, with measured FWHMs of 48±5 mas, 62±15 mas, and 43±4 mas for clumps A, B, and C, respectively. Following <cit.>, the inferred FWHMs suggest that these clumps are resolved, albeit slightly, as their sizes are larger than the pixel size of the NIRCam images (40 mas). Translating these measurements into half-light radii, we find r_ e = 112 ± 12 pc, 145±35 pc, and 101±9 pc for clumps A, B, and C, respectively. §.§.§ Spectral Energy Distribution We now analyze the spectral energy distributions (SEDs) of CEERS-1019 as a whole (named Total) as well as of its sub-components (A, B, and C). We use the SED-fitting code CIGALE <cit.> with the available NIRCam photometry from F115W to F444W, covering the rest-frame wavelength range ∼ 1200-4600 Å. Stellar population models from <cit.> are used along with the <cit.> Initial Mass Function (IMF) and the Small Magellanic Cloud extinction curve (R_v=2.93). The metallicity is fixed to Z=0.004, the closest available value to that inferred for CEERS-1019, and is assumed to be the same for nebular emission and starlight. The dust attenuation (E(B-V)) and ionization parameter (log(U)) are treated as free parameters, ranging from 0.0 to 0.5 mag and from -3.5 to -1.0, respectively. Finally, we explore two different star-formation histories: a constant star-formation model applied to the integrated light of CEERS-1019 (Total) and instantaneous burst episodes for the three sub-components (A, B, and C). For the former, we include the flux measurements of the Hβ + [O iii] λλ 4960,5008 emission lines in the fitting process. Starting with the integrated emission of CEERS-1019 (Total), the best-fit model, shown in black in the right panel of Figure <ref>, finds a continuous star-formation rate SFR=161±23 M_⊙ yr^-1 over 14±7 Myr. The stellar mass is M_⋆^ total/ M_⊙=(2.0±0.6)× 10^9, attenuated by E(B-V)=0.17±0.02, in agreement with the values reported in <cit.>. For the three individual components A, B, and C we find burst masses of M_⋆^ A/ M_⊙=(5.7±0.5)× 10^8, M_⋆^ B/ M_⊙=(4.6±0.1)× 10^8, and M_⋆^ C/ M_⊙=(8.6±0.2)× 10^8, respectively.
Clumps A and B are well-fitted with very young burst models, having ages of 4.0±0.26 Myr and 5.6±0.7 Myr, respectively. On the other hand, clump C is older than the other components, with a burst age of 15.0± 2.9 Myr. Indeed, the color obtained for clump C, F356W - F444W = 0.32 ± 0.29, is significantly lower than those measured in clumps A and B (F356W - F444W ≃ 0.75-1.16), suggesting a weak contribution of nebular emission in F444W (e.g., Hβ and [O iii]) and thus negligible star formation over the last ≲ 10 Myr. §.§.§ Stellar mass and SFR surface densities Based on the stellar masses and half-light radii obtained for the individual clumps (Table <ref>), we obtain high stellar mass surface densities of log(Σ_M)=3.86±0.11, 3.55±0.53, and 4.14±0.14 M_⊙ pc^-2 for clumps A, B, and C, respectively (defined as Σ_M = M_⋆ / (2 π r_ eff^2)). It is worth noting that the inferred values of Σ_M may be even higher if each substructure comprises multiple unresolved stellar systems. Nevertheless, these values are already comparable to the densest systems identified at high redshift by <cit.> or <cit.>, and significantly higher than the average log(Σ_M) ≃ 2 M_⊙ pc^-2 observed in nearby young clusters <cit.>. Similarly, the compactness index, defined as C_5 = (M_⋆/10^5 M_⊙)(r_ eff/ pc)^-1, is also high in the case of CEERS-1019. It ranges from C_5≃ 30-90 depending on the clump, exceeding the values of old globular clusters and young massive clusters by at least one order of magnitude <cit.>, suggesting high cluster formation efficiencies <cit.>. The SFR surface density is also found to be very high for clumps A and B, with log(Σ_ SFR)=3.27±0.11 and 2.81±0.21 M_⊙ yr^-1 kpc^-2, respectively. In contrast, clump C does not show significant star formation over the last 10 Myr, yielding an upper limit of log(Σ_ SFR)<2.27 M_⊙ yr^-1 kpc^-2. Finally, the derived mass and SFR surface densities in CEERS-1019 are comparable with those of other prominent N-emitters discussed below, such as GN-z11 (log(Σ_M) ∼ 4.6 M_⊙ pc^-2), SMACSJ2031 (log(Σ_M) ∼ 4.0 M_⊙ pc^-2, log(Σ_ SFR) ∼ 1.4 M_⊙ yr^-1 kpc^-2), the Sunburst cluster (log(Σ_M) ∼ 4.1 M_⊙ pc^-2, log(Σ_ SFR) ∼ 3.7 M_⊙ yr^-1 kpc^-2), or Mrk 996 (log(Σ_M) ∼ 2.8 M_⊙ pc^-2). This suggests a potential connection between compactness and a high production efficiency of nitrogen. §.§ Mass of the enriched material The total mass of enriched, ionized gas, which is directly observable, can easily be estimated assuming ionization equilibrium and a constant ISM density <cit.>: M_ ionized = m_p Q_H/(α_B n_e) = 2.5 × 10^6 (10^3 cm^-3/n_e) (Q_H/10^54 s^-1) M_⊙, where Q_H is the ionizing photon production rate, which can be determined from H recombination lines, n_e the electron density, m_p the proton mass, and α_B the recombination rate coefficient. For CEERS-1019 we thus find M_ ionized∼ 1.2 × 10^5 M_⊙, from the observed Balmer line luminosity and adopting n_e=10^5 cm^-3, very similar to the M_ ionized∼ 2 × 10^5 M_⊙ inferred for GN-z11 by <cit.>. <cit.> argue that the amount of enriched gas in GN-z11 could be even smaller if the N-emitting gas is found at higher densities, as they suggest. § DISCUSSION §.§ Observed heavy element abundances in comparison to “normal” objects The main elemental abundance ratios derived for CEERS-1019 are shown in Fig. <ref>, and compared to measurements in other galaxies and regions. To do so, we use in particular the recent CNO abundances determined and compiled by <cit.>, who primarily included data from low-redshift star-forming galaxies observed with HST/COS, and data on individual regions from the works of <cit.>, <cit.>, and <cit.>.
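To make the arithmetic behind these numbers explicit, the following minimal Python sketch (our own illustration, using only the clump masses, sizes, and the scaling relation quoted above) reproduces the stellar mass surface densities, the compactness indices, and the ionized-gas mass estimate. The value adopted for Q_H is an assumed, illustrative number chosen to be consistent with the quoted M_ionized, not a measurement reported here.

```python
import numpy as np

# Clump stellar masses [Msun] and half-light radii [pc] quoted above for CEERS-1019.
clumps = {"A": (5.7e8, 112.0), "B": (4.6e8, 145.0), "C": (8.6e8, 101.0)}

for name, (mstar, reff) in clumps.items():
    sigma_m = mstar / (2.0 * np.pi * reff**2)   # Sigma_M = M*/(2 pi r_eff^2), in Msun/pc^2
    c5 = (mstar / 1e5) / reff                   # compactness index C_5 = (M*/1e5 Msun)(r_eff/pc)^-1
    print(f"clump {name}: log Sigma_M = {np.log10(sigma_m):.2f}, C_5 = {c5:.0f}")
# -> log Sigma_M ~ 3.9, 3.5, 4.1 and C_5 ~ 51, 32, 85, consistent with the values quoted above.

# Ionized-gas mass from the scaling relation above, M = 2.5e6 (1e3/n_e) (Q_H/1e54) Msun.
def m_ionized(q_h, n_e):
    return 2.5e6 * (1e3 / n_e) * (q_h / 1e54)

# Assumed illustrative Q_H of a few 1e54 photons/s; with n_e = 1e5 cm^-3 this gives
# roughly the M_ionized ~ 1e5 Msun scale quoted in the text.
print(f"M_ionized ~ {m_ionized(4.8e54, 1e5):.1e} Msun")
```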
As well known, the majority of galaxies and regions follow a fairly well-defined sequence of N/O versus O/H and C/O versus O/H <cit.>, which can be understood with chemical evolution models <cit.>. In N/O, for example, only few strong outliers with a large nitrogen excess are known at low redshift <cit.>. In comparison, clearly stands out by having an extremely high Nitrogen abundance, log( N/O) = -0.13 ± 0.11, which is approximately 5.6 times the solar ratio <cit.> and more than a factor 10 higher than the N/O values generally observed at similar metallicities (O/H). This exceptionally high N abundance reflects the very peculiar UV spectrum of , showing unusually strong Nitrogen lines. In contrast to N/O, with log( C/O)=-0.75 ± 0.11, the C/O abundance is fairly normal for the observed metallicity. The Ne/O abundance, log( Ne/O)=-0.63 ± 0.07 is somewhat higher (by ∼ 0.15 dex) than the average value for normal star-forming galaxies derived by <cit.> at the same metallicity. Interestingly, these observed abundance ratios of resemble those of globular cluster stars, similarly to what was pointed out by <cit.> and <cit.> for GN-z11. The origin of these peculiar abundances ratios will be discussed below. §.§ Abundances in other N-emitters Interestingly, the abundance ratios found in resemble those found by <cit.> for the z=10.6 galaxy GN-z11 observed recently with JWST by <cit.>, which are shown by boxes in Fig. <ref>. As shown, the abundances in GN-z11 suffer from large uncertainties, which are in particular due to the fact that the line is shifted beyond the range accessible with NIRSpec and no direct O/H abundance determination is possible for this object from the present data. Using photoionization modeling, <cit.> have further constrained the abundances in GN-z11, obtaining total gas abundances of =7.84±0.06 and log( N/O)=-0.38±0.05, which are quite similar to those obtained here for . Clearly, both and GN-z11 are significantly enriched in Nitrogen, reaching exceptionally high N/O values. The carbon abundance cannot be well constrained in GN-z11, since the electron temperature remains undetermined in this object. The allowed range, derived by <cit.>, is indicated in Fig. <ref>. Very few other galaxies or regions with a high N/O abundance and/or clear detections of nebular lines of N in the UV can be found in the literature. <cit.> list known AGN and galaxies with O vi, N v, or emission lines in the rest-UV. Among the non-AGN in their list one finds the peculiar galaxy named the Lynx arc (at z=3.36), which has been studied by <cit.> and <cit.>, although <cit.> have argued that this object may be an obscured QSO. According to the photoionization models of <cit.>, both the N/O and C/O abundance ratios of this object are elevated, as seen in Fig. <ref>. Although suspected, no direct signs of WR stars have been found in this object <cit.> and the inferred abundances are not explained. Another object showing nebular emission is the strongly lensed galaxy SMACSJ2031.8-4036 at z=3.5 studied in detail by <cit.> and <cit.>. The available VLT observations (with XShooter and MUSE) cover both the rest-UV and optical domain, allowing the detection of numerous emission lines, and thus electron temperature, density and abundance determinations. Interestingly, this object shows indications for high density (n_e 10^5 ) from the doublet and lower densities from other diagnostics <cit.>. 
The metallicity = 7.76 ± 0.03 is very similar to and it shows a normal C/O abundance (log( C/O)=-0.80 ± 0.09), according to <cit.>. Inspection of their spectra, kindly provided by the authors, shows a clear detection of both and lines, which allows us to determine N/O from the UV lines and the reported T_e using the same methods described above (see Sect. <ref>). We find a relatively high N abundance of log( N/O)=-0.66 ± 0.1, which we also report in Fig. <ref>. Finally, we also find a normal Neon abundance of log( Ne/O)=-0.82 from the reported line fluxes. In the list of <cit.> other non-AGN spectra showing UV lines of Nitrogen show only N v P-Cygni lines, which are most likely due to stellar emission, or are stacked spectra with weak detections, not suitable for our purpose. Another high-redshift object where emission has recently been detected is the strongly lensed and multiply imaged stellar cluster at z=2.368 in the Sunburst arc <cit.>, an exceptional object studied in depth by various authors <cit.>. From a detailed analysis and photoionization modelling, <cit.> infer in particular a high N/O abundance ratio (log N/O = -0.21^+0.10_-0.11), and normal C/O and Ne/O ratios for a metallicity (O/H) of approximately ∼ 0.22 solar. The N/O ratio of this object fares thus among the highest values, comparable to , and C/O is also similar, as also shown in Fig. <ref>. To extend our comparison, we have also examined the low-redshift galaxy Mrk 996, which is a well-known Blue Compact Dwarf (BCD) galaxy with peculiar properties, such as a high electron density, broad emission line components in , and other lines, the presence of Wolf-Rayet stars of WN and WC type, and a high N/O abundance <cit.>. This galaxy also shows N iii] and N iv] emission lines in the UV <cit.>. From integral-field observations <cit.> have found a normal N abundance (log( N/O) ≈ -1.43) across the galaxy and a N-enhancement by a factor ∼ 20 (log( N/O) ≈ -0.13) in the broad line component, emitted in the central region. The two measurements are plotted in Fig. <ref>. The C/O abundance of Mrk 996 can be derived from the and line ratio, which is taken from the HST/COS observations from the CLASSY survey <cit.>, and adopting the electron temperature T_e=10^4 K from <cit.>. We find a high Carbon abundance of log( C/O)= -0.22, close to solar, for this galaxy. However, for its metallicity <cit.> the C/O abundance ratio is comparable to that of other galaxies and regions, hence not unusual. Taken together we thus conclude that all of the six N-emitters show an elevated (supersolar) N/O abundance ratio, whereas the C/O abundance is normal in four of them, and only one of them (the Lynx arc) appears enhanced in C/O. The observed and other properties of these objects are also summarized in Table <ref>. We will now discuss possible scenarios to explain to observed abundance pattern. §.§ Possible chemical enrichment scenarios Galactic chemical evolution models are able to reproduce the observed average trends of the abundance ratios of CNO and H for “normal” galaxies <cit.>, although the evolution of Nitrogen has notoriously been more complicated to explain, since the observations show a behaviour like a primary element at low (subsolar) metallicity <cit.>. 
To examine the conditions which may be more appropriate for low-metallicity dwarf galaxies and regions, which dominate the current samples of extra-galactic CNO measurements in galaxies (the samples shown here), various authors have studied the effects of variable or bursty star-formation histories, outflows, and different star-formation efficiencies. Again, such models are able to reproduce the average trends of C/O, N/O, and C/N as a function of metallicity, and they can also explain the observed scatter in the data, e.g. by the presence of burst phases <cit.>. Since the observed abundance ratios of and possibly other N-emitters are, however, clearly more extreme than those of the bulk of galaxies studied so far, we need to examine the possible nucleosynthetic sources and the conditions capable of explaining them. To do so, we first consider two quantitative scenarios, the first involving enrichment from normal massive stars, and the second nucleosynthesis from super-massive stars. These scenarios were considered in recent studies <cit.>. §.§.§ Enrichment from massive stars – “WR-scenario” It is well known that the stellar winds of massive stars can carry significant amounts of newly created elements such as He and N (from H-burning, the latter resulting at the expense of C and O) or C and O (from He-burning); those elements appear at the stellar surfaces and are ejected by the winds during the so-called Wolf-Rayet (WR) phases, with N enhanced in the WN phase and C enhanced in the subsequent WC phase <cit.>. The stellar wind yields depend strongly on the initial mass and metallicity of the stars, and also on other properties such as stellar rotation and the efficiency of mixing in the stellar interiors, or their evolution in close binary systems <cit.>. Using the recent stellar yields from <cit.> we have computed the cumulative stellar wind yields of a simple stellar population as a function of time, for a <cit.> IMF, three different metallicities ([Fe/H]=-2, -1, and 0) and three different initial rotational velocities (V_ Rot=0, 150, and 300 km/s). The latter value of V_ Rot=300 km/s was adopted in <cit.> to discuss the observations of GN-z11. Assuming that stars more massive than 20–25 do not explode but collapse and become black holes <cit.>, the stellar ejecta have exclusively a wind composition for several million years. In the first couple of Myr, that composition is the original one of the stellar envelope; it is then dominated by H-burning products and subsequently by He-burning products. To compare with observed abundance ratios, <cit.> assumed a dilution of the wind ejecta with an equal amount of ISM. Here we assume no such mixing, thus maximizing the effect of the stellar winds on the composition. Physically, this may correspond to the situation where the winds of the previous O-star phase, operating for a few Myr, have opened a cavity in the ISM where the winds of the subsequent WR phase are expanding. In practice, some admixture of material with pristine composition remains, since we include the winds released by all stars above 12 , and in the considered period of 8 Myr the stars less massive than 20 do not reach the WR phase. In Fig. <ref> we display the evolution of various quantities of the "WR scenario" for stars of [Fe/H]=-1, a value reasonably close to the metallicity of the extragalactic systems studied here. Results are shown up to 8 Myr after the formation of a stellar population of total mass 10^8 with a normal IMF <cit.>. 
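The cumulative wind yields described above are, in essence, an IMF-weighted sum of per-star wind ejecta accumulated over the first ∼8 Myr. The following Python sketch only illustrates the structure of such a calculation; the yield table, mass bins, and normalization are placeholder values chosen for illustration and are not the <cit.> yields or the numbers behind Fig. <ref>.

import numpy as np

# Purely illustrative per-star cumulative wind ejecta (solar masses) at a fixed
# age, for a few initial masses at [Fe/H] = -1.  Placeholder numbers only.
# columns: total wind mass, mass of N, mass of C, mass of O
WIND_TABLE = {
    120: (60.0, 0.50, 0.010, 0.020),
     85: (35.0, 0.30, 0.006, 0.015),
     60: (18.0, 0.15, 0.003, 0.010),
     40: ( 6.0, 0.05, 0.001, 0.006),
     25: ( 1.5, 0.01, 0.0005, 0.003),
}

def kroupa_imf(m):
    """Un-normalized Kroupa-like IMF dN/dm (two power-law segments)."""
    m = np.asarray(m, dtype=float)
    return np.where(m < 0.5, (m / 0.5) ** -1.3, (m / 0.5) ** -2.3)

def cumulative_wind_yields(m_cluster=1e8, m_lo=12.0, m_hi=120.0):
    """IMF-weighted wind ejecta of a simple stellar population of mass m_cluster."""
    # crude normalization: total stellar mass between 0.08 and m_hi equals m_cluster
    grid = np.logspace(np.log10(0.08), np.log10(m_hi), 4000)
    norm = m_cluster / np.trapz(grid * kroupa_imf(grid), grid)
    masses = sorted(WIND_TABLE)
    totals = np.zeros(4)
    for i, m in enumerate(masses):
        # mass-bin edges represented by this table entry
        lo = m_lo if i == 0 else 0.5 * (m + masses[i - 1])
        hi = m_hi if i == len(masses) - 1 else 0.5 * (m + masses[i + 1])
        bin_grid = np.linspace(lo, hi, 200)
        n_stars = norm * np.trapz(kroupa_imf(bin_grid), bin_grid)
        totals += n_stars * np.array(WIND_TABLE[m])
    return dict(zip(("M_wind", "M_N", "M_C", "M_O"), totals))

print(cumulative_wind_yields())

Repeating this bookkeeping for a grid of ages, and taking ratios of the accumulated elemental masses, gives the time evolution of the wind composition during this 8 Myr period, which is discussed next.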
During that period, stars below 25 have not yet ended their lives (by assumption), so that only the wind ejecta populate the cavity carved by the winds and the radiation of the stars. The mass of the wind ejecta increases steadily, from ∼10^4 after the first Myr to ∼10^6 at 4 Myr, and more slowly after that. In Sec. <ref> we discussed the amounts of ionized gas estimated in and GN-z11, which are compatible with the model results for this earliest period after the starburst (horizontal dashed lines in the top panel). The evolution of the wind composition differs between the non-rotating and the rotating stars. The former (solid red curves) have practically no mixing between their convective core and radiative envelope; in consequence, the signatures of H-burning (high N/O and N/C) appear abruptly in the wind, once the mass loss uncovers the former H-burning core. The latter (solid blue curves) undergo rotational mixing, slowly bringing the H-burning products to the surface; as a result, the N/O and N/C ratios increase slowly but steadily, up to the equilibrium value, which is similar to the case of non-rotating stars. The timescale for the appearance of a high N abundance is ∼ 3 Myr, in good agreement with the time window inferred by <cit.> for GN-z11. About a Myr later, some amounts of He and He-burning products – mainly C and insignificant amounts of O – appear in the wind ejecta of the most massive rotating stars (from 120 down to ∼ 70 ), while the less massive ones never reach the WC phase; the combined effect is a strong increase of C/O, a strong decrease of N/C, and a small variation of N/O. In contrast, none of the non-rotating stars reaches the WC phase at such low metallicity, and all the CNO ratios remain basically unchanged. After that, the situation is expected to change drastically, as the first SNe from M<25 stars explode and eject their core material into the ISM. As shown in Fig. <ref>, in the early evolution of a stellar population there is a period of several Myr during which the N/O ratio in the stellar winds reaches the high N/O ratios observed in and in the other N-emitters analyzed here. However, soon after reaching the maximum N/O value, the carbon abundance also increases (very strongly in rotating stars, less so without rotation), implying C/O and N/C ratios that are incompatible with the observations of , SMACS2031, and the Sunburst cluster over most of the time (see also Fig. <ref>). In the results displayed here, there is thus only a fairly short period of ∼ 0.5 Myr (yellow shaded area in Fig. <ref>) where all three ratios N/O, N/C, and C/O are compatible with the observations of for the case of rotating stars. In view of the timescales involved (several Myr), the probability of such an occurrence is small but certainly non-negligible. We note that this occurs rather early in the evolution of the starburst, but well within the time window found by the analysis of <cit.> for GN-z11 (violet horizontal segments in the 2nd and 3rd panels). We also note that stellar models other than those used here could result in more extended periods of high N/O and N/C ratios. This could be the case, for instance, for stars rotating more rapidly than 300 km/s <cit.>, binary stars, or stars calculated with higher mass loss rates, etc. <cit.>. On the other hand, for the central region of Mrk 996, which shows both N and C enrichment, we find that all the abundance ratios are well reproduced by the models. 
Furthermore, in this galaxy the WR-scenario is directly supported by the presence of WR stars of both WN and WC types <cit.>. Similarly, the N and C enrichment found in the Lynx arc could also be explained by the WR scenario, and earlier studies have argued for the presence of WR stars from emission line modelling of this peculiar object <cit.>. Is there any direct evidence for WR stars in the N-emitters discussed here? In short, WR stars have been reported only in the low-z galaxy Mrk 996, as mentioned earlier. In the spectral range covered by the observations of , the strongest WR features could be and in the rest-UV and the so-called blue WR-bump centered around . None of these features is detected in the current NIRSpec spectra, and the same holds for GN-z11 <cit.>. However, the JWST spectra of these very high-z objects, and in particular of , are of insufficient spectral resolution and S/N to rule out, e.g., emission with equivalent widths 7-10 Å (depending on the adopted FWHM of the WR line), and therefore stellar populations comparable to those of Mrk 996, which has EW(1640)≈ 3-4 Å, cannot be ruled out from the present data. The rest-UV spectrum of SMACS2031 from <cit.> also shows no clear feature of WR stars. is present with an EW(1640)=0.99 ± 0.1 Å, but it is only marginally broader than the nebular emission lines. The very high-S/N spectrum of the Sunburst cluster, discussed by <cit.>, also shows no signature of WR stars. Except for the nebular lines, the Sunburst spectrum in fact strongly resembles the spectrum of the well-known massive star cluster R136 in the LMC, which is known to be very young (∼ 1.5 Myr) and to host very massive stars with masses up to ∼ 200 <cit.>. The Sunburst cluster also appears to be too young to host WR stars. Finally, <cit.> have suggested the presence of WR stars in the Lynx arc, in particular to explain the observed hard ionizing spectrum, but no direct signatures are detected in the relatively low-S/N spectra available for this object. In conclusion, except for Mrk 996, where the presence of significant populations of WR stars (both of WN and WC types) is established, no direct evidence for WR stars is found in the other N-emitters studied here. However, this does not necessarily exclude the WR-scenario, since WR stars may be present below the detection threshold. §.§.§ Enrichment from super-massive stars (M 1000 ) – SMS scenario An alternative scenario, already invoked by <cit.> to explain the high N-abundance in the compact galaxy GN-z11 at z=10.6, is that of super-massive stars (SMS), which have previously been proposed to explain the abundance anomalies of the multiple stellar populations seen in old Galactic and extra-galactic globular clusters (GC) and in extra-galactic massive star clusters with ages down to ∼ 1.7 Gyr <cit.>. In essence, this model proposes that gas accretion and collisions of proto-stars in the densest clusters lead to the runaway formation of one or several SMS, with masses M 10^3 that increase with the cluster mass. During some time before two-body relaxation heats the cluster, this mostly convective SMS undergoes accretion (from proto-stars in the cluster and infalling gas) and ejects processed matter, whose composition reflects the conditions in its hot H-burning core. Namely, the ejected material is strongly enriched in N, Na, and Al, and strongly depleted in O and C as a result of CNO, NeNa, and MgAl nuclear reactions at high temperature. 
As initially shown by <cit.>, the whole range of abundance anomalies (C-N, O-N, Na-O, Mg-Al anticorrelations) in GC stars is very well accounted for after dilution of the SMS ejecta with proto-GC gas. The constant supply of unprocessed material to the SMS “freezes” its evolution close to the zero-age main sequence, preventing strong He-enrichment of the SMS yields, in agreement with GC multiple band photometry <cit.>. This also solves the so-called “mass budget" problem encountered by all the other scenarios that try to explain the presence and properties of multiple stellar populations in globular clusters <cit.>. For example, <cit.> find that a SMS forming into a dense cluster hosting 10^7 proto-stars can reach and process respectively ∼ 5% and ∼ 45% of the cluster mass. This is significantly higher than the ∼ 2% of wind mass ejected in the massive star scenario (cf. Fig. <ref>). In particular, the super-linear scaling predicted between the amount of material nuclearly processed by the SMS and the cluster mass explains the observed increase of the fraction of second population stars with GC mass <cit.>. This picture is dubbed the “conveyor-belt” SMS model. The high amount of processed matter also implies that any additional matter ejected by the SMS during its final phase (once the conveyer-belt stops) will have very little impact on the final abundance ratios. In Figs. <ref> and <ref> the solid lines show, for three different initial metallicities (0.34 Z_⊙, 0.12 Z_⊙, and 0.018 Z_⊙), the predicted chemical abundance ratios resulting from the mixture of ejecta of 10^4 M_⊙ SMS in the conveyer-belt scenario with different amounts of ISM gas with a normal, initial abundance (stellar models from ). The composition of the SMS ejecta reflects the yields from H-burning via the CNO-cycle. It is very strongly enriched in Nitrogen, with N/O >10, i.e. nearly 100 times super-solar, and very strongly depleted in Oxygen and Carbon. With an increasing fraction of matter from the SMS mixed into the ISM, the predicted N/O and N/C ratios increase strongly. The resulting mixture also shows a decreasing O/H abundance (metallicity) while C/O remains relatively constant. The observed N/O ratio of GN-z11 and can be explained by mixing approximately equal amounts of SMS ejecta with ISM gas, as already shown by <cit.>. The N/O abundance of all other N emitters considered here could also be explained with the SMS scenario. The same is also true for the C/O and N/C abundance ratios, except for the two objects which show a high C/O ratio, Mrk 996 and the Lynx arc. As already mentioned before, C/O in these galaxies reveals the presence of both H- and He-burning products, which, in the case of Mrk 996, is compatible with its observed WR star population. In short, the comparison of the observed N/O, C/O, and N/C ratios suggests that , SMACS2031, and the Sunburst cluster might be explained by the SMS conveyor-belt scenario, implying that they should contain one or several proto-GC, and Mrk 996 and the Lynx arc by the WR scenario. From the available data and the lack of accurate C/O measurements, the case of GN-z11 remains inconclusive. <cit.> have computed the composition of the material ejected through winds along the entire evolution of SMS with masses between 10^3 and 10^5 for 0.1 Z_⊙, neglecting the conveyor belt rejuvenation of the star discussed above (they assume that SMS form through gravitational collapse during the merger of gas-rich galaxies at high-z, see ). 
In addition, they estimate whether and when the SMS become GR unstable as they evolve, as well as the modifications of the composition of the material that can be ejected at the end of the life of the stars in case they eventually explode due to the CNO cycle and the rp (rapid proton capture) process (for details see ). Their 10^3 and 10^4 models – not shown here – predict strong N-enrichment on the main sequence, confirming the results of <cit.> and <cit.>. However, these two models do not become GR unstable and survive until carbon-oxygen burning. As a consequence, their winds reach super-solar C and O abundances because of the dredge-up of C and O from the core during central He-burning, and they are strongly enriched in He. This implies that, without undergoing the conveyor-belt episode that is required to solve the mass budget and the photometric constraints for the GC case, the total yields of such models cannot explain the GC abundance anomalies, nor can they explain the N/O and C/O ratios in CEERS-1019 and in GN-z11, as discussed by <cit.>. On the other hand, <cit.> find that their 5 × 10^4 and 10^5 models at 0.1 Z_⊙ become GR unstable close to or at the end of the main sequence, implying that their winds contain super-solar N and sub-solar C and O before the stars eventually collapse to a black hole or are disrupted by a thermonuclear explosion. The dashed lines in Figs. <ref> and <ref> show the range of abundances expected when the ejecta of their 10^5 model are diluted to various degrees with ISM of initial composition. In addition to the N-enrichment along the main sequence, this includes their estimate of the additional N that is produced during the expected CNO-powered explosion <cit.>. This model accounts well for the observed N/O abundance ratios in , GN-z11, and SMACS2031. It also produces enough enriched material to pollute sufficient ionized gas, i.e. masses in the observed range (see Sect. 4.7), as shown by <cit.>. From this, we conclude that SMS over a wide range of masses can simultaneously explain the GC abundance anomalies and the N/O and C/O ratios in CEERS-1019, GN-z11, and SMACS2031, if they eject large quantities of H-processed material early on the main sequence, as predicted by the conveyor-belt SMS scenario <cit.>, or if the SMS shed large amounts of processed material due to instabilities and an explosion during the CNO-cycle <cit.>. In Sect. <ref> we will further examine whether the N-emitters are proto-GCs, and discuss possible implications of the SMS scenario, including the possible formation of an intermediate-mass black hole (IMBH). §.§.§ Other scenarios to explain strong N emission <cit.> have discussed different processes or sources which could explain the observed N-enhancement in GN-z11, including enrichment from AGB stars, pollution from Pop III star-formation, stellar encounters in dense star clusters, or tidal disruption of stars from encounters with black holes. The main conclusion of their qualitative discussion is that these explanations would need very fine-tuned conditions and that the origin of the N-enrichment is currently largely unknown. The predictions of classical chemical evolution models including AGB stars are shown e.g. in the studies of <cit.>. <cit.> also show predictions of such models in comparison with GN-z11. 
Indeed, as is well known from earlier works, such models cannot produce high N/O abundance ratios at low metallicity (as observed in the N-emitters discussed here), since these models also include the yields of massive stars and core-collapse supernovae, which produce large amounts of oxygen, and hence no extreme N/O ratios. The pure WR-wind models of <cit.> are essentially the same as our massive star models (WR-scenario). <cit.> have recently argued that GN-z11 shows signs of a type 1 AGN, with emission from very high-density gas and a Broad Line Region (BLR). They further argue that the exceptionally high nitrogen abundance “becomes much less problematic” in the AGN scenario, for several reasons. First, they point out that several “nitrogen-loud” AGN have been found, making GN-z11 less peculiar. Second, they mention that only small amounts of enriched gas are needed if the observed gas is at very high densities. Finally, they mention supernovae from supermassive stellar progenitors, rapidly recycled secondary nitrogen production, or bloated atmospheres of giant/supergiant stars as possible sources of the observed enrichment, without providing quantitative estimates. Clearly, the spectra of and the other N-emitters discussed here are very different from nitrogen-loud AGN, as discussed in Sect. <ref>. Furthermore, except for GN-z11, for which <cit.> show indications of densities n_H 10^10 typical of BLRs, the densities inferred here are much lower, typically n ∼ 10^4-5 , and all observed emission line properties are compatible with photoionization from star-formation (Sect. <ref>). The qualitative scenarios sketched by <cit.> for GN-z11 may therefore not be applicable to the other N-emitters discussed here. In any case, more quantitative studies on the detailed chemical abundances of nitrogen-loud AGN and their source of enrichment could be of interest to better understand the common points and differences with other N-emitters. For the Sunburst cluster, <cit.> proposed a model where the super star cluster is surrounded by low- and high-density photoionized clouds and regions (channels) through which ionizing radiation can escape, and they argue that only the high-density clouds in the vicinity of the star cluster are N-enriched and confined by strong external pressure. They estimate that ∼ 500 of nitrogen is needed – an amount which can be produced by the star cluster with a mass of ∼ a few × 10^7 – and suggest that it originates from young massive stars, ejected, e.g., in dense LBV winds or via non-conservative binary mass transfer. SN ejecta are not favored, since the Sunburst is not enriched in C, and the inferred age (4 Myr) is consistent with this explanation. The model of <cit.> is essentially the same as our massive star scenario, although they do not use a specific model to predict the chemical yields of the cluster and its temporal evolution, and our massive star scenario does not include ejecta from mass transfer in binary systems. As already discussed above, such a scenario requires some specific “tuning”, in particular the selection of a fairly specific age at which the composition of the ejecta matches the observed abundances. For the Sunburst cluster this seems very plausible; however, it is not clear if this could be generalized to and the other N-emitters. §.§ Are and other N-emitters proto-GCs in formation or related to the formation of intermediate-mass black holes? 
The unusually high N/O abundances derived for GN-z11 and the Sunburst arc and the similarities with the abundance pattern of stars in globular clusters have led several authors to suggest a link between these peculiar objects and GC formation <cit.>. With the finding of a highly supersolar N/O ratio and normal C/O in , and similar results for other objects from the literature (in total six N-emitters analyzed here), the question of the nature of the N-emitters must be rediscussed in light of this new and additional evidence. We summarize the basic observational evidence and our favoured scenarios/explanations in Table <ref>. First, the observed abundance ratios of N/O and C/O, which are accurately measured for five objects, suggest that two of them (the low-z galaxy Mrk 996 and the Lynx arc) are probably explained by pollution from WR stars, as discussed above. If correct, this implies that the cluster(s) presumably dominating these objects cannot be progenitors of GCs. This is due to the fact that the massive star wind scenario suffers from the so-called mass budget problem of GCs <cit.>, which basically means that the massive stars cannot produce sufficient amounts of enriched material to explain the observed population of “polluted” (second population) stars in GCs without the first (polluting) population being much more massive than the second one, in contradiction with observations. In Mrk 996, WR features are detected, and the presence of WR stars is suspected in the Lynx arc. We therefore suggest that they are somewhat peculiar star-forming galaxies (WR galaxies), although we note that <cit.> have also considered a hidden AGN to explain the emission line properties of the Lynx arc. For , GN-z11, SMACS2031, and the Sunburst cluster, the N/O, C/O, and N/C ratios could be explained by the two scenarios discussed earlier, with the enriched matter originating from normal massive stars or from supermassive stars. We favour the SMS scenario for several reasons. First, the WR scenario requires a very specific and shorter time window than the SMS scenario. Second, these galaxies contain at least one sufficiently massive and compact region (the Sunburst cluster is of course a cluster) with extreme conditions (very high SFR and mass surface density), and unusually high ISM densities. Such rare conditions may be necessary for the formation of supermassive stars through runaway collisions and for the conveyor-belt model, as proposed by <cit.>. This would also naturally explain why N-emitters are rare. We therefore propose that , SMACS2031, and the Sunburst cluster have been enriched by SMS and that they host (or are) proto-GCs in star-forming galaxies. Finally, the finding of such objects at look-back times between 11.2–13.3 Gyr is also compatible with them hosting proto-GCs. The case of GN-z11 may be somewhat different, as it may host an AGN, as suggested by <cit.>. If the high density of the ionized gas (n_e 10^10 ) inferred by these authors is confirmed, it would significantly reduce the amount of ionized gas which needs to be polluted, but it still leaves the source of chemical enrichment unexplained <cit.>. However, this does not exclude pollution from one or several SMS, which might even have seeded the “small” massive black hole (with log(M_ BH/) ∼ 6.2±0.3) or contributed to its growth. Indeed, the final fate of SMS is difficult to predict since, in addition to metallicity and mass, other input parameters of the stellar models (mass loss, convection, overshooting, rotation, etc.) 
may impact the occurrence of the GR instability, its timing, and whether the collapse would trigger an explosion through the CNO-cycle <cit.>. In any case, the formation of IMBHs with masses of ∼ 10^4 to 10^5 from SMS seems possible at metallicities comparable to that of GN-z11, as shown e.g. by <cit.>. We therefore propose that N-emitters could also be an indication of black hole seed formation from SMS. Such objects could then evolve into N-loud quasars, a rare sub-population of quasars showing strong N lines in the UV <cit.>, which have been suggested to be objects with high N/O and sub-solar metallicities in a rapid growth phase <cit.>. We therefore mark GN-z11 as a possible AGN with BH formation related to SMS in Table <ref>. We also note that the formation of an IMBH with mass 1000 from an SMS is incompatible with the proto-GC scenario, as the presence of such a BH in old GCs seems to be ruled out observationally <cit.>. This is also reflected in Table <ref>. Finally, we wish to remind the reader that <cit.> suggested that also hosts a black hole, although our analysis does not show significant evidence for this and suggests that the object is dominated by star-formation (see Sect. <ref>). If harbours an AGN, the situation could be similar to that of GN-z11, just discussed, and would point to a possible link between SMS and black hole formation. Also, we note that <cit.> have considered a hidden AGN to explain the emission line properties of the Lynx arc, although the nature of this source remains unclear. To conclude, we also recall that none of the four other N-emitters discussed here shows any AGN indication. We are therefore probably left with three good candidates for SMS and proto-GCs: , SMACS2031, and the Sunburst cluster. §.§ Future steps and improvements Clearly, better data and more N-emitters would be helpful to better understand the origin of the strong N emission lines, to further test the proposed enrichment scenarios and the possible presence of SMS, and thus to understand the nature of these rare objects. An important test for the massive star scenario would be to detect direct spectral signatures of WR stars. Deeper, very high-S/N spectra in the rest-optical domain would be ideal for this. The massive star scenario also predicts significant amounts of helium in the ejecta, which might be measurable from the analysis of nebular He and H emission lines in rest-optical spectra of sufficient quality. In the SMS scenario, a strong enrichment of aluminum, originating from H-burning via the MgAl chain <cit.>, is predicted (Ramirez-Galeano, in prep.), as observed in GC stars <cit.>. In contrast, massive stars should produce less aluminum <cit.>. Aluminum has spectral signatures in the rest-UV (Al ii λ1670, Al iii λλ1855,1863), which are often seen in absorption in high-z galaxy spectra <cit.>, and which are in emission in some AGNs <cit.>. These features might provide an interesting test of the relation between N-emitters and proto-GCs, and could help distinguish between the WR and SMS scenarios. To examine whether the strong N lines could be related to large density variations and are found preferentially in pockets of high density, it will be of interest to obtain multiple density measurements probing the widest possible range of densities, regions of different ionization, and possibly also spatial variations. Both high-S/N and high-resolution spectra are needed for this, and measurements of fine-structure lines of oxygen and nitrogen with ALMA could also provide insights into this question. 
Future studies may reveal new N-emitters, improving the statistics and providing more test cases. If strongly enhanced N-emitters are found at significantly lower metallicities (say 7), the SMS scenario might be favored, since WR stars should be rarer at low O/H. Also, objects with even higher N/O abundances could exist, if the SMS scenario is correct. § CONCLUSION In this work, we have presented a detailed analysis of at z=8.678 using deep spectroscopy and imaging with NIRSpec and NIRCam obtained from the JWST CEERS program. Low- and medium-resolution NIRSpec spectra covering 1-5μm reveal a wealth of rest-frame UV and optical nebular emission lines of various transitions and ionization states from H, He, C, N, O, and Ne. In particular, shows remarkably intense Nitrogen emission of N iii and N iv, with N iv] λ1486 emerging as the strongest line within the rest-frame UV spectrum. These emission lines are very rarely seen in galaxy spectra, and – which shows some resemblance to the peculiar object GN-z11 revealed recently by JWST <cit.> – is thus the second “N-emitter” found at z>8. From the analysis of these data, we arrive at the following main results: * Using the well-detected auroral [O iii] λ4363 line we determined the O/H abundance using the direct method, resulting in = 7.70 ± 0.18. We derived the electron temperature from both rest-frame UV and optical [O iii] lines, yielding consistent values of T_e≈ 18000 K. The density-sensitive line ratios N iv] 1483/1487 = 0.50± 0.22 and [O ii] 3727/3729=0.98±0.23 suggest a relatively high electron density of n_e≈ 10^3-5 cm^-3. These values are consistent with those reported by other studies for this object <cit.>. * Metal abundances were derived for different ions of C, N, O, and Ne. Notably, we found an exceptionally high N/O abundance of log(N/O)=-0.13±0.11, approximately 5.6 times the solar ratio. Conversely, exhibits relatively normal C/O and Ne/O ratios for its metallicity (O/H), with log(C/O)=-0.75± 0.11 and log(Ne/O)=-0.63±0.07, respectively. This translates to high N/O and N/C, and normal C/O ratios, typically found in globular cluster stars, which reflect the abundance ratios from H-burning via the CNO-cycle at very high temperature <cit.>. * We have discussed possible chemical enrichment scenarios to explain these peculiar C, N, and O abundance ratios observed in . Enrichment from massive star winds through the WR phase can explain the observed ratios but requires a very short and specific time window (and the presence of WN stars only); it would also come with a very strong He enrichment. Furthermore, no signatures of WR stars are detected in , although their presence cannot be ruled out from the available data. Alternatively, models of super-massive stars (>1000 M_⊙) whose ejecta are mixed with ISM of normal composition can explain the abundance ratios of . In this scenario, the processed material ejected by the SMS exhibits H-burning products only, strongly enriched in N, with possibly some depletion in O and C, and a normal He content. * We have investigated the possibility of an AGN in , a scenario recently suggested by <cit.> due to the detection of a broad component in Hβ. Our own reduction of the NIRSpec spectrum shows a tentative broad component in Hβ (FWHM≃ 1150 km s^-1), but detected with fairly low significance (≃ 2.2 σ). 
Line ratios using rest-UV lines (N v, N iv], C iv, C iii], O iii], and He ii) suggest that the gas is primarily photoionized by star formation, and any contribution from an AGN would likely be residual. The non-detection of the high-ionization lines N v λ 1240 and He ii λ 1640 further supports this scenario. * shows a complex morphology with three resolved clumps. By analyzing the light distribution of these substructures, we found very compact morphologies with characteristic half-light radii of ≃ 100-150 pc. Multi-wavelength SED fits for each individual clump predict stellar masses of log(M_⋆/M_⊙)≃ 8.66-8.94, resulting in very high stellar mass surface densities log(Σ_M_⋆/(M_⊙ pc^-2)) ≃ 3.55-4.14. The star formation rate appears very intense in two clumps (SFR ≃ 80-150 M_⊙ yr^-1), while the remaining clump displays a negligible level of ongoing star formation. thus represents the second example of a rare population of strong N-emitting galaxies at z>8 with highly super-solar N/O abundances, very compact regions, and a high-density ISM. To put this object into context and better understand these N-emitters, we have (re-)analyzed other known N-emitting star-forming galaxies from the literature. This includes three lensed objects, namely two galaxies (SMACS2031 and the Lynx arc) and one star cluster (the Sunburst cluster) at z ∼ 2.3-3.5, plus a nearby blue compact dwarf galaxy (Mrk 996), all of them without any clear indications of AGN activity. Similar to , these sources show peculiar abundance ratios with a supersolar N/O ratio, along with very dense clustered mass and star formation (log(Σ_M_⋆/(M_⊙ pc^-2)) ≳ 3.5) and high ISM densities (n_e ∼ 10^4-10^5 ). Two galaxies, Mrk 996 and the Lynx arc, show an enhanced C/O ratio compared to normal galaxies at the same metallicity (O/H), indicative of enrichment from WR stars. We have also presented quantitative predictions for the chemical enrichment in two different scenarios, including enrichment from winds of massive stars (called the WR-scenario) or from ejecta of supermassive stars (SMS) with masses 10^3-10^5 , which have been invoked to explain the abundance anomalies observed in present-day globular clusters <cit.>. The WR scenario explains well the two galaxies with enhanced C/O and is supported by direct evidence of WN and WC stars in Mrk 996. As already found by <cit.> for GN-z11, we found that the SMS scenario reproduces well the observed abundance ratios in , SMACS2031, and the Sunburst cluster. These observations probably provide the best indirect evidence so far for the possible existence of SMS in galaxies. Finally, considering the preferred enrichment scenarios and other physical properties, we have also examined which of the N-emitters could host proto-GCs and what their nature is. From our analysis we concluded that , SMACS2031, and the Sunburst cluster most likely host proto-GCs. We also suggested that the peculiar abundances of GN-z11 could be due to SMS, even if this object were confirmed to host an AGN, as proposed by <cit.>. This could also point to the formation of intermediate-mass black holes from SMS and suggest a link between the N-emitters and N-loud quasars. In short, the newly discovered N-emitter and the other N-emitters show tantalizing similarities with stars in GCs and the conditions expected during the formation of GCs. They may also offer a unique window into the formation of SMS, their role during the formation of GCs, and their possible importance as seeds for the formation of massive black holes. 
More detailed studies and further discoveries of these rare objects will shed further light on these exciting topics and questions. We thank Lise Christensen and Johan Richard for sharing spectra from their VLT observations of SMACS2031. We also thank Mark Gieles, Eros Vanzella, Laura Ramirez Galeano, Anastasios Fragos, Holger Baumgardt, Montse Villar-Martin and other colleagues for stimulating discussions. CC acknowledges support from the Swiss National Science Foundation (SNF; Project 200020-192039). M.M. acknowledges the support of the Swedish Research Council, Vetenskapsrådet (internationell postdok grant 2019-00502). Y.I. acknowledges support from the National Academy of Sciences of Ukraine (Project No. 0123U102248) and from the Simons Foundation.
http://arxiv.org/abs/2307.04707v2
20230710170651
Asymptotic Complexity Estimates for Probabilistic Programs and their VASS Abstractions
[ "Michal Ajdarów", "Antonín Kučera" ]
cs.FL
[ "cs.FL" ]
The standard approach to analyzing the asymptotic complexity of probabilistic programs is based on studying the asymptotic growth of certain expected values (such as the expected termination time) for increasing input size. We argue that this approach is not sufficiently robust, especially in situations when the expectations are infinite. We propose new estimates for the asymptotic analysis of probabilistic programs with non-deterministic choice that overcome this deficiency. Furthermore, we show how to efficiently compute/analyze these estimates for selected classes of programs represented as Markov decision processes over vector addition systems with states. § INTRODUCTION Vector Addition Systems with States (VASS) <cit.> are a model for discrete systems with multiple unbounded resources expressively equivalent to Petri nets <cit.>. Intuitively, a VASS with d ≥ 1 counters is a finite directed graph where the transitions are labeled by d-dimensional vectors of integers representing counter updates. A computation starts in some state for some initial vector of non-negative counter values and proceeds by selecting transitions non-deterministically and performing the associated counter updates. Since the counters cannot assume negative values, transitions that would decrease some counter below zero are disabled. In program analysis, VASS are used as abstractions for programs operating over unbounded integer variables. Input parameters are represented by initial counter values, and more complicated arithmetical functions, such as multiplication, are modeled by VASS gadgets computing these functions in a weak sense (see, e.g., <cit.>). Branching constructs, such as if-then-else, are usually replaced with non-deterministic choice. VASS are particularly useful for evaluating the asymptotic complexity of infinite-state programs, i.e., the dependency of the running time (and other complexity measures) on the size of the program input <cit.>. Traditional VASS decision problems such as reachability, liveness, or boundedness are computationally hard <cit.>, and other verification problems such as equivalence-checking <cit.> or model-checking <cit.> are even undecidable. In contrast to this, decision problems related to the asymptotic growth of VASS complexity measures are solvable with low complexity and sometimes even in polynomial time <cit.>; see <cit.> for a recent overview. The existing results about VASS asymptotic analysis are applicable to programs with non-determinism (in demonic or angelic form, see <cit.>), but cannot be used to analyze the complexity of probabilistic programs. This motivates the study of Markov decision processes over VASS (VASS MDPs) with both non-deterministic and probabilistic states, where transitions in probabilistic states are selected according to fixed probability distributions. Here, the problems of asymptotic complexity analysis become even more challenging because VASS MDPs subsume infinite-state stochastic models that are notoriously hard to analyze. So far, the only existing result about asymptotic VASS MDP analysis is <cit.>, where the linearity of expected termination time is shown decidable in polynomial time for VASS MDPs with DAG-like MEC decomposition. 
Our Contribution: We study the problems of asymptotic complexity analysis for probabilistic programs and their VASS abstractions. For non-deterministic programs, termination complexity is a function _max assigning to every n ∈ the length of the longest computation initiated in a configuration with each counter set to n. A natural way of generalizing this concept to probabilistic programs is to define a function _ such that _(n) is the maximal expected length of a computation initiated in a configuration of size n, where the maximum is taken over all strategies resolving non-determinism. The same approach is applicable to other complexity measures. We show that this natural idea is generally inappropriate, especially in situations when _(n) is infinite for a sufficiently large n. By “inappropriate” we mean that this form of asymptotic analysis can be misleading. For example, if _(n) = ∞ for all n ≥ 1, one may conclude that the computation takes a very long time independently of n. However, this is not necessarily the case, as demonstrated in a simple example of Fig. <ref> (we refer to Section <ref> for a detailed discussion). Therefore, we propose new notions of lower/upper/tight complexity estimates and demonstrate their advantages over the expected values. These notions can be adapted to other models of probabilistic programs, and constitute the main conceptual contribution of our work. Then, we concentrate on algorithmic properties of the complexity estimates in the setting of VASS MDPs. Our first result concerns counter complexity. We show that for every VASS MDP with DAG-like MEC decomposition and every counter c, there are only two possibilities: * The function n is a tight estimate of the asymptotic growth of the maximal c-counter value assumed along a computation initiated in a configuration of size n. * The function n^2 is a lower estimate of the asymptotic growth of the maximal c-counter value assumed along a computation initiated in a configuration of size n. Furthermore, it is decidable in polynomial time which of these alternatives holds. Since the termination and transition complexities can be easily encoded as the counter complexity for a fresh “step counter”, the above result immediately extends also to these complexities. To some extent, this result can be seen as a generalization of the result about termination complexity presented in <cit.>. See Section <ref> for more details. Our next result is a full classification of asymptotic complexity for one-dimensional VASS MDPs. We show that for every one-dimensional VASS MDP * the counter complexity is either unbounded or n is a tight estimate; * termination complexity is either unbounded or one of the functions n, n^2 is a tight estimate. * transition complexity is either unbounded, or bounded by a constant, or one of the functions n, n^2 is a tight estimate. Furthermore, it is decidable in polynomial time which of the above cases hold. Since the complexity of the considered problems remains low, the results are encouraging. On the other hand, they require non-trivial insights, indicating that establishing a full and effective classification of the asymptotic complexity of multi-dimensional VASS MDPs is a challenging problem. § PRELIMINARIES We use , , , and to denote the sets of non-negative integers, integers, rational numbers, and real numbers. 
Given a function f →, we use O(f) and Ω(f) to denote the sets of all g → such that g(n) ≤ a · f(n) and g(n) ≥ b · f(n) for all sufficiently large n ∈, where a,b are some positive constants. If h ∈ O(f) and h ∈Ω(f), we write h ∈Θ(f). Let A be a finite index set. The vectors of ^A are denoted by bold letters such as ,,,…. The component of of index i∈ A is denoted by (i). If the index set is of the form A={1,2,…,d} for some positive integer d, we write ^d instead of ^A. For every n ∈, we use to denote the constant vector where all components are equal to n. The other standard operations and relations on such as +, ≤, or < are extended to ^d in the component-wise way. In particular, < if (i) < (i) for every index i. A probability distribution over a finite set A is a vector ν∈ [0,1]^A such that ∑_a ∈ Aν(a) = 1. We say that ν is rational if every ν(a) is rational, and Dirac if ν(a) =1 for some a ∈ A. §.§ VASS Markov Decision Processes Let d ≥ 1. A d-dimensional VASS MDP is a tuple = Q, (Q_n,Q_p),T,P, where * Q ≠∅ is a finite set of states split into two disjoint subsets Q_n and Q_p of nondeterministic and probabilistic states, * T ⊆ Q ×^d× Q is a finite set of transitions such that, for every p ∈ Q, the set (p) ⊆ T of all transitions of the form (p,,q) is non-empty. * P is a function assigning to each t ∈(p) where p ∈ Q_p a positive rational probability so that ∑_t ∈ T(p) P(t) =1. The encoding size of is denoted by , where the integers representing counter updates are written in binary and probability values are written as fractions of binary numbers. For every p ∈ Q, we use (p) ⊆ T to denote the set of all transitions of the form (q,,p). The update vector of a transition t = (p,,q) is also denoted by _t. A finite path in of length n ≥ 0 is a finite sequence of the form p_0,_1,p_1,_2,…,_n,p_n where (p_i,_i+1,p_i+1) ∈ T for all i<n. We use (α) to denote the length of α. If there is a finite path from p to q, we say that q is reachable from p. An infinite path in is an infinite sequence π = p_0,_1,p_1,_2,… such that every finite prefix of π ending in a state is a finite path in . A strategy is a function σ assigning to every finite path p_0,_1,…,p_n such that p_n ∈ Q_n a probability distribution over (p_n). A strategy is Markovian (M) if it depends only on the last state p_n, and deterministic (D) if it always returns a Dirac distribution. The set of all strategies is denoted by Σ_, or just Σ when is understood. Every initial state p ∈ Q and every strategy σ determine the probability space over infinite paths initiated in p in the standard way. We use ^σ_p to denote the associated probability measure. A configuration of is a pair p, where p ∈ Q and ∈^d. If some component of is negative, then p is terminal. The set of all configurations of is denoted by (). Every infinite path p_0,_1,p_1,_2,… and every initial vector ∈^d determine the corresponding computation of , i.e., the sequence of configurations p_0_0, p_1 _1, p_2 _2,… such that _0 = and _i+1 = _i + _i+1. Let (π) be the least j such that p_j_j is terminal. If there is no such j, we put (π) = ∞ . Note that every computation uniquely determines its underlying infinite path. We define the probability space over all computations initiated in a given p, where the underlying probability measure ^σ_p is obtained from _p^σ in an obvious way. For a measurable function X over computations, we use ^σ_p [X] to denote the expected value of X. 
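Before turning to the complexity measures, the operational semantics defined above can be summarized in a few lines of code. The sketch below is one possible Python encoding of a VASS MDP and of a single computation step; it is an illustration of the definitions, not code from the paper, and it only handles memoryless deterministic strategies (general strategies may depend on the whole path and may randomize).

import random
from dataclasses import dataclass
from typing import Dict, List, Tuple

Vector = Tuple[int, ...]
# a transition is (update vector, target state, probability); the probability
# entry only matters for transitions leaving probabilistic states
Transition = Tuple[Vector, str, float]

@dataclass
class VassMdp:
    nondet_states: frozenset              # Q_n
    prob_states: frozenset                # Q_p
    out: Dict[str, List[Transition]]      # Out(p), assumed non-empty for every p

def step(mdp: VassMdp, p: str, v: Vector, choose) -> Tuple[str, Vector, bool]:
    """One step from configuration (p, v).  'choose(p, v)' returns an index into
    mdp.out[p] and resolves non-determinism.  Returns (q, w, terminal), where
    terminal is True iff some counter of w became negative."""
    options = mdp.out[p]
    if p in mdp.nondet_states:
        u, q, _ = options[choose(p, v)]
    else:                                  # probabilistic state: sample Out(p)
        r, acc = random.random(), 0.0
        for u, q, prob in options:
            acc += prob
            if r <= acc:
                break
    w = tuple(x + du for x, du in zip(v, u))
    return q, w, any(x < 0 for x in w)

# example: a one-counter VASS MDP with a single probabilistic state that adds
# +1 or -1 with probability 1/2 each (cf. the motivating example in the next section)
walk = VassMdp(frozenset(), frozenset({"p"}),
               {"p": [((+1,), "p", 0.5), ((-1,), "p", 0.5)]})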
§ ASYMPTOTIC COMPLEXITY MEASURES FOR VASS MDPS In this section, we introduce asymptotic complexity estimates applicable to probabilistic programs with non-determinism and their abstract models (such as VASS MDPs). We also explain their relationship to the standard measures based on the expected values of relevant random variables. Let us start with a simple motivating example. Consider the simple probabilistic program of Fig. <ref>. The program inputs a positive integer N and then repeatedly increments/decrements N with probability 0.5 until N=0. One can easily show that for every N ≥ 1, the program terminates with probability one, and the expected termination time is infinite. Based on this, one may conclude that the execution takes a very long time, independently of the initial value of N. However, this conclusion is not consistent with practical experience gained from trial runs[For N=1, about 95% of trial runs terminate after at most 1000 iterations of the repeat-until loop. For N=10, only about 75% of all runs terminate after at most 1000 iterations, but about 90% of them terminate after at most 10000 iterations.]. The program tends to terminate “relatively quickly” for small N, and the termination time does depend on N. Hence, the function assigning ∞ to every N ≥ 1 is not a faithful characterization of the asymptotic growth of termination time. We propose an alternative characterization based on the following observations[Formal proofs of these observations are simple; in Section <ref>, we give a full classification of the asymptotic behaviour of one-dimensional VASS MDPs subsuming the trivial example of Fig. <ref>.]: * For every ε >0, the probability of all runs terminating after more than n^2+ε steps (where n is the initial value of N) approaches zero as n →∞. * For every ε >0, the probability of all runs terminating after more than n^2-ε steps (where n is the initial value of N) approaches one as n →∞. Since the execution time is “squeezed” between n^2-ε and n^2+ε for an arbitrarily small ε > 0 as n →∞, it can be characterized as “asymptotically quadratic”. This analysis is in accordance with experimental outcomes. §.§ Complexity of VASS Runs We recall the complexity measures for VASS runs used in previous works <cit.>. These functions can be seen as variants of the standard time/space complexities for Turing machines. Let = Q, (Q_n,Q_p),T,P be a d-dimensional VASS MDP, c ∈{1,…,d}, and t ∈ T. For every computation π = p_0 _0,p_1 _1,p_2 _2,…, we put (π) = (π) [c](π) = sup{_i(c) | 0 ≤ i < (π)} [t](π) = We refer to the functions , [c], and [t] as termination, c-counter, and t-transition complexity, respectively. Let be one of the complexity functions defined above. In VASS abstractions of computer programs, the input is represented by initial counter values, and the input size corresponds to the maximal initial counter value. The existing works on non-probabilistic VASS concentrate on analyzing the asymptotic growth of the functions _max : →_∞ where _max(n) = max{(π) |π} For VASS MDP, we can generalize _max into _ as follows: _(n) = max{_p^σ[] |σ∈Σ_, p ∈ Q} Note that for non-probabilistic VASS, the values of _max(n) and _(n) are the same. However, the function _ suffers from the deficiency illustrated in the motivating example at the beginning of Section <ref>. To see this, consider the one-dimensional VASS MDP modeling the simple probabilistic program (see Fig. <ref>). For every n ≥ 1 and the only (trivial) strategy σ, we have that ^σ_p [ < ∞] = 1 and _(n) = ∞. 
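A quick Monte Carlo sketch of such trial runs (ours, not part of the paper; the trial count and the truncation cap are arbitrary illustrative choices) makes this concrete: the sample median of the termination time scales roughly like n^2, in line with the observations above, even though the expectation is infinite.

import random

def run_once(n, cap=10**6):
    """One trial of the program of Fig. <ref>: N := n, then N := N±1 with
    probability 1/2 each until N = 0.  Truncated at 'cap' steps so that the rare
    extremely long runs keep the experiment finite (this barely affects medians)."""
    steps = 0
    while n > 0 and steps < cap:
        n += 1 if random.random() < 0.5 else -1
        steps += 1
    return steps

def median_time(n, trials=200):
    times = sorted(run_once(n) for _ in range(trials))
    return times[trials // 2]

for n in (1, 2, 4, 8, 16):
    m = median_time(n)
    print(n, m, round(m / n**2, 1))   # the last column stays roughly constant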
However, the practical experience with trial runs of is the same as with the original probabilistic program (see above). §.§ Asymptotic Complexity Estimates In this section, we introduce asymptotic complexity estimates allowing for a precise analysis of the asymptotic growth of the termination, c-counter, and t-transition complexity, especially when their expected values are infinite for a sufficiently large input. For the sake of readability, we first present a simplified variant applicable to strongly connected VASS MDPs. Let be one of the complexity functions for VASS computations defined in Section <ref>, and let f : →. We say that f is a tight estimate of if, for arbitrarily small ε > 0, the value of (n) is “squeezed” between f^1-ε(n) and f^1+ε(n) as n →∞. More precisely, for every ε > 0, * there exist p ∈ Q and strategies σ_1,σ_2,… such that lim inf_n →∞ ^σ_n_p [≥ (f(n))^1-ε] = 1; * for all p ∈ Q and strategies σ_1,σ_2,… we have that lim sup_n →∞ ^σ_n_p [≥ (f(n))^1+ε] = 0. The above definition is adequate for strongly connected VASS MDPs because tight estimates tend to exist in this subclass. Despite some effort, we have not managed to construct an example of a strongly connected VASS MDP where an with some upper polynomial estimate does not have a tight estimate (see Conjecture <ref>). However, if the underlying graph of is not strongly connected, then the asymptotic growth of can differ for computations visiting a different sequence of maximal end components (MECs) of , and the asymptotic growth of can be “squeezed” between f^1-ε(n) and f^1+ε(n) only for the subset of computations visiting the same sequence of MECs. This explains why we need a more general definition of complexity estimates presented below. An end component (EC) of is a pair (C,L) where C ⊆ Q and L ⊆ T such that the following conditions are satisfied: * C ≠∅; * if p ∈ C ∩ Q_n, then at least one outgoing transition of p belongs to L; * if p ∈ C ∩ Q_p, then all outgoing transitions of p belong to L; * if (p,,q) ∈ L, then p,q ∈ C; * for all p,q ∈ C we have that q is reachable from p and vice versa. Note that if (C,L) and (C',L') are ECs such that C ∩ C' ≠∅, then (C∪ C', L ∪ L') is also an EC. Hence, every p∈ Q either belongs to a unique maximal end component (MEC), or does not belong to any EC. Also observe that each MEC can be seen as a strongly connected VASS MDP. We say that has DAG-like MEC decomposition if for every pair M,M' of different MECs such that the states of M' are reachable from the states of M we have that the states of M are not reachable from the states of M'. For every infinite path π of , let (π) be the unique sequence of MECs visited by π. Observe that (π) disregards the states that do not belong to any EC; intuitively, this is because the transitions executed in such states do not influence the asymptotic growth of . Observe that the length of (π), denoted by 𝑙𝑒𝑛((π)), can be finite or infinite. The first possibility corresponds to the situation when an infinite suffix of π stays within the same MEC. Furthermore, for all σ∈Σ and p ∈ Q, we have that ^σ_p[𝑙𝑒𝑛() = ∞] =0, and the probability ^σ_p[𝑙𝑒𝑛() ≥ k] decays exponentially in k (these folklore results are easy to prove). All of these notions are lifted to computations in an obvious way. 
Observe that if a strategy σ aims at maximizing the growth of , we can safely assume that σ eventually stays in a bottom MEC that cannot be exited (intuitively, σ can always move from a non-bottom MEC to a bottom MEC by executing a few extra transitions that do not influence the asymptotic growth of , and the bottom MEC may allow increasing even further). On the other hand, the maximal asymptotic growth of may be achievable along some “minimal” sequence of MECs, and this information is certainly relevant for understanding the behaviour of a given probabilistic program. This leads to the following definition: A type is a finite sequence β of MECs such that (π) = β for some infinite path π. We say that f is a lower estimate of for a type β if for every ε > 0 there exist p ∈ Q and a sequence of strategies σ_1,σ_2,… such that ^σ_n_p[ = β] > 0 for all n ≥ 1 and lim inf_n →∞ ^σ_n_p [≥ (f(n))^1-ε|=β] = 1 . Similarly, we say that f is an upper estimate of for a type β if for every ε>0, every p ∈ Q, and every sequence of strategies σ_1,σ_2,… such that ^σ_n_p[ = β] > 0 for all n ≥ 1 we have that lim sup_n →∞ ^σ_n_p [≥ (f(n))^1+ε|=β] = 0 If there is no upper estimate of for a type β, we say that is unbounded for β. Finally, we say that f is a tight estimate of for β if it is both a lower estimate and an upper estimate of for β. Let us note that in the subclass of non-probabilistic VASS, MECs become strongly connected components (SCCs), and types correspond to paths in the directed acyclic graph of SCCs. Each such path determines the corresponding asymptotic increase of , as demonstrated in <cit.>. We conjecture that types play a similar role for VASS MDPs. More precisely, we conjecture the following: If some polynomial is an upper estimate of for β, then there exists a tight estimate f of for β. Even if Conjecture <ref> is proven wrong, there are interesting subclasses of VASS MDPs where it holds, as demonstrated in subsequent sections. For every pair of MECs M,M', let P(M,M') be the maximal probability (achievable by some strategy) of reaching a state of M' from a state of M in without passing through a state of some other MEC M”. Note that P(M,M') is efficiently computable by standard methods for finite-state MDPs. The weight of a given type β = M_1,…,M_k is defined as (β) = ∏_i=1^k-1 P(M_i,M_i+1). Intuitively, (β) corresponds to the maximal probability of “enforcing” the asymptotic growth of according to the tight estimate f of for β achievable by some strategy. Generally, higher asymptotic growth of may be achievable for types with smaller weights. Consider the following example to understand better the types, their weights, and the associated tight estimates. Let be the VASS MDP of Fig. <ref>. There are four MECs M_1,M_2,M_3,M_4 where M_2,M_3,M_4 are bottom MECs. Hence, there are four types of length one and three types of length two. Let us examine the types of length two initiated in M_1 for ≡[c] where c is the third counter. Note that in M_1, the first counter is repeatedly incremented/decremented with the same probability 1/2. The second counter “counts” these transitions and thus it is “pumped” to a quadratic value (cf. the VASS MDP of Fig. <ref>). Then, a strategy may decide to move to M_2, where the value of the second counter is transferred to the third counter. Hence, n^2 is the tight estimate of [c] for the type M_1,M_2, and (M_1,M_2) = 1. Alternatively, a strategy may decide to move to the probabilistic state q. 
Then, either M_3 or M_4 is entered with the same probability 1/2, which implies (M_1,M_3) = (M_1,M_4) = 1/2. In M_3, the third counter is unchanged, and hence n is the tight estimate of [c] for the type M_1,M_3. However, in M_4, the second counter previously pumped to a quadratic value is repeatedly incremented/decremented with the same probability 1/2, and the third counter “counts” these transitions. This means that n^4 is a tight estimate of [c] for the type M_1,M_4. This analysis provides detailed information about the asymptotic growth of [c] in . Every type shows “how” the growth specified by the corresponding tight estimate is achievable, and its weight corresponds to the “maximal achievable probability of this growth”. This information is completely lost when analyzing the maximal expected value of [c] for computations initiated in configurations p where p is a state of M_1, because these expectations are infinite for all n ≥ 1. Finally, let us clarify the relationship between the lower/upper estimates of and the asymptotic growth of _. The following observation is easy to prove. If _∈ O(f) where f : → is an unbounded function, then f is an upper estimate of for every type. Furthermore, if f : → is a lower estimate of for some type, then _∈Ω(f^1-ϵ) for each ϵ>0. However, if _∈Ω(f) where f : →, then f is not necessarily a lower estimate of for some type. Observation <ref> shows that complexity estimates are generally more informative than the asymptotics of _ even if _∈Θ(f) for some “reasonable” function f. For example, it may happen that there are only two types β_1 and β_2 where n and n^3 are tight estimates of for β_1 and β_2 with weights 0.99 and 0.01, respectively. In this case, _∈Θ(n^3), although the termination time is linear for 99% of computations. § A DICHOTOMY BETWEEN LINEAR AND QUADRATIC ESTIMATES In this section, we prove the following result: Let be a VASS MDP with DAG-like MEC decomposition and one of the complexity functions , [c], or [t]. For every type β, we have that either n is a tight estimate of for β, or n^2 is a lower estimate of for β. It is decidable in polynomial time which of the two cases holds. Theorem <ref> can be seen as a generalization of the linear/quadratic dichotomy results previously achieved for non-deterministic VASS <cit.> and for the termination complexity in VASS MDPs <cit.>. It suffices to prove Theorem <ref> for the counter complexity. The corresponding results for the termination and transition complexities then follow as simple consequences. To see this, observe that we can extend a given VASS MDP with a fresh “step counter” sc that is incremented by every transition (in the case of ) or the transition t (in the case of [t]) and thus “emulate” and [t] as [sc]. We first consider the case when is strongly connected and then generalize the obtained results to VASS MDPs with DAG-like MEC decomposition. So, let be a strongly connected d-dimensional VASS MDP and c a counter of . The starting point of our analysis is the dual constraint system designed in <cit.> for non-probabilistic strongly connected VASS. We generalize this system to strongly connected VASS MDPs in the way shown in Figure <ref> (the original system of <cit.> can be recovered by disregarding the probabilistic states). Note that solutions of both (I) and (II) are closed under addition. Therefore, both (I) and (II) have solutions maximizing the specified objectives, computable in polynomial time. 
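Such maximal solutions can be computed with standard LP solvers. As a rough illustration (a sketch of ours, not the actual system of Figure <ref>, which also handles probabilistic states and a more refined objective), the following restricts attention to non-probabilistic VASS and a single linear objective: it searches for a weighted multicycle, encoded as a circulation over the transitions, with non-negative total effect on every counter and maximal effect on a chosen counter. SciPy is assumed to be available.

import numpy as np
from scipy.optimize import linprog

def best_multicycle_effect(states, transitions, counter):
    """states: list of control states; transitions: list of (p, update, q) for a
    non-probabilistic VASS; counter: index of the counter whose effect is maximized.
    Finds weights x >= 0 forming a circulation (flow-conserving at every state,
    total weight normalized to 1) such that the total effect sum_t x_t * u_t is
    non-negative on every counter and maximal on 'counter'.  Illustrative only;
    returns None if no weighted cycle exists."""
    m, d = len(transitions), len(transitions[0][1])
    U = np.array([u for _, u, _ in transitions], dtype=float)      # m x d updates
    A_eq = np.zeros((len(states) + 1, m))
    for j, (p, _, q) in enumerate(transitions):
        A_eq[states.index(p), j] -= 1.0                            # outgoing flow
        A_eq[states.index(q), j] += 1.0                            # incoming flow
    A_eq[-1, :] = 1.0                                              # total weight = 1
    b_eq = np.zeros(len(states) + 1); b_eq[-1] = 1.0
    # total effect >= 0 on every counter, written as -U^T x <= 0
    res = linprog(-U[:, counter], A_ub=-U.T, b_ub=np.zeros(d),
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return (-res.fun, res.x) if res.success else None

If the optimum is strictly positive, the non-probabilistic VASS admits a multicycle pumping the chosen counter, which is the kind of per-counter and per-transition information the full system (I) extracts for VASS MDPs.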
For clarity, let us first discuss an intuitive interpretation of these solutions, starting with simplified variants obtained for non-probabilistic VASS in <cit.>. In the non-probabilistic case, a solution of (I) can be interpreted as a weighted multicycle, i.e., as a collection of cycles M_1,…, M_k together with weights a_1,… ,a_k such that the total effect of the multicycle, defined by ∑_i=1^k a_i ·𝑒𝑓𝑓𝑒𝑐𝑡(M_i), is non-negative for every counter. Here, 𝑒𝑓𝑓𝑒𝑐𝑡(M_i) is the effect of M_i on the counters. The objective of (I) ensures that the multicycle includes as many transitions as possible, and the total effect of the multicycle is positive on as many counters as possible. For VASS MDPs, the M_1,…, M_k should not be interpreted as cycles but as Markovian strategies for some ECs, and 𝑒𝑓𝑓𝑒𝑐𝑡(M_i) corresponds to the vector of expected counter changes per transition in M_i. The objective of (I) then maximizes the number of transitions used in the strategies M_1,…, M_k, and the number of counters where the expected effect of the “multicycle” is positive. A solution of (II) for non-probabilistic VASS can be interpreted as a ranking function for configurations defined by 𝑟𝑎𝑛𝑘(p)=(p)+∑_i=1^d(i)(i), such that the value of 𝑟𝑎𝑛𝑘 cannot increase when moving from a configuration p to a configuration q using a transition t=(p,-,q). The objective of (II) ensures that as many transitions as possible decrease the value of 𝑟𝑎𝑛𝑘, and 𝑟𝑎𝑛𝑘 depends on as many counters as possible. For VASS MDPs, this interpretation changes only for the outgoing transitions t=(p,,q) of probabilistic states. Instead of considering the change of 𝑟𝑎𝑛𝑘 caused by such t, we now consider the expected change of 𝑟𝑎𝑛𝑘 caused by executing a step from p. The objective ensures that 𝑟𝑎𝑛𝑘 depends on as many counters as possible, the value of 𝑟𝑎𝑛𝑘 is decreased by as many outgoing transitions of non-deterministic states as possible, and the expected change of 𝑟𝑎𝑛𝑘 caused by performing an step is negative in as many probabilistic states as possible. The key tool for our analysis is the following dichotomy (a proof is in Appendix <ref>). Let be a (maximal) solution to the constraint system (I) and , be a (maximal) solution to the constraint system (II). Then, for each counter c we have that either (c)>0 or ∑_t ∈ T(t)_t(c)>0, and for each transition t = (p,,q)∈ T we have that * if p∈ Q_n then either (q)-(p)+∑_i=1^d(i)(i)<0 or (t)>0; * if p∈ Q_p then either ∑_t'=(p,',q') ∈(p)P(t')((q')-(p)+∑_i=1^d'(i)(i))< 0 or (t)>0. For the rest of this section, we fix a maximal solution of (I) and a maximal solution , of (II), such that the smallest non-zero element of , is at least 1. We define a ranking function 𝑟𝑎𝑛𝑘: ()→ as 𝑟𝑎𝑛𝑘(s)=(s)+∑_i=1^d(i)(i). Now we prove the following theorem: For each counter c, if (c)>0 then n is a tight estimate of [c] (for the only type of ). Otherwise, i.e., when (c)=0, the function n^2 is a lower estimate of [c]. Note that Theorem <ref> implies Theorem <ref> for strongly connected VASS MDPs. A proof is obtained by combining the following lemmata. For every counter c such that (c)>0, every ε >0, every p∈ Q, and every σ∈Σ, there exists n_0 such that for all n≥ n_0 we have that ^σ_p ([c] ≥ n^1+ε )≤ kn^-ε where k is a constant depending only on . A proof is in Appendix <ref>. For 𝑇𝑎𝑟𝑔𝑒𝑡𝑠⊆() and m ∈, we use ^≤ m(𝑇𝑎𝑟𝑔𝑒𝑡𝑠) to denote the set of all computations π = p_0_0, p_1_1,… such that p_i_i ∈𝑇𝑎𝑟𝑔𝑒𝑡𝑠 for some i ≤ m. For each counter c such that (c)=0 we have that _[c] ∈Ω(n^2) and n^2 is a lower estimate of [c]. 
Furthermore, for every ε>0 there exist a sequence of strategies σ_1,σ_2,…, a constant k, and p∈ Q such that for every 0<ε'<ε, we have that lim_n →∞_p^σ_n(^≤ kn^2-ε'(𝑇𝑎𝑟𝑔𝑒𝑡𝑠_n)) = 1 where 𝑇𝑎𝑟𝑔𝑒𝑡𝑠_n = {q∈() |(c)≥ n^2-ε}. A proof is in Appendix <ref>. It remains to prove Theorem <ref> for VASS MDPs with DAG-like MEC decomposition. Here, we proceed by analyzing the individual MECs one by one, transferring the output of the previous MEC to the next one. We start in a top MEC with all counters initialized to n. Here we can directly apply Theorem <ref> to determine which of the [c] have a tight estimate n and a lower estimate n^2, respectively. It follows from Lemma <ref> that all counters c such that n^2 is a lower estimate of [c] can be simultaneously pumped to n^2-ε with very high probability. However, this computation may decrease the counters c such that n is a tight estimate for [c]. To ensure that the value of these counters is still Ω(n) when entering the next MEC, we first divide the initial counter vector into two halves, each of size ⌊/2⌋, and then pump the counters c such that n^2 is a lower estimate for [c] to the value (⌊n/2⌋)^2-ε. We show that the length of this computation is at most quadratic. The value of the other counters stays at least ⌊n/2⌋. When analyzing the next MEC, we treat the counters previously pumped to quadratic values as “infinite” because they are sufficiently large so that they cannot prevent pumping additional counters to asymptotically quadratic values. Technically, this is implemented by modifying every counter update vector so that [c] = 0 for every “quadratic” counter c. A precise formulation of these observations and the corresponding proofs are given in Appendix <ref>. We conjecture that the dichotomy of Theorem <ref> holds for all VASS MDPs, but we do not have a complete proof. If the MEC decomposition is not DAG-like, a careful analysis of computations revisiting the same MECs is required; such repeated visits may but do not have to enable additional asymptotic growth of [c]. § ONE-DIMENSIONAL VASS MDPS In this section, we give a full and effective classification of tight estimates of , [c], and [t] for one-dimensional VASS MDPs. More precisely, we prove the following theorem: Let be a one-dimensional VASS MDP. We have the following: * Let c be the only counter of . Then one of the following possibilities holds: * There exists a type β=M such that [c] is unbounded for β. * n is a tight estimate of [c] for every type. * Let t be a transition of . Then one of the following possibilities holds: * There exists a type β=M such that [t] is unbounded for β. * There exists a type β such that (β)>0 and [t] is unbounded for β. * There exists a type β=M such that n^2 is a tight estimate of [t] for β. * The transition t occurs in some MEC M, n is a tight estimate of [t] for every type β containing the MEC M, and 0 is a tight estimate of [t] for every type β not containing the MEC M. * The transition t does not occur in any MEC, and for every type β of length k we have that k is an upper estimate of [t] for β. * One of the following possibilities holds: * There exists a type β=M such that is unbounded for β. * There exists a type β=M such that n^2 is a tight estimate of for β. * n is a tight estimate of for every type. It is decidable in polynomial time which of the above cases hold. Note that some cases are mutually exclusive and some may hold simultaneously. Also recall that (β)=1 for every type β of length one, and (β) decays exponentially in the length of β. 
Hence, if a transition t does not occur in any MEC, there is a constant κ< 1 depending only on such that _p^σ[[t] ≥ i] ≤κ^i for every σ∈Σ and p∈(). For the rest of this section, we fix a one-dimensional VASS MDP = Q, (Q_n,Q_p),T,P and some linear ordering ⊑ on Q. A proof of Theorem <ref> is obtained by analyzing bottom strongly connected components (BSCCs) in a Markov chain obtained from by “applying” some MD strategy σ (we use Σ_ to denote the class of all MD strategies for ). Recall that σ selects the same outgoing transition in every p ∈ Q_n whenever p is revisited, and hence we can “apply” σ to by removing the other outgoing transitions. The resulting Markov chain is denoted by _σ. Note that every BSCC of _σ can also be seen as an end component of . For a MEC M of , we write ⊆ M if all states and transitions of are included in M. For every BSCC of _σ, let p_ be the least state of with respect to ⊑. Let _𝔹 be a function assigning to every infinite path π = p_0,_1,p_1,_2,… the sum ∑_i=1^ℓ_i if p_0 = p_ and ℓ≥ 1 is the least index such that p_ℓ = p_, otherwise _𝔹(π) = 0. Hence, _𝔹(π) is the change of the (only) counter c along π until p_ is revisited. Let be a BSCC of _σ. We say that is * increasing if ^σ_p_𝔹(_B)>0, * decreasing if ^σ_p_𝔹(_B)<0, * bounded-zero if ^σ_p_𝔹(_B)=0 and _p_𝔹^σ[_=0] =1, * unbounded-zero if ^σ_p_𝔹(_B)=0 and _p_𝔹^σ[_=0] <1. Note that the above definition does not depend on the concrete choice of ⊑. We prove the following results relating the existence of upper/lower estimates of , [c], and [t] to the existence of BSCCs with certain properties. More concretely, * for [c], we show that * [c] is unbounded for some type β=M if there exists an increasing BSCC of _σ for some σ∈Σ_ such that ⊆ M (Lemma <ref>); * otherwise, n is a tight estimate of [c] for every type (Lemma <ref>) * for , we show that * is unbounded for some type β=M if there exists an increasing or bounded-zero BSCC of _σ for some σ∈Σ_ such that ⊆ M (Lemma <ref>, Lemma <ref>); * otherwise, n^2 is an upper estimate of for every type β (Lemma <ref>); * if there exists an unbounded-zero BSCC of _σ for some σ∈Σ_, then n^2 is a lower estimate of for β=M where ⊆ M (Lemma <ref>); * if every BSCC of every _σ is decreasing, then _(n)∈Θ(n) (this follows from <cit.>), and hence n is a tight estimate of for every type (Observation <ref>); * for [t], we distinguish two cases: * If t is not contained in any MEC of , then for every type β of length k, the transition t cannot be executed more than k times along a arbitrary computation π where (π) = β. * If t is contained in a MEC M of , then * [t] is unbounded for β=M if there exist an increasing BSCC of _σ for some σ∈Σ_ such that ⊆ M (Lemma <ref>), or bounded-zero BSCC of _σ for some σ∈Σ_ such that contains t (Lemma <ref>); * [t] is unbounded for every β=M_1,…, M_k such that M = M_i for some i and there exists an increasing BSCC of _σ for some σ∈Σ_ such that ⊆ M_j for some j ≤ i (Lemma <ref>); * otherwise, n^2 is an upper estimate of [t] for every type (Lemma <ref>); * if there is an unbounded-zero BSCC of _σ for some σ∈Σ_ such that contains t, then n^2 is a lower estimate of [t] for β=M (Lemma <ref>); * if every BSCC of every _σ is decreasing, then [t]_(n)∈Θ(n) (this follows from <cit.>), and hence n is an upper estimate of [t] for every type (Observation <ref>). The polynomial time bound of Theorem <ref> is then obtained by realizing the following: First, we need to decide the existence of an increasing BSCC of _σ for some σ∈Σ_. 
This can be done in polynomial time using the constraint system (I) of Fig. <ref> (Lemma <ref>). If no such increasing BSCC exists, we need to decide the existence of a bounded-zero BSCC, which can be achieved in polynomial time for a subclass of one-dimensional VASS MDPs where no increasing BSCC exists (Lemma <ref>). Then, if no bounded-zero BSCC exists, we need to decide the existence of an unbounded-zero BSCC, which can again be done in polynomial time using the constraint system (I) of Fig. <ref> (realize that any solution of (I) implies the existence of a BSCC that is either increasing, bounded-zero, or unbounded-zero). Hence, the “algorithmic part” of Theorem <ref> is an easy consequence of the above observations, but there is one remarkable subtlety. Note that we need to decide the existence of a bounded-zero BSCC only for a subclass of one-dimensional VASS MDPs where no increasing BSCCs exist. This is actually crucial, because deciding the existence of a bounded-zero BSCC in general one-dimensional VASS MDPs is NP-complete (Lemma <ref>). The main difficulties requiring novel insights are related to proving the observation about [c], stating that if there is no increasing BSCC of _σ for any σ∈Σ_, then n is an upper estimate of [c] for every type. A comparably difficult (and in fact closely related) task is to show that if there is no increasing or bounded-zero BSCC, then n^2 is an upper estimate of for every type. Note that here we need to analyze the behaviour of under all strategies (not just MD), and consider the notoriously difficult case when the long-run average change of the counter caused by applying the strategy is zero. Here we need to devise a suitable decomposition technique allowing for interpreting general strategies as “interleavings” of MD strategies and lifting the properties of MD strategies to general strategies. Furthermore, we need to devise techniques for reducing the problems of our interest to analyzing certain types of random walks that have already been studied in stochastic process theory. We discuss this more in the following subsection, and we refer to Appendix <ref> for a complete exposition of these results. §.§ MD decomposition As we already noted, one crucial observation behind Theorem <ref> is that if there is no increasing BSCC of _σ for any σ∈Σ_, then n is an upper estimate of [c] for every type. In this section, we sketch the main steps towards this result. First, we show that every path in can be decomposed into “interleavings” of paths generated by MD strategies. Let α=p_0,_1,…,p_k be a path. For every i ≤ k, we use α_..i=p_0,_1, …,p_i to denote the prefix of α of length i. We say that α is compatible with an MD strategy σ if σ(α_..i) = (p_i,_i+1,p_i+1) for all i<k such that p_i∈ Q_n. Furthermore, for every path β=q_0,_1,q_1,…,q_ℓ such that p_k=q_0, we define a path α∘β= p_0,_1,p_1,…,p_k,_1,q_1,…,q_ℓ. Let be a VASS MDP, π_1,…, π_k ∈Σ_, and p_1,…,p_k ∈ Q. An MD-decomposition of a path α = s_1,…,s_m under π_1,…, π_k and p_1,…,p_k is a decomposition of α into finitely many paths α = γ_1^1 ∘⋯∘γ_1^k ∘ γ_2^1∘⋯∘γ_2^k ∘ ⋯ ∘ γ_ℓ^1∘⋯∘γ_ℓ^k satisfying the following conditions: * for all i < ℓ and j≤ k, the last state of γ_i^j is the same as the first state of γ_i+1^j; * for every j≤ k, γ_1^j ∘⋯∘γ_ℓ^j is a path that begins with p_j and is compatible with π_j. Note that π_1,…, π_k and p_1,…,p_k are not necessarily pairwise different, and the length of γ_i^j can be zero. Also note that the same α may have several MD-decompositions. 
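The definition above can be made concrete by a small validity check. The following sketch uses our own simplified encoding (paths as lists of (source, target) pairs, an MD strategy as a map from non-deterministic states to their chosen successors) and elides the bookkeeping for zero-length segments; it is an illustration of the two conditions, not the paper's construction.

def is_compatible(path, start, strategy, nondet_states):
    # path: list of (src, dst) transitions; strategy: dict state -> chosen successor
    state = start
    for (src, dst) in path:
        if src != state:                      # consecutive transitions must chain
            return False
        if src in nondet_states and strategy.get(src) != dst:
            return False                      # MD strategy must select this transition
        state = dst
    return True

def is_md_decomposition(segments, starts, strategies, nondet_states):
    """segments[i][j] is the segment gamma_i^j (possibly empty) of the i-th block.
    Checks that, for every j, the glued segments form a path starting in p_j that is
    compatible with pi_j; chaining of consecutive non-empty segments is checked as well."""
    k = len(starts)
    for j in range(k):
        glued = [t for block in segments for t in block[j]]
        if not is_compatible(glued, starts[j], strategies[j], nondet_states):
            return False
    return True
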
Intuitively, an MD decomposition of α shows how to obtain α by repeatedly selecting zero or more transitions by π_1,…,π_k. The next lemma shows that for every VASS MDP , one can fix MD strategies π_1,…,π_k and states p_1,…,p_k such that every path α in has an MD-decomposition under π_1,…,π_k and p_1,…,p_k. Furthermore, such a decomposition is constructible online as α is read from left to right. For every VASS MDP , there exist π_1,…,π_k ∈Σ_, p_1,…,p_k ∈ Q, and a function _ such that the following conditions are satisfied for every finite path α: * _(α) returns an MD-decomposition of α under π_1,…,π_k and p_1,…,p_k. * _(α)=_(α_..(α)-1) ∘ γ^1∘⋯∘γ^k, where exactly one of γ^i has positive length (the i is called the mode of α). * If the last state of α_..(α)-1 is probabilistic, then the mode of α does not depend on the last transition of α. A proof of Lemma <ref> is in Appendix <ref>. According to Lemma <ref>, every strategy σ for just performs a certain “interleaving” of the MD strategies π_1,…,π_k initiated in the states p_1,…,p_k. We aim to show that if every BSCC of every _π_j is non-increasing, then n is an upper estimate of [c] for every type. Since we do not have any control over the length of the individual γ_i^j occurring in MD-decompositions, we need to introduce another concept of extended VASS MDPs where the strategies π_1,…,π_k can be interleaved in “longer chunks”. Intuitively, an extended VASS MDP is obtained from by taking k copies of sharing the same counter. The j-th copy selects transitions according to π_j. At each round, only one π_j makes a move, where the j is selected by a special type of “pointing” strategy defined especially for extended MDPs. Note that σ can be faithfully simulated in the extended VASS MDP by a pointing strategy that selects the indexes consistently with _. However, we can also construct another pointing strategy that simulates each π_j longer (i.e., “precomputes” the steps executed by π_j in the future) and thus “close cycles” in the BSCC visited by π_j. This computation can be seen as an interleaving of a finite number of independent random walks with non-positive expectations. Then, we use the optional stopping theorem to get an upper bound on the total expected number of “cycles”, which can then be used to obtain the desired upper estimate. We refer to Appendix <ref> for details. §.§ A Note about Energy Games One-dimensional VASS MDPs are closely related to energy games/MDPs <cit.>. An important open problem for energy games is the complexity of deciding the existence of a safe configuration where, for a sufficiently high energy amount, the responsible player can avoid decreasing the energy resource (counter) below zero. This problem is known to be in ∩, and a pseudopolynomial algorithm for the problem exists; however, it is still open whether the problem is in when the counter updates are encoded in binary. Our analysis shows that this problem is solvable in polynomial time for energy (i.e., one-dimensional VASS) MDPs such that there is no increasing SCC of _σ for any σ∈Σ_. We say that a SCC of _σ is non-decreasing if does not contain any negative cycles. Note that every bounded-zero SCC is non-decreasing, and a increasing SCC may but does not have to be non-decreasing. An energy MDP has a safe configuration iff there exists a non-decreasing SCC of _σ for some σ∈Σ_. The “⇐” direction of Lemma <ref> is immediate, and the other direction can be proven using our MD decomposition technique, see Appendix <ref>. 
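Since a non-decreasing SCC is, by definition, one without negative cycles, checking this property for a fixed MD strategy reduces to standard negative-cycle detection on the induced weighted graph. The sketch below is a hedged illustration under our own edge-list encoding (weights are the counter updates), not the algorithm of the paper.

def has_negative_cycle(states, edges):
    """edges: list of (src, weight, dst) inside one SCC of A_sigma."""
    dist = {s: 0 for s in states}             # all-zero start acts as a super-source,
    for _ in range(len(states) - 1):          # so every negative cycle is detectable
        for (u, w, v) in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return any(dist[u] + w < dist[v] for (u, w, v) in edges)

def is_non_decreasing(states, edges):
    return not has_negative_cycle(states, edges)
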
Note that if there is no increasing SCC of _σ for any σ∈Σ_, then the existence of a non-decreasing SCC is equivalent to the existence of a bounded-zero SCC, and hence it can be decided in polynomial time (see the results presented above). However, for general energy MDPs, the best upper complexity bound for the existence of a non-decreasing SCC is ∩. Interestingly, a small modification of this problem already leads to -completeness, as demonstrated by the following lemma. The problem whether there exists a non-decreasing SCC of _σ for some σ∈Σ_ such that contains a given state p ∈ Q is -complete. A proof of Lemma <ref> is in Appendix <ref>. § CONCLUSIONS We introduced new estimates for measuring the asymptotic complexity of probabilistic programs and their VASS abstractions. We demonstrated the advantages of these measures over the asymptotic analysis of expected values, and we have also shown that tight complexity estimates can be computed efficiently for certain subclasses of VASS MDPs. A natural continuation of our work is extending the results achieved for one-dimensional VASS MDPs to the multi-dimensional case. In particular, an interesting open question is whether the polynomial asymptotic analysis for non-deterministic VASS presented in <cit.> can be generalized to VASS MDPs. Since the study of multi-dimensional VASS MDPs is notoriously difficult, a good starting point would be a complete understanding of VASS MDPs with two counters. § PROOFS FOR SECTION <REF> §.§ Proof of Lemma <ref> [<ref>] Let be a (maximal) solution to the constraint system (I) and , be a (maximal) solution to the constraint system (II). Then, for each counter c we have that either (c)>0 or ∑_t ∈ T(t)_t(c)>0, and for each transition t = (p,,q)∈ T we have that * if p∈ Q_n then either (q)-(p)+∑_i=1^d(i)(i)<0 or (t)>0; * if p∈ Q_p then either ∑_t'=(p,',q') ∈(p)P(t')((q')-(p)+∑_i=1^d'(i)(i))< 0 or (t)>0. Proof: Let A=T_n∪ Q_p, where T_n=⋃_p∈ Q_n(p). For each a∈ A, let _a be a probability distribution on Q such that * _(p,,q)(q)=1 for a=(p,,q)∈ T_n, * _p(q)=∑_(p,,q)∈(p)∩(q) P((p,,q)) for a=p∈ Q_p, * and _a(p)=0 else, let _a be a probability distribution on Q such that * _(p,,q)(p)=1 for a=(p,,q)∈ T_n, * _p(p)=1 for a=p∈ Q_p, * and _a(p)=0 else, and let _a be defined as * _(p,,q)= for a=(p,,q)∈ T_n, * and _p=∑_(p,,q)∈(p) P((p,,q)) for a=p∈ Q_p. Then we can rewrite the constraint systems as [c]0.35 Constraint system (I'): Find ' ∈ℤ^A such that ∑_a ∈ A'(a) _a ≥0⃗ ' ≥0⃗ and for each p∈ Q ∑_a∈ A('(a)_a(p)-'(a)_a(p)) =0 [c]0.6 Constraint system (II'): Find ∈ℤ^d,∈ℤ^Q such that ≥0⃗ ≥0⃗ and for each a∈ A ∑_p∈ Q ((p)_a(p) - (p)(_a(p))) + ∑_i=1^d _a(i)(i)≤ 0 We recognize systems (I) and (I') as equivalent, and systems (II) and (II') as equivalent as per the following lemma. If ',, is a solution to the rewritten constraint systems (I') and (II'), then ,, is a solution to the original constraint systems (I) and (II), where (t)='(t) for t∈ T_n, and ((p,,q))=P((p,,q))'(p) for (p,,q)∈ T∖ T_n. Similarly, if ,, is a solution to the original constraint systems (I) and (II), then ',, is a solution to the rewritten constraint systems (I') and (II'), where '(t)=(t) for t∈ T_n, and '(p)=∑_t∈(p)(t) for p∈ Q_p. The first half (I): Let ' be a solution of (I'), we will show that is a solution to (I), where (t)='(t) for t∈ T_n, and ((p,,q))=P((p,,q))'(p) for (p,,q)∈ T∖ T_n. 
It holds from (I') that ∑_a ∈ A'(a) _a = ∑_t ∈ T_n'(t) _t + ∑_p ∈ Q_p'(p) _p = = ∑_t ∈ T_n'(t) _t + ∑_p ∈ Q_p'(p) (∑_(p,,q)∈(p) P((p,,q))) = ∑_t ∈ T_n(t) _t + ∑_p ∈ Q_p∑_(p,,q)∈(p)'(p) P((p,,q)) = ∑_t ∈ T_n(t) _t + ∑_p ∈ Q_p∑_(p,,q)∈(p)((p,,q)) = ∑_t ∈ T(t) _t ≥0⃗ ≥ 0 holds from both '≥ 0 and P(t)≥ 0 for each t∈ T∖ T_n. For each p∈ Q it holds from (I') ∑_a∈ A('(a)_a(p)-'(a)_a(p)) = = ∑_a∈ T_n('(a)_a(p)-'(a)_a(p)) + ∑_a∈ Q_r('(a)_a(p)-'(a)_a(p)) = ∑_t∈(p)∩ T_n'(t) - ∑_t∈(p)∩ T_n'(t) + ∑_a∈ Q_r'(a)(∑_t∈(a)∩(p) P(t))- '(p) = ∑_t∈(p)∩ T_n(t) - ∑_t∈(p)∩ T_n(t) + ∑_a∈ Q_r∑_t∈(a)∩(p)'(a)P(t)- ∑_t∈(p) P(t)'(p) = ∑_t∈(p)∩ T_n(t) - ∑_t∈(p)∩ T_n(t) + ∑_a∈ Q_r∑_t∈(a)∩(p)(t)- ∑_t∈(p) (t) = ∑_t∈(p)(t) - ∑_t∈(p)(t) =0 And for each p∈ Q_p, t∈(p) it holds ∑_t'∈(p)(t')=∑_t'∈(p)P(t')'(p)='(p), therefore it holds (t)=P(t)'(p)= P(t)∑_t'∈(p)x(t'). Thus is a solution to (I). The first half (II): Let , be a solution of (II'), we will show it is also a solution of (II). For each a=(p,,q)∈ T_n it holds from (II') ∑_p'∈ Q ((p')(_a(p') - (p')(_a(p'))) + ∑_i=1^d _a(i)(i) = (q) - (p) + ∑_i=1^d (i)(i) ≤ 0 And for each a=p∈ Q_p it holds from (II') ∑_q∈ Q ((q)_a(q) - (q)_a(q))) + ∑_i=1^d _a(i)(i) = = ∑_q∈ Q ((q)(∑_t∈(p)∩(q) P(t))) - (p) + ∑_i=1^d _a(i)(i) = ∑_t∈(p)(q)P(t) - ∑_t∈(p)(p)P(t) + ∑_i=1^d _a(i)(i) = ∑_t∈(p) P(t)((q) - (p)) + ∑_i=1^d ∑_t∈(p) P(t)_t(i)(i) = ∑_t∈(p) P(t)((q) - (p)) + ∑_t∈(p)∑_i=1^d P(t)_t(i)(i) = ∑_t∈(p) P(t)( (q) - (p) + ∑_i=1^d _t(i)(i) ) ≤ 0 Therefore , is a solution of (II). The second half (I'): Let be a solution of (I), we will show that ' is a solution of (I'), where '(t)=(t) for t∈ T_n, and '(p)=∑_t∈(p)(t) for p∈ Q_p. From (I) it holds for T_p=⋃_p∈ Q_p(p) ∑_t ∈ T(t) _t = = ∑_t ∈ T_n(t) _t + ∑_t ∈ T_p(t) _t = ∑_t ∈ T_n'(t) _t + ∑_(p,,q) ∈ T_p P((p,,q))·(∑_t'∈(p)(t')) = ∑_t ∈ T_n'(t) _t + ∑_(p,,q) ∈ T_p P((p,,q)) '(p) = ∑_t ∈ T_n'(t) _t + ∑_p∈ Q_p∑_(p,,q) ∈(p) P((p,,q)) '(p) = ∑_t ∈ T_n'(t)·_t + ∑_p∈ Q_p'(p)·_p = ∑_a ∈ A'(a)·_a ≥0⃗ We get '≥ 0 trivially from (I). It also holds ∑_a∈ A('(a)_a(p)-'(a)_a(p)) = = ∑_t∈ T_n('(t)_t(p)-'(t)_t(p))+ ∑_q∈ Q_p('(q)_q(p)-'(q)_q(p)) = ∑_t∈(p)∩ T_n'(t) -∑_t∈(p)∩ T_n'(t) +∑_q∈ Q_p'(q)(∑_t∈(q)∩(p) P(t)) -∑_q∈ Q_p∩{p }'(q) = ∑_t∈(p)∩ T_n'(t) -∑_t∈(p)∩ T_n'(t) +∑_q∈ Q_p∑_t∈(q)∩(p) P(t)'(q) -∑_q∈ Q_p∩{p }'(q) = ∑_t∈(p)∩ T_n(t) -∑_t∈(p)∩ T_n(t) +∑_q∈ Q_p∑_t∈(q)∩(p) P(t)· (∑_t'∈(q)(t')) - -∑_q∈ Q_p∩{p }∑_t∈(q)(t) = ∑_t∈(p)∩ T_n(t) -∑_t∈(p)∩ T_n(t) +∑_q∈ Q_p∑_t∈(q)∩(p)(t) -∑_q∈ Q_p∩{p }∑_t∈(q)(t) = ∑_t∈(p)∩ T_n(t) -∑_t∈(p)∩ T_n(t) +∑_t∈ T_p∩(p)(t) -∑_q∈ Q_p∩{p }∑_t∈(q)(t) If p∈ Q_n, then this becomes ∑_t∈(p)∩ T_n(t) -∑_t∈(p)∩ T_n(t) +∑_t∈ T_p∩(p)(t) -∑_q∈ Q_p∩{p }∑_t∈(q)(t) = = ∑_t∈(p)∩ T_n(t) -∑_t∈(p)∩ T_n(t) +∑_t∈ T_p∩(p)(t) -0 = = ∑_t∈(p)(t) -∑_t∈(p)∩ T_n(t) = ∑_t∈(p)(t) -∑_t∈(p)(t) = 0 with the last line being from (I). And if p∈ Q_p, then it becomes ∑_t∈(p)∩ T_n(t) -0 +∑_t∈ T_p∩(p)(t) -∑_q∈ Q_p∩{p }∑_t∈(q)(t) = = ∑_t∈(p)∩ T_n(t) +∑_t∈ T_p∩(p)(t) -∑_t∈(p)(t) = = ∑_t∈(p)(t) -∑_t∈(p)(t) = 0 with the last line being from (I). Therefore ' is a solution of (I') The second half (II'): Let , be a solution of (II) we will show that , is also a solution of (II'). From (II) we have for each t=(p,,q)∈ T_n that (q)-(p)+∑_i=1^d(i)(i) = = (q)·_t(q)-(p)·_t(p)+∑_i=1^d_t(i)(i) = = ∑_r∈ Q (r)·_t(r)-(r)·_t(r))+∑_i=1^d_t(i)(i) ≤ 0 Where we used that _t(r)=0 for every r≠ q, and _t(r)=0 for every r≠ p. 
Additionally, From (II) we also have for each p∈ Q_p that ∑_t= (p,,q) ∈(p)P(t)((q)-(p)+∑_i=1^d_t(i)(i)) = = -(p)+∑_t= (p,,q) ∈(p)P(t)((q)+∑_i=1^d_t(i)(i)) = = -(p)+∑_q∈ Q∑_t∈(p)∩(q)P(t)((q)+∑_i=1^d_t(i)(i)) = = -(p)+∑_q∈ Q∑_t ∈(p)∩(q)P(t)(q)+ ∑_t∈(p)P(t) ( ∑_i=1^d_t(i)(i)) = = -(p)+∑_q∈ Q(q) ∑_t∈(p)∩(q)P(t) + ∑_i=1^d(i)·(∑_t ∈(p)P(t) _t(i)) = = -(p)·_p(p)+∑_q∈ Q(q)·_p(q) + ∑_i=1^d(i)·_p (i) = = ∑_q∈ Q((q)·_p(q)-(q)·_p(q)) + ∑_i=1^d(i)·_p (i) ≤ 0 Therefore , is also a solution to (II') We will now rewrite the constraint systems (I') and (II') into matrix form. Let D be a A×{1,…, d } matrix whose columns are indexed by elements of A, and rows indexed by counters c∈{1,…,d}, such that the column D(a)=_a. And let F be a A× Q matrix, whose columns are indexed by elements of A, and rows are indexed by states p∈ Q, such that the column F(a) is equal to the vector such that (p)=_a(p)-_a(p) for each p∈ Q. Then we can further rewrite the systems (I') and (II') as follows: [c]0.45 constraint system (I'): Find ' ∈ℤ^A such that D ' ≥0⃗ ' ≥0⃗ F ' = 0⃗ [c]0.45 constraint system (II'): Find ∈ℤ^d,∈ℤ^Q with ≥0⃗ ≥0⃗ F^T + D^T ≤0⃗ The rest then follows exactly the same as the the proof of the dichotomy on non-stochastic VASS in <cit.> (Lemma 4), as the only difference between our systems and the ones used in <cit.> is that the matrix F now also may contain rational numbers other than -1,0,1. The proof in <cit.> is already made over ℤ, and the only additional requirement it needs is that each column of F sums up to 0, which is satisfied also by our F. §.§ The proof from <cit.> (Lemma 4) For the sake of completeness we include a copy of the proof from <cit.> (Lemma 4). All credit for the proof in this subsection goes to the author of <cit.>. The only changes we made was to rename some variables. The proof will be obtained by two applications of Farkas' Lemma. We will employ the following version of Farkas' Lemma, which states that for matrices A,C and vectors b,d, exactly one of the following statements is true: [c]0.4 there exists x with [ Ax ≥ b; Cx = d ] [c]0.5 there exist y,z with [ y ≥ 0; A^T y + C^T z = 0; b^T y + d^T z > 0 ] We now consider the constraint systems (_) and (_) stated below. Both constraint systems are parameterized by a ∈ A (we note that only Equations (<ref>) and (<ref>) are parameterized by a). [c]0.4 constraint system (_): there exists ∈ℤ^A with U ≥ 0 ≥ 0 = 0 () ≥ 1 [c]0.52 constraint system (_): there exist ∈ℤ^,∈ℤ^() with ≥ 0 ≥ 0 ^T + ^T ≤ 0 with < 0 in line We recognize constraint system (_) as the dual of constraint system (_) in the following Lemma: Exactly one of the constraint systems (_) and (_) has a solution. We fix some a∈ A. We denote by _a ∈ℤ^A the vector with _a(a') = 1, if a' = a, and _a(a') = 0, otherwise. Using this notation we rewrite (_) to the equivalent constraint system (_'): [r]0.3 constraint system (_'): [r]0.3 [ ; ] ≥ [ 0; _ ] = 0 Using Farkas' Lemma, we see that either (_') is satisfiable or the following constraint system (_') is satisfiable: [r]0.47 constraint system (_'): [ ; k ] ≥ 0 [ ; ]^T [ ; k ] + ^T = 0 [ 0; _ ]^T [ ; k ] + 0^T > 0 [c]0.45 constraint system (_') simplified: ≥ 0 k ≥ 0 ^T + k + ^T = 0 k() > 0 We observe that solutions of constraint system (_') are invariant under shifts of , i.e, if , k, is a solution, then , k, + c · is also a solution for all c ∈ℤ (because elements of every row of ^T sum up to 0). Hence, we can force to be non-negative. We recognize that constraint systems (_') and (_) are equivalent. 
We now consider the constraint systems (_) and (_) stated below. Both constraint systems are parameterized by a counter (we note that only Equations (<ref>) and (<ref>) are parameterized by ). [c]0.42 constraint system (_): there exists ∈ℤ^A with ≥ 0 with ≥ 1 in line ≥ 0 = 0 [c]0.5 constraint system (_): there exist ∈ℤ^,∈ℤ^() with ≥ 0 ≥ 0 ^T + ^T ≤ 0 () > 0 We recognize constraint system (_) as the dual of constraint system (_) in the following Lemma: Exactly one of the constraint systems (_) and (_) has a solution. We fix some counter . We denote by _∈ℤ^ the vector with _(') = 1, if ' =, and _(') = 0, otherwise. Using this notation we rewrite (_) to the equivalent constraint system (_'): [r]0.3 constraint system (_'): [r]0.3 [ ; ] ≥ [ _; 0 ] = 0 Using Farkas' Lemma, we see that either (_') is satisfiable or the following constraint system (_') is satisfiable: [r]0.47 constraint system (_'): [ ; k ] ≥ 0 [ ; ]^T [ ; k ] + ^T = 0 [ _; 0 ]^T [ ; k ] + 0^T > 0 [c]0.45 constraint system (_') simplified: ≥ 0 k ≥ 0 ^T + k + ^T = 0 () > 0 We observe that solutions of constraint system (_') are invariant under shifts of , i.e, if , k, is a solution, then , k, + c · is also a solution for all c ∈ℤ (because elements of every row of ^T sum up to 0). Hence, we can force to be non-negative. We recognize that constraint systems (_') and (_) are equivalent. §.§ Proof of Lemma <ref> [<ref>] For every counter c such that (c)>0, every ε >0, every p∈ Q, and every σ∈Σ, there exists n_0 such that for all n≥ n_0 we have that ^σ_p ([c] ≥ n^1+ε )≤ kn^-ε where k is a constant depending only on . Let P_1V_1,P_2V_2,… be the random variables encoding the computation under σ from p (i.e. P_iV_i represents the configuration at i-th step of the computation). And let R_1,R_2,… represent the value of rank at i-th step (i.e. R_i=rank(P_iV_i)). Then R_1,R_2,… is a supermartingale. R_1,R_2,… is a supermartingale. One can express R_i+1=R_i+X_i+1, where X_i+1=R_i+1-R_i is the change of rank in the (i+1)-st step. Then it holds ^σ_pn⃗(R_i+1|R_i)=^σ_pn⃗(R_i|R_i)+^σ_pn⃗(X_i+1|R_i)=R_i+^σ_pn⃗(X_i+1|R_i). We want to show that ^σ_pn⃗(X_i+1|R_i)≤ 0. Let T_i+1 be random variable representing the transition taken at (i+1)-st step. Then ^σ_pn⃗(X_i+1|R_i)= ∑_t∈ T_p^σ(T_i+1=t|R_i)·(t) where (t) represents the change of rank under transition t. Let T_n=⋃_p∈ Q_n(p) and T_p=⋃_p∈ Q_p(p) , then we can write ^σ_pn⃗(X_i+1|R_i)= ∑_t∈ T_p_p^σ(T_i+1=t|R_i)·(t)+∑_t∈ T_n_p^σ(T_i+1=t|R_i)·(t). Since for each t=(p,,q)∈ T it holds (t)=(q)-(p)+∑_i=1^d (i)(i), for each t∈ T_n it holds (t)≤ 0, and for each p∈ Q_n, it holds ∑_t∈(p) P(t)(t) ≤ 0. Therefore we can write ∑_t∈ T_p_p^σ(T_i+1=t|R_i)·(t) = ∑_p∈ Q_p (_p^σ(P_i=p) ∑_t∈(p) P(t)·(t)) ≤ 0 and ∑_t∈ T_n_p^σ(T_i+1=t|R_i)·(t)≤ 0 Thus E(X_i+1|R_i)≤ 0 Now let us consider the stopping rule τ that stops when either any counter reaches 0, or any counter c with (c)>0 becomes larger then n^1+ϵ for the first time. (i.e. either V_τ(c')<0 for any c'∈{1,…,d }, or V_τ(c)≥ n^1+ϵ for c with (c)>0). Then for all i, it holds that R_min(i,τ)≤ max_p∈ Q(p) + max_c∈{1,…,d}(c)· d · (n^1+ϵ+u), where u is the maximal increase of a counter in a single transition. 
Therefore we can apply optional stopping theorem to obtain: max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n ≥ E(R_1)≥ E(R_τ)≥ pX_n^1+ϵ+(1-p)X_0 where X_n^1+ϵ represents the minimal possible value of R_τ if any counter c with (c)>0 has R_τ(c)≥ n^1+ϵ, p is the probability of any such counter being at least n^1+ϵ upon stopping, and X_0 represents the minimal value of R_τ if no such counter reached n^1+ϵ. We can simplify this as max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n ≥ pX_n^1+ϵ+(1-p)X_0 max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n-(1-p)· X_0 ≥ pX_n^1+ϵ max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n-(1-p)max_c∈{1,…,d}(c) · d· u ≥ pX_n^1+ϵ max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n-max_c∈{1,…,d}(c) · d· u +p· max_c∈{1,…,d}(c) · d· u ≥ pX_n^1+ϵ max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n-max_c∈{1,…,d}(c) · d· u ≥ pX_n^1+ϵ-p · max_c∈{1,…,d}(c) · d· u max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n-max_c∈{1,…,d}(c) · d· u ≥ p(X_n^1+ϵ- max_c∈{1,…,d}(c) · d· u) max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n-max_c∈{1,…,d}(c) · d· u /X_n^1+ϵ- max_c∈{1,…,d}(c) · d· u≥ p max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n-max_c∈{1,…,d}(c) · d· u /n^1+ϵ- max_c∈{1,…,d}(c) · d· u≥ p As for all sufficiently large n it holds 0.5· n^1+ϵ≤ n^1+ϵ- max_c∈{1,…,d}(c) · d· u we have max_p∈ Q(p) + max_c∈{1,…,d}(c)· d· n-max_c∈{1,…,d}(c) · d· u /0.5· n^1+ϵ≥ p max_p∈ Q(p)-max_c∈{1,…,d}(c) · d· u/0.5· n^1+ϵ + max_c∈{1,…,d}(c)· d· n/0.5· n^1+ϵ≥ p max_p∈ Q(p)-max_c∈{1,…,d}(c) · d· u/0.5· n^1+ϵ + max_c∈{1,…,d}(c)· d/0.5· n^ϵ≥ p Also as n^1+ϵ≥ n^ϵ we have max_p∈ Q(p)-max_c∈{1,…,d}(c) · d· u/0.5· n^ϵ + max_c∈{1,…,d}(c)· d/0.5· n^ϵ≥ p max_p∈ Q 2·(p)-max_c∈{1,…,d} 2·(c) · d· u + max_c∈{1,…,d} 2·(c)· d/ n^ϵ≥ p As k=max_p∈ Q 2·(p)-max_c∈{1,…,d} 2·(c) · d· u + max_c∈{1,…,d} 2·(c)· d is a constant dependent only on the VASS MDP, it holds for each counter c with (c)>0 and for all sufficiently large n that ^σ_p ([c] ≥ n^1+ϵ )≤ p ≤ kn^-ϵ. §.§ Proof of Lemma <ref> [<ref>] For each counter c such that (c)=0 we have that _[c] ∈Ω(n^2) and n^2 is a lower estimate of [c]. Furthermore, for every ε>0 there exist a sequence of strategies σ_1,σ_2,…, a constant k, and p∈ Q such that for every 0<ε'<ε, we have that lim_n →∞_p^σ_n(^≤ kn^2-ε'(𝑇𝑎𝑟𝑔𝑒𝑡_n)) = 1 where 𝑇𝑎𝑟𝑔𝑒𝑡_n = {q∈() |(c)≥ n^2-ε}. Let _ be the VASS MDP induced by transitions t with (t)>0. In _, Each pair of states p,q∈ Q is either a part of the same MEC of _, or p is not reachable from q and vice-versa, in _. This follows directly from satisfying kirhoff laws. Therefore _ can be decomposed into multiple MECs, and there are no transitions in the MEC decomposition of _. Let these MECs be B_1,…,B_k, and let _1,…,_k be the restriction of to the transitions of B_1,…, B_k. (i.e. _i(t)=(t) if B_i contains t, and otherwise _i(t)=0). For each 1≤ i≤ k, let _i=_i/∑_t∈ T_i(t) be the normalized vector of _i, and let σ_i be a Markovian strategy for B_i such that σ_i(p)(t)=_i(t)/∑_t∈(p)_i(t) for t such that ∑_t∈(p)_i(t)>0, and undefined otherwise. We will use M_i to represent the Markov chain obtained by applying σ_i to B_i. Let m_i∈ℤ^Q be such that m_i(p)=∑_t∈(p)_i(t). Then m_i is an invariant distribution on M_i. Also the expected effect of a single computational step in M_i taken from distribution m_i is equal to ∑_(p,,q)∈ T_i((p,,q)). Let us consider a single computation step in M_i taken from distribution m_i, and let X be resulting distribution on transitions during this step. Then for each transition t=(p,,q)∈ T it holds: * if p∈ Q_p then X(t)=P(t)· m_i(p) =P(t)·∑_t∈(p)_i(t)=_i(t). * if p∈ Q_n then X(t)= σ_i(p)(t)· m_i(p) = _i(t)/∑_t∈(p)_i(t)·∑_t∈(p)_i(t) = _i(t). 
And as the next distribution m_i' on states can be expressed as m_i'(p)=∑_(q,,p)∈ T X((q,,p)) = ∑_(q,,p)∈ T_i((q,,p))=∑_t∈(p)_i(t)=m_i(p), m_i is an invariant distribution on M_i. Let _1,…, _k be the expected update vectors per single computational step generated by the invariants in M_1,…, M_k. Then from x being a solution to (I), we get that for a_i= ∑_t∈ T x_i(t) it holds ∑_i=1^k a_i·_i ≥0⃗ as well as (∑_i=1^k a_i·_i)(c) > 0 for c with (c)=0. Therefore we can use the results of <cit.>, which states that if there exists a sequence of Markov chains M_1,…,M_k with their respective increments _1,…,_k, and positive integer coefficients a_1,…,a_k such that ∑_i=1^k a_i_i≥0⃗, then there exists a function L(n)∈Θ(n), a state p∈ Q, and sequence of strategies σ_1,σ_2,… such that the probability X_n of the computation from p under σ_n never decreasing at each of the first L^2-ϵ'(n) steps any counter below b_1n_pn⃗^σ_n(C_i^n(c))-b_2n, where b_1,b_2 are some constants and C_i^n is the random variable representing the counter vector after i steps when computing form p under σ_n, satisfies lim_n→∞ X_n = 1. And furthermore, for each counter c with (∑_i=1^k a_i_i)(c) > 0 it holds that _pn⃗^σ_n(C_i^n(c))∈Ω(i). Therefore, with probability at least X_n we reach a configuration q with each counter c such that (c)=0 having (c)≥ n^2-ϵ within L^2-ϵ'(n)≤ kn^2-ϵ' steps, and it holds lim_n→∞ X_n=1. §.§ VASS MDP with DAG-like MEC Decomposition We formalize and prove the idea sketched at the end of Section <ref>. Let be a DAG-like VASS MDP with d counters and a DAG-like MEC decomposition, and β=M_1,…, M_k be it's type. Let _0,_1,…,_k∈{n,∞}^d, and let M_i^_i-1 be the MEC obtained by taking M_i and changing the effect of every transition to ' such that for each c∈{1,…,d }, '(c)=(c) if _i-1(c)=n, and '(c)=0 if _i-1(c)=∞. Furthermore, let the following hold for each counter c∈{1,…,d } and 1≤ i≤ k * _0(c)=n, * _i(c)=n if both _i-1(c)=n and n is a tight estimate of c in M_i^_i-1, * _i(c)=∞ if either _i-1(c)=∞ or n^2 is a lower estimate of c in M_i^_i-1. Then for each ϵ>0, there exists a sequence of strategies σ_1^1,σ_1^2,…,σ_1^k,σ_2^1,…,σ_2^k,σ_3^1,…, such that for each 1≤ i≤ k, and each n, the computation under σ_n^i initiated in some state of M_i with initial counter vector such that for each c∈{1,…,d } it holds * (c)≥⌊n/2^i⌋ if _i-1(c)=n, * (c)≥⌊(n/2^i)^2-ϵ_i-1/2^i⌋ if _i-1(c)=∞, reaches with probability X_n a configuration of M_i with counter vector such that for each c∈{1,…,d } it holds * (c)≥⌊n/2^i+1⌋ if _i(c)=n, * (c)≥⌊(n/2^i+1)^2-ϵ_i/2^i+1⌋ if _i(c)=∞, where ϵ_i=iϵ/k, and it holds lim_n→∞ X_n=1. Furthermore, for each counter c∈{1,…,d }, if _k(c)=n then n is a tight estimate of [c] for type β, and if _k(c)=∞ then n^2 is a lower estimate of [c] for type β. Proof by induction on k. Base case of k=1 holds from Lemma <ref>, and the second part holds from Lemma <ref>. Assume now the Lemma holds for the type M_1,…, M_i-1. Let σ_1,σ_2,… and p∈ Q be from the Lemma <ref> for M_i^_i-1 and for ϵ_i. Then from induction assumption, there are strategies such that when the computation reaches M_i the counters vector is with probability Y_i such that lim_n→∞ Y_n=1 and * (c)≥⌊n/2^i⌋ if _i-1(c)=n, * (c)≥⌊(n/2^i)^2-ϵ_i-1/2^i⌋ if _i-1(c)=∞. Now let us consider the following: upon reaching M_i, we divide the counters vector into two halves, each of size ⌊/2⌋, and then we perform the computation of σ_⌊n/2^i+1⌋ on the first half for ln^2-(i+0.5)ϵ/k steps. 
(i.e., if the effect on any counter c is less than -⌊/2⌋(c), then the computation stops). Then from Lemma <ref> we will, with probability X_n, reach a configuration with all counters c such that _i-1(c)=n and _i(c)=∞ being at least (c)≥ (⌊n/2^i+1⌋)^2-ϵ_i, such that lim_n→∞ X_n=1. As the length of this computation is only ln^2-(i+0.5)ϵ/k, we cannot decrease any "deleted" counter c with _i-1(c)=∞ by more than an^2-(i+0.5)ϵ/k for some constant a. Therefore for all sufficiently large n, the computation cannot terminate due to such a counter being depleted. And since the second half of is untouched, we still have for each counter c with _i-1(c)=∞ at least ⌊(n/2^i)^2-ϵ_i-1/2^i+1⌋ and for each counter c with _i-1(c)=n at least ⌊n/2^i+1⌋. Therefore with probability at least X_nY_n the computation ends in a configuration q of M_i such that for each counter c * (c)≥⌊n/2^i+1⌋ if _i(c)=n, * (c)≥⌊(n/2^i+1)^2-ϵ_i/2^i+1⌋ if _i(c)=∞ And it holds lim_n→∞ X_nY_n=1. Thus n^2 is a lower estimate of [c] for type M_1,…,M_i for each c with _i(c)=∞. For the second part of the lemma, let σ be some strategy. Then for every counter c with _i(c)=n, we have from the induction assumption that the probability that the strategy σ, started in the initial configuration pn⃗, reaches M_i along the type M_1,…,M_i with c being at least n^1+ϵ' for some 0<ϵ', is at most Z_n, where lim sup_n →∞ Z_n=0. Let σ' be a strategy, which for an initial state q such that q is a state of M_1, computes as σ after a path from p to q. Then from Lemma <ref> we have for each 0<ϵ̂ that ^σ'_q n^1+ϵ'(_M_i^_i-1[c] ≥ n^1+ϵ̂ ) = ^σ'_q n^1+ϵ'(_M_i^_i-1[c] ≥ (n^1+ϵ')^log_n^1+ϵ'n^1+ϵ̂ ) ≤ kn^1-log_n^1+ϵ'n^1+ϵ̂ Let y=1-log_n^1+ϵ'n^1+ϵ̂, note that for ϵ'<ϵ̂ it holds y<0. Also let R_q=^σ_pn⃗({α| the first state of M_i in α is q }). Then we can write for each ϵ'<ϵ̂<ϵ ^σ_p n⃗([c] ≥ n^1+ϵ|=M_1,…,M_i ) ≤^σ_p n⃗([c] ≥ n^1+ϵ'|=M_1,…,M_i-1 )+ ∑_q∈ Q R_q ^σ'_q n^1+ϵ'(_M_i^_i-1[c] ≥ n^1+ϵ̂ ) ≤ Z_n+kn^y And since lim_n→∞ Z_n+kn^y=0, and ϵ',ϵ̂ can be arbitrarily small, we can find values for them for arbitrary ϵ>0. Thus n is a tight estimate of [c] for type M_1,…,M_i. § PROOFS FOR SECTION <REF> Given a one-dimensional VASS MDP, deciding existence of an increasing BSCC of _σ for some σ∈Σ_ can be done in . If such a BSCC exists, then it gives us a solution with ∑_(p,,q)∈ T((p,,q))(c) > 0 for (I). The solution is such that if is the invariant distribution on states of 𝔹 under σ, then for each transition (p,,q) contained in 𝔹, ((p,,q))=(p) if p∈ Q_n is a non-deterministic state, and ((p,,q))=(p)P((p,,q)) if p∈ Q_p is a probabilistic state, while (t)=0 for each t that is not contained in 𝔹. And every solution of (I) such that ∑_(p,,q)∈ T((p,,q))(c) > 0 can be used to extract a strategy with expected positive effect on the counter (Appendix <ref>: Lemma <ref>). But this is only possible if there exists an increasing BSCC of _σ for some σ∈Σ_, as these are the extremal values of any strategy.[Here we rely on well-known results about finite-state MDPs <cit.>.] Given a one-dimensional VASS MDP, if there exists an increasing BSCC of _σ for some σ∈Σ_, then [c] and are unbounded for type M such that 𝔹⊆ M. Furthermore, let M[t] be the MEC containing the transition t. If M[t] exists and 𝔹⊆ M[t], then [t] is unbounded for type M[t]. Additionally, [t] is also unbounded for each type β=M_1,…,M_k such that there exist indices j≤ i such that M_i=M[t] and 𝔹⊆ M_j. 
The computation under σ from any state of 𝔹 has a tendency to increase the counter, and as n goes towards ∞ the probability of the computation terminating goes to 0.[For a formal proof see e.g. <cit.> (Lemma 6)] Therefore both [c] and are unbounded for type M with 𝔹⊆ M. Furthermore, if 𝔹⊆ M[t], then t can be iterated infinitely often with high probability by periodically “deviating” from σ by temporarily switching to some other strategy which never leaves M[t] and has a positive chance of using t. Clearly this can be done in such a way that the overall strategy still has the tendency to increase the counter. Therefore in such a case [t] is unbounded for type M[t]. The last part of the theorem comes from the fact that we can first pump the counter in M_j to an arbitrarily large value, before moving to M[t] where we can then iterate any strategy on M[t] that has a positive chance of using t. Given a one-dimensional VASS MDP, if there exists an unbounded-zero BSCC of _σ for some σ∈Σ_, then _∈Ω(n^2) and n^2 is a lower estimate of for type M such that 𝔹⊆ M. Furthermore, if 𝔹 contains the transition t, then also _[t]∈Ω(n^2) and n^2 is a lower estimate of [t] for type M such that 𝔹⊆ M. This follows directly from the results of <cit.> (Section 3.3). Given a one-dimensional VASS MDP, if there exists a bounded-zero BSCC of _σ for some σ∈Σ_, then is unbounded for type M such that 𝔹⊆ M. Furthermore, if 𝔹 contains t then also [t] is unbounded for type M such that 𝔹⊆ M. Since 𝔹 is bounded-zero, it must hold that there is no non-zero cycle in 𝔹. Therefore the effect of every path of 𝔹 is bounded by some constant. As such, the computation under σ started from any state of 𝔹 can never terminate if the initial counter value is sufficiently large. It is decidable in polynomial time whether a one-dimensional VASS MDP that contains no increasing BSCC of _σ for any σ∈Σ_ contains a bounded-zero BSCC 𝔹 of _σ for some σ∈Σ_. Since there is no increasing BSCC of _σ for any σ∈Σ_, there can be no solution to (I) with ∑_(p,,q)∈ T((p,,q))(c) > 0 as any such solution can be used to extract a strategy with expected positive effect on the counter (Appendix <ref>: Lemma <ref>). Therefore from Lemma <ref>, we have that there exists a ranking function rank, defined by a maximal solution of (II) (see Section <ref>), such that any transition from a non-deterministic state has a non-positive effect on rank, and the expected effect of a single computational step taken from a probabilistic state is non-positive on rank. Furthermore, rank depends on the counter value. Therefore any BSCC that contains a transition whose effect on rank can be non-zero cannot be bounded-zero. If such a transition were from a non-deterministic state, then it could only decrease the rank, and as rank can never be increased in expectation, this would lead to a positive chance of a cycle with negative effect on rank and thus also on the counter. And if the transition were from a probabilistic state, then as the expectation is non-positive, there would be a non-zero probability of a transition with negative effect on rank being chosen. Therefore any bounded-zero BSCC can contain only those transitions that never change rank. On the other hand, any BSCC 𝔹 of _σ for some σ∈Σ_, which contains only transitions that never change rank must be bounded-zero, as that means the effect of any cycle in 𝔹 must be 0 (as any non-zero cycle would have necessarily changed rank in at least one of its transitions). 
Therefore it is sufficient to decide whether there exists a BSCC 𝔹 of _σ for some σ∈Σ_, containing only those transitions that do not change rank. We can do this by analyzing each MEC one by one. For each MEC we first compute rank using the system (II) (see Section <ref>), then proceed by first removing all transitions that can change rank, and then iteratively removing non-deterministic states that do not have any outgoing transition left, and probabilistic states for which we removed any outgoing transition, until we reach a fixed point. If there exists a bounded-zero BSCC 𝔹 of _σ for some σ∈Σ_, then all transitions of 𝔹 will remain in the fixed point as they can never be removed. On the other hand once we reach the fixed point, it holds that for any state p that is left there either exists a “safe” outgoing transition if p∈ Q_n or all outgoing transitions are “safe” if p∈ Q_p, and these “safe” transitions end in a “safe” state. With state being “safe” if it is left in fixed point, and transitions being “safe” if their effect on rank is 0. Thus we can simply select any MD strategy on the states/transitions that are left and it must have a bounded-zero BSCC. And if the fixed point is empty, then there can be no bounded-zero BSCC 𝔹 of _σ for some σ∈Σ_. Clearly this can be done in polynomial time. One might ask whether the restriction on one-dimensional VASS MDPs not containing an increasing BSCC of _σ for some σ∈Σ_, is necessary in Lemma <ref>. The answer is yes, as Lemma <ref> shows that deciding existence of a bounded-zero BSCC of _σ for some σ∈Σ_, is -complete for general one-dimensional VASS MDPs. Given a one-dimensional VASS MDP , if there is no increasing or bounded-zero BSCC of A_σ for any σ∈Σ_, then n^2 is an upper estimate of for every type. Given a one-dimensional VASS MDP , if there is no increasing BSCC of A_σ for any σ∈Σ_, then n is an upper estimate of [c] for every type. To prove these two Lemmata, we need to consider a certain overapproximation of , which in some sense is in multiple states at the same time. This overapproximation will allow us to view any computation on as if with very high probability, the computation was at each step choosing one of finitely many (depending only on ) random walks/cycles, whose effects correspond to their corresponding BSCC (increasing, bounded-zero, unbounded-zero, decreasing). That is if there is no increasing or bounded-zero BSCC of A_σ for any σ∈Σ_, then the expected effect of these random walks can only either be negative (decreasing), or 0 but with non-zero variance (unbounded-zero). This then allows us to provide some structure to the VASS MDP which will then allow us to prove these lemmata. A key concept to defining this overapproximation is that of an MD-decomposition, which roughly states that for each path on a VASS MDP, we can color each transition using one of finitely many colors, such that the sub-path corresponding to each color is a path under some MD strategy associated with that color. We then show that we can color any path on using a finite set of colors, and that this coloring can be made “on-line”, that is a color can be assigned uniquely (in some sense) to each transition at the time this transition is taken in the computation. [<ref>] For every VASS MDP , there exist π_1,…,π_k ∈Σ_, p_1,…,p_k ∈ Q, and a function _ such that the following conditions are satisfied for every finite path α: * _(α) returns an MD-decomposition of α under π_1,…,π_k and p_1,…,p_k. 
* _(α)=_(α_..(α)-1) ∘ γ^1∘⋯∘γ^k, where exactly one of γ^i has positive length (the i is called the mode of α). * If the last state of α_..(α)-1 is probabilistic, then the mode of α does not depend on the last transition of α. Proof by induction on the number of outgoing transitions from non-deterministic states in . Base case: Every non-deterministic state has exactly one outgoing transition. Then there exists only a single strategy π and it is MD. Therefore let k=|Q|, π_1=…=π_k=π, and p_1,…,p_k be all the distinct states of . Then let _(ϵ)=ϵ, and for a path α with initial state p_i let _(α)=_(α_..(α)-1)∘γ^1∘γ^2∘…∘γ^k such that γ^j=p_j for j≠ i and γ^i=q,,r where α=p_i,…,q,,r. Induction step: Assume every VASS MDP ' with less then i outgoing transitions from non-deterministic states satisfies the lemma, and let have exactly i outgoing transitions from non-deterministic states. If contains no non-deterministic state p∈ Q_n with |(p)|≥ 2 then base case applies. Otherwise, let us fix some state p∈ Q_n with |(p)|≥ 2, and let t_r,t_g∈(p) be such that t_r≠ t_g. Let _r and _g be VASS MDPs obtained from by removing t_g and t_r, respectively. For any path α, we define a red/green-decomposition of α on as α = g_1∘ r_1∘ g_2∘ r_2∘…∘ g_ℓ∘ r_ℓ (all of positive length except potentially g_1 and r_ℓ) satisfying the following: * for every 1≤ i<ℓ, the last state of g_i is p; * if (r_ℓ)>0 then the last state of g_ℓ is p; * for every 1≤ i<ℓ, the last state of r_i is p; * for every 1<i ≤ℓ, the first state of g_i is p and the first transition of g_i is t_g; * for every 1≤ i ≤ℓ, the first state of r_i is p and the first transition of r_i is t_r; * g_α=g_1 ∘⋯∘ g_ℓ is a path on _g. * r_α=r_1 ∘⋯∘ r_ℓ is a path on _r. Clearly every path on has a unique red/green-decomposition that can be computed online. Now let __g and __r, π_1^g,…, π_k_g^g and π_1^r,…,π_k_r^r, p_1^g,…, p_k_g^g and p_1^r,…,p_k_r^r, k_g and k_r be the functions, MD strategies, states and k values for _g and _r, respectively. Note that their existence follows from the induction assumption. Then let k=k_g+k_r, π_1=π_1^g,π_2=π_2^g,…, π_k_g=π_k_g^g,π_k_g+1=π_1^r,π_k_g+2=π_2^r,…,π_k_g+k_r=π_k_r^r, p_1=p_1^g,p_2=p_2^g,…, p_k_g=p_k_g^g,p_k_g+1=p_1^r,p_k_g+2=p_2^r,…,p_k_g+k_r=p_k_r^r. We now define _(ϵ)=ϵ and _(α)=_(α_..(α)-1)∘γ^1∘γ^2∘…∘γ^k such that if t=(q,,r) is the last transition of α, and α = g_1∘ r_1∘ g_2∘ r_2… g_ℓ∘ r_ℓ is the red/green-decomposition of α on , it holds: * if (r_ℓ)=0, then let i be the __g-mode of g_α=g_1 ∘⋯∘ g_ℓ. Then we put γ^j=p_j for each j≠ i, and γ^i=q,,r; * if (r_ℓ)>0, then let i be the __r-mode of r_α=r_1 ∘⋯∘ r_ℓ. Then we put γ^j=p_j for each j≠ k_g+i, and γ^k_g+i=q,,r; From Lemma <ref> we can view any strategy σ on as if σ were choosing "which of the k MD strategies to advance" at each computational step. That is, let α be some path produced by a computation under a strategy σ, then the "MD strategy to advance" chosen by σ after α is the MD strategy π_i where i is such that * if the last state p of α is probabilistic, then i is the _-mode of the path α,,q, for any (p,,q)∈(p) (note that i does not depend on which transition of (p) is chosen); * if the last state p of α is non-deterministic, then let t=(p,,q)∈(p) be the transition chosen by σ in α. Then i is the _-mode of the path α,,q. Each of these MD strategies π_i can be expressed using a Markov chain _i which is initialized in state p_i. Whenever an MD strategy gets chosen, then the corresponding Markov chain makes one step. 
Naturally, there are some restrictions on which of the indexes can be chosen at a given time, namely a strategy can only choose an index i such that _i is currently in the same state as the Markov chain which was selected last. However, for our purposes we will consider pointing strategies that are allowed to choose any index, regardless of the current situation. We shall call a VASS MDP where such pointing strategies are allowed, while also adding a special “die” transition that causes instant termination an extended VASS MDP. Formally speaking, let π_1,…, π_k and p_1,…, p_k be the MD strategies and states from Lemma <ref> associated with _. An extended VASS MDP associated to the 1-dimensional VASS MDP is the 2-dimensional VASS MDP '= Q', (Q_n',Q_p'),T',P' where Q'=Q^k×{0,1,…,k }, Q_n'=Q^k×{0 }, Q_p=Q^k×{1,…,k }, and * T'=T_n'∪ T_p'∪ T_die' where * T_n'={((p_1,…,p_k,0)),(0,0),(p_1,…,p_k,i)| (p_1,…,p_k)∈ Q^k, i∈{1,…,k}}; * T_p'= {((p_1,…,p_k,i),(_i,0),(p_1,…,p_i-1,q_i,p_i+1,…,p_k,0))| (p_1,…,p_k)∈ Q^k, i∈{1,…,k}, and either π_i(p_i)=(p_i,_i,q_i) or both of p_i∈ Q_p and (p_i,_i,q_i)∈ T }; * T_die'={(p,(0,-1),p)| p∈ Q_n'}; * P'(((p_1,…,p_k,i),(_i,0),(p_1,…,p_i-1,q_i,p_i+1,…,p_k,0)))= 1 p_i∈ Q_n P((p_i,_i,q_i)) p_i∈ Q_p We call strategies on the extended VASS MDP pointing strategies. Note that each strategy on a VASS MDP has an equivalent pointing strategy. Whenever a pointing strategy σ chooses a transition ((p_1,…,p_k,0),(0,0),(p_1,…,p_k,i)), then we say σ pointed at the Markov chain _i. Note that in the following we only consider computations on the extended VASS MDP initiated in the initial state (p_1,…,p_k,0), and with the second counter being set to 0, so to simplify the notation, we will write only ^σ_n instead of ^σ_(p_1,…,p_k,0)(n,0). Given a sequence of strategies σ_1,σ_2,… we will define a sequence of pointing strategies σ_1^δ,σ_2^δ,… such that each σ_n^δ in some sense “behaves as” σ_n, but at the same time it “precomputes” the individual Markov chains. Since a formal description of σ_n^δ would be overly complicated, we will give only a high level description of σ_n^δ. The sequence σ_1^δ,σ_2^δ,… is parameterized by 0<δ<1. To help us define the behavior of σ_n^δ, we assume σ_n^δ “remembers” (it can always compute these from the input) some paths γ_1,…,γ_k,α. At the beginning these are all initialized to γ_1=…=γ_k=α=ϵ. A computation under σ_n^δ operates as follows: First σ_n^δ internally selects i∈{1,…,k } that σ_n would select after α; that is i is the _-mode of α', where α' is such that if the last state p of α is probabilistic then α' is α extended by a single transition, and if p is nondeterministic then α'=α,,q is α extended by the transition (p,,q) where (p,,q) is the transition chosen by σ_n in α (i.e. (p,,q) is chosen at random using the probabilistic distribution σ_n(α)). Then σ_n^δ asks if γ_i≠ϵ, if yes then it skips to step 2), otherwise it first performs step 1) before moving to step 2): * Let (p_1,…,p_k,0) be the current state of '. Then in each non-deterministic state σ_n^δ keeps pointing at _i until either, if p_i is not a state of a BSCC of M_i, it reaches a state (p_1,…,p_i-1,q_i,p_i+1,…,p_k,0) where q_i is a state of a BSCC of M_i while, or if p_i is a state of a BSCC of _i then σ_n^δ stops pointing at _i with probability 1/2 each time the computation returns to (p_1,…,p_k,0). In both cases, if this takes more then 2n^δ steps then σ_n^δ terminates using the “die” transitions (i.e. 
σ_n^δ keeps reducing the second counter until termination using the transitions from T_die'). After this ends, σ_n^δ sets γ_i to the path generated by the probabilistic transitions along this iteration (note that this can be seen as a path on _i). * Let γ_i=p_1,_1,p_2… p_ℓ. Since ℓ>1 we have all the information needed to know which index σ_n would have chosen in its next step. Let α'=α,_1,p_2 be α extended by the transition (p_1,_1,p_2). Then σ_n^δ replaces α with α', and γ_i with the path p_2… p_ℓ obtained by removing the first transition from γ_i. At this point this process repeats, until α is a terminating path for initial counter value n on , at which point σ^δ_n terminates using the transitions from T_die'. Let l=k· u, where u is the maximal possible change of the counter per single transition. When started in the initial state (p_1,…,p_k,0), we can view σ_n^δ as if we were computing as per σ_n, but occasionally made some “extra” (precomputed) steps in some of the Markov chains. These “extra” steps correspond exactly to the paths γ_1,…,γ_k, and since the probability of these being longer than kn^δ decreases exponentially with n, the probability that σ_n^δ, started with initial counter values (n+ln^δ,0), terminates before σ_n would have for initial counter value n in the first n^2+ϵ steps, goes to 0 as n goes to ∞, for each ϵ>0. Therefore, if it were to hold that σ_n can perform more than n^2+ϵ steps with probability at least a>0 for some ϵ>0 and for infinitely many n, conditioned on =β for some β (note that β does not depend on n), then it holds lim sup_n→∞_n+ln^δ^σ^δ_n(≥ n^2+ϵ )≥a·(β)/2>0. Similarly, the counter value of the first counter c of ', when computing under σ_n^δ from initial value n+ln^δ, is at each point at most n+ln^δ plus the effect of the paths γ_1,…, γ_k, and α. As the length of all of γ_1,…,γ_k is at most kn^δ, their total effect on the counter at each point can be at most ln^δ. And α is the path generated by a computation of σ_n. Therefore, if it were to hold that σ_n can pump the counter to more than n^1+ϵ with probability at least a>0 for some ϵ>0 and for infinitely many n, conditioned on =β for some β (note that β does not depend on n), then it holds lim sup_n→∞_n+ln^δ^σ^δ_n([c]≥ n^1+ϵ-ln^δ )≥a·(β)/2>0. Therefore the following two lemmata imply Lemmata <ref> and <ref>. If is a one-dimensional VASS MDP such that there is no increasing BSCC of A_σ for any σ∈Σ_, then lim sup_n→∞_n+ln^δ^σ^δ_n([c]≥ n^1+ϵ-ln^δ )=0 for each 0<δ<ϵ<1. If is a one-dimensional VASS MDP such that there is no increasing or bounded-zero BSCC of A_σ for any σ∈Σ_, then lim sup_n→∞_n+ln^δ^σ^δ_n(≥ n^2+ϵ )= 0 for each 0<δ<ϵ<1. Let us begin with a proof for Lemma <ref>. For simplification, let us assume that if the first counter of ' becomes negative while iterating in some Markov chain _j before it hits the target state of _j, that is while performing step 1) as per the description of σ_n^δ, then the computation does not terminate and instead it continues until this target state is reached, at which point the computation terminates if the counter is still negative. Clearly this can only prolong the computation. Therefore, each Markov chain _j contributes to the computation of σ_n^δ by at most a single path α_j (to reach a BSCC), and then by cycles over some state of a BSCC of _j. Let X_i^j denote the effect of the i-th cycle of _j performed under the computation of σ_n^δ. 
As each BSCC of A_σ for any σ∈Σ_ is either unbounded-zero or decreasing, it holds that either ^σ_n^δ(X_i^j)= 0 while Var^σ_n^δ(X_i^j)>0 (unbounded-zero), or ^σ_n^δ(X_i^j)<0 (decreasing). It also holds that ^σ_n^δ((α_j))=b_j for some constant b_j, and as the length of each α_i is bounded by n^δ, the maximal possible effect of all α_1,…,α_k on the counter is ln^δ. The maximal length of each cycle is bounded by n^δ, therefore we can upper bound the expected length of all cycles of _j as n^δ times the expected number of such cycles. Clearly the expected number of cycles corresponding to decreasing BSCCs are at most linear as each such cycle moves expectation closer to 0, and there is no way to move the expectation away from 0 by more then a constant. To bound the expected number of cycles corresponding to class unbounded-zero BSCCs, we shall use the following lemma that is proven in appendix <ref>. Let be a one-dimensional VASS MDP, and let X_1,X_2,… be random variables s.t. each X_i corresponds to the effect of a path on some unbounded-zero BSCC 𝔹 of _σ for some σ∈Σ_, that starts in some state p of 𝔹 and terminates with probability 1/2 every time p is reached again. Let S_0,S_1,… be defined as S_0=0, S_i=S_i-1+X_i, and τ_n be a stopping time such that either S_τ_n≤ -n or S_τ_n≥ n. Then it holds (τ_n)∈(n^2). It says that the expected number of cycles before their cumulative effect exceeds either n^1+μ or -n^1+μ is in (n^2+2μ), for each μ. Therefore the expected number of cycles upon either effect of -n-2ln^δ or n^1+ϵ is in (n^2+2ϵ) for all ϵ>0, as for all sufficiently large n it holds -n^1+ϵ≤ -n-2ln^δ. Therefore the expected length of whole computation, when started in n+ln^δ, and stopped upon either effect of -n-ln^δ or n^1+ϵ is in (n^2+2ϵn^δ)=(n^2+2ϵ+δ). Let X_n,ϵ^δ be the random variable encoding the number of steps the computation under σ_n^δ takes before the effect on counter is either less than -n-ln^δ, or at least n^1+ϵ, or until σ_n^δ performs a “die” move, whichever comes first. The above says that ^σ_n^δ_n+kn^δ(X_n,ϵ^δ)≤ an^2+2ϵ+δ for some constant a. Furthermore, let P_n,ϵ^δ be the probability that the computation under σ_n^δ reaches effect on counter at least n^1+ϵ before either hitting effect less then -n-ln^δ or performing a “die” move. Note that for 0<ϵ'<ϵ, it holds _n+ln^δ^σ_n^δ(X_n,ϵ^δ≥ X_n,ϵ'^δ) ≤ P_n,ϵ'^δ. Also note that it holds _n+ln^δ^σ^δ_n([c]≥ n^1+ϵ-ln^δ )≤ P_n,ϵ'^δ for each 0<ϵ'<ϵ and for all sufficiently large n, as for sufficiently large n if the counter reaches n^1+ϵ-ln^δ then it had to previously reach n^1+ϵ', as n^1+ϵ' grows asymptotically slower then n^1+ϵ-ln^δ. Now we shall use the following Lemma that is proven in the Appendix <ref>. For each 0<δ<ϵ<1, it holds lim_n→∞ P_n,ϵ^δ=0. Note that this already implies Lemma <ref>. To show also Lemma <ref>, let us write _n+ln^δ^σ_n^δ(≥ n^2+ϵ) ≤_n+ln^δ^σ_n^δ(X_n,ϵ^δ≥ n^2+ϵ) +P_n,ϵ^δ and for any 0<ϵ'<ϵ _n+ln^δ^σ_n^δ(X_n,ϵ^δ≥ n^2+ϵ) ≤_n+ln^δ^σ_n^δ(X_n,ϵ'^δ≥ n^2+ϵ)+_n+ln^δ^σ_n^δ(X_n,ϵ^δ≥ X_n,ϵ'^δ) ≤_n+ln^δ^σ_n^δ(X_n,ϵ'^δ≥ n^2+ϵ)+P_n,ϵ'^δ and from Markov inequality we get _n+ln^δ^σ_n^δ(X_n,ϵ'^δ≥ n^2+ϵ) ≤an^2+2ϵ'+δ/n^2+ϵ Which gives us _n+ln^δ^σ_n^δ(≥ n^2+ϵ) ≤_n+ln^δ^σ_n^δ(X_n,ϵ'^δ≥ n^2+ϵ)+P_n,ϵ'^δ +P_n,ϵ^δ≤an^2+2ϵ'+δ/n^2+ϵ +P_n,ϵ'^δ +P_n,ϵ^δ As this holds for each 0<ϵ'<ϵ, if we put ϵ'=ϵ-δ/4, then 2ϵ'+δ=ϵ-δ/2+2δ/2=ϵ+δ/2<ϵ if δ<ϵ, and therefore lim_n→∞an^2+2ϵ'+δ/n^2+ϵ=0. 
Therefore it holds lim_n→∞_n+ln^δ^σ_n^δ(≥ n^2+ϵ) ≤lim_n→∞ (an^2+2ϵ'+δ/n^2+ϵ +P_n,ϵ'^δ +P_n,ϵ^δ )=0 §.§ Proof of Lemma <ref> [<ref>] Let be a one-dimensional VASS MDP, and let X_1,X_2,… be random variables s.t. each X_i corresponds to the effect of a path on some unbounded-zero BSCC 𝔹 of _σ for some σ∈Σ_, that starts in some state p of 𝔹 and terminates with probability 1/2 every time p is reached again. Let S_0,S_1,… be defined as S_0=0, S_i=S_i-1+X_i, and τ_n be a stopping time such that either S_τ_n≤ -n or S_τ_n≥ n. Then it holds (τ_n)∈(n^2). Let us begin by showing the following technical result. Let be a one-dimensional VASS MDP. Let 𝔹 be an unbounded-zero BSCC of _σ for a strategy σ∈Σ_, and let p be a state of 𝔹. Let X denote the random variable representing the effect of a path under σ initiated in p, that ends with probability 1/2 every time the computation returns to p. Then there exists a function m:ℕ→ℕ such that m(n)≥ 2n, m∈(n), and such that for all sufficiently large n we get for X' = m X≤ -2n or X≥ m(n) X else that it holds ^σ_p(X')≥ 0 and Var^σ_p(X')≥ a for some a>0 that does not depend on n. Since 𝔹 is unbounded-zero, there exists both a positive as well as a negative cycle on 𝔹. Therefore there exists some a>0 such that ^σ_p(X≤ -i)≥ a^i. Also X is unbounded both from above as well as from below. As every |Q| steps there is non-zero, bounded from below by a constant, probability that we terminate in at most |Q| steps, it holds for each i>0 that ^σ_p(|X|>i)≤ b^i for some b<1. Therefore also ^σ_p(X≥ i)≤ b^i. We claim the lemma holds for any m(n)≥ 4nlog_b a. It holds ∑_i=m(n)^∞ i_p^σ(X=i) ≤∑_i=m(n)^∞ i_p^σ(X≥ i) ≤∑_i=m(n)^∞ ib^i=b^m(n)(-bm(n)+b+m(n))/(b-1)^2 And if we put in the value m(n)=x4nlog_b a, for x≥ 1 we obtain b^m(n)(-bm(n)+b+m(n))/(b-1)^2 = b^x4nlog_b a(-b(x4nlog_b a)+b+(x4nlog_b a))/(b-1)^2 = = a^x4n(-bx4nlog_b a+b+x4nlog_b a)/(b-1)^2 And furthermore, m(n)(_p^σ(X≥ m(n))+_p^σ(X≤ -2n)) ≥ m(n)_p^σ(X≤ -2n) ≥ m(n)a^2n = a^2nx4nlog_b a Also, as a<1, it holds for all sufficiently large n that a^x4n(-bx4nlog_b a+b+x4nlog_b a)/(b-1)^2 < a^2nx4nlog_b a Therefore it holds ^σ_p(X') = m(n)(_p^σ(X≥ m(n))+_p^σ(X≤ -2n)) +∑_i=-2n+1^m(n)-1 i_p^σ(X=i) ≥ ≥∑_i=m(n)^∞ i_p^σ(X=i) + ∑_i=-2n+1^m(n)-1 i_p^σ(X=i) And it also holds 0=^σ_p(X) = ∑_i=-∞^∞ i_p^σ(X=i) = ∑_i=-∞^-2n i_p^σ(X=i) + ∑_i=-2n+1^m(n)-1 i_p^σ(X=i) + ∑_i=m(n)^∞ i_p^σ(X=i) ≤ ≤∑_i=m(n)^∞ i_p^σ(X=i) + ∑_i=-2n+1^m(n)-1 i_p^σ(X=i) And therefore ^σ_p(X')≥∑_i=m(n)^∞ i_p^σ(X=i) + ∑_i=-2n+1^m(n)-1 i_p^σ(X=i)≥ 0 For the part about Var(X'). Since 𝔹 is unbounded-zero, it holds that ^σ_p(X)^2=Var^σ_p(X)≥ y>0 for some y. Therefore it holds for each n that 0<y≤^σ_p(X^2 )= ∑_i=1^∞ i^2_p^σ(|X|=i) = ∑_i=1^m(n) i^2_p^σ(|X|=i) +∑_i=m(n)^∞ i^2_p^σ(|X|=i)≤ ≤∑_i=1^m(n) i^2_p^σ(|X|=i) +∑_i=m(n)^∞ i^2 _p^σ(|X|≥ i) ≤∑_i=1^m(n) i^2_p^σ(|X|=i) +∑_i=m(n)^∞ i^2b^i And ∑_i=m(n)^∞ i^2b^i = b^m(n) (m^2(n) (-b^2) + 2 m^2(n) b - m^2(n) + 2 m(n) b^2 - 2 m(n) b - b^2 - b)/(b - 1)^3 But this fraction is dominated by b^m(n) which decreases exponentially in n (as b<1). Therefore for all sufficiently large n it holds y/2 ≤∑_i=1^m(n) i^2_p^σ(|X|=i). But this gives us ^σ_p((X')^2)≥∑_i=1^m(n) i^2_p^σ(|X|=i)≥ b/2 for each m(n)≥ n and all sufficiently large n. Let us now restate the Lemma <ref>. [<ref>] Let be a one-dimensional VASS MDP, and let X_1,X_2,… be random variables s.t. each X_i corresponds to the effect of a path on some unbounded-zero BSCC 𝔹 of _σ for some σ∈Σ_, that starts in some state p of 𝔹 and terminates with probability 1/2 every time p is reached again. 
Let S_0,S_1,… be defined as S_0=0, S_i=S_i-1+X_i, and τ_n be a stopping time such that either S_τ_n≤ -n or S_τ_n≥ n. Then it holds (τ_n)∈(n^2). As there are only finitely many BSCCs of _σ for σ∈Σ_, and each of them has only finitely many states, there are only finitely many distributions D_1,…,D_x such that each X_i≈ D_y for some 1≤ y ≤ x. Let X_1^n,X_2^n,… be random variables such that X_i^n = m(n) X_i≤ -2n or X_i≥ m(n) X_i else where m(n)=an is the maximal value of m(n) obtained from Lemma <ref> for any unbounded-zero BSCC of any _σ for any σ∈Σ_, and a is some constant. Then it holds that (X_i^n)≥ 0, and there exists b>0 that does not depend on n such that ((X_i^n)^2)≥ b for each i. Let S_0^n,S_1^n,… be a random walk defined as S_0^n=2n, S_i^n=S_i-1^n+X_i^n, and let τ_n' be a stopping time such that either S_τ_n'^n≤ 2n-n or S^n_τ_n'≥ 2n+n. Clearly it holds that τ_n'=τ_n, therefore it is enough to show that (τ_n')∈(n^2). Let us proceed by showing the following. Let M_i^n=(S_i^n)^2 - bi. Then M_0^n,M_1^n,… is a submartingale. (M^n_i+1| X_i^n,…,X_1^n) = ((S_i+1^n)^2 -b(i+1) | X_i^n,…,X_1^n) = ((S_i^n+X_i+1^n)^2 -b(i+1) | X_i^n,…,X_1^n) = ((S_i^n)^2+2S_i^nX_i+1^n + (X_i+1^n)^2 -b(i+1) | X_i^n,…,X_1^n) = (S_i^n)^2+2S_i^n(X_i+1^n| X_i^n,…,X_1^n)) + ((X_i+1^n)^2| X_i^n,…,X_1^n) -b(i+1) ≥ (S_i^n)^2+0 + b -b(i+1) = (S_i^n)^2 +b - bi -b = (S_i^n)^2 - bi = M_i As it holds that (τ_n')<∞, from the optional stopping theorem we obtain (M_0^n)≤(M^n_τ_n') which can be rewritten as (2n)^2 ≤((S^n_τ_n')^2 -bτ_n') =((S_τ_n'^n)^2) -b(τ_n'). As it holds (S^n_τ_n')^2≤ (3n+m(n))^2= (3n+an)^2= (9+6a+a^2)n^2 this gives us 4n^2+b(τ_n')≤((S^n_τ')^2)≤ (9+6a+a^2)n^2 and so (τ_n')≤(5+6a+a^2)n^2/b∈(n^2). §.§ Proof of Lemma <ref> [<ref>] For each 0<δ<ϵ<1, it holds lim_n→∞ P_n,ϵ^δ=0. Assume there exist some 0<δ<ϵ such that lim sup_n→∞ P_n,ϵ^δ=a>0. Then for each n_0, there exists n>n_0 such that P_n,ϵ^δ>a/2. Most notably, this means that the effect of the path α in σ_n^δ (see definition of σ_n^δ) is at least n^1+ϵ-ln^δ with probability at least a/2. But as α can be equally seen as a path under σ_n, this means that also the strategy σ_n reaches effect n^1+ϵ-ln^δ before the effect -n with probability R_n,ϵ^δ≥ a/2, for infinitely many n. Let type β_n be some type with the largest (β) among all types β, such that with probability at least a/2 σ_n reaches the effect at least n^1+ϵ-ln^δ before the effect -n conditioned the computation follows β. If the length of β_n were dependent on n then as probability of all long types decreases exponentially fast with their length, it could not hold that R_n,ϵ^δ>a/2 for arbitrarily large n. Therefore there must exist infinitely many values n_1,n_2,… such that β_n_1=β_n_2=…, let us denote this type by β=M_1,…,M_x (i.e., β=β_n_1). This means that n is not an upper estimate of [c] for type β. But in the next Lemma we are going to show that n is an upper estimate of [c] for type β, thus showing a contradiction. For each 0<ϵ_1 there exists 0<ϵ_2 and 0<b such that ^σ_n_p ([c] ≥ n^1+ϵ_1|=β )≤ bn^-ϵ_2 for each state p of M_1. We are going to do an induction over 1≤ i≤ x. Base case: i=1, then from Lemma <ref> we have ^σ_n_p ([c] ≥ n^1+ϵ_1|=M_1)≤ bn^-ϵ_1 for some constant b and for each 0<ϵ_1. Induction step: Assume this holds for i<x, let us now show it holds for i+1 as well. From induction assumption we have that for each 0<ϵ_1' there exists 0<ϵ_2' and 0<b' such that ^σ_n_p ([c] ≥ n^1+ϵ_1'|= M_1,…,M_i )≤ b'n^-ϵ_2' . 
Therefore, when the computation reaches M_i+1, the counter is larger than n^1+ϵ_1' with probability at most b'n^-ϵ_2'. As such, we can express for each 0<ϵ_1'<ϵ ^σ_n_p ([c] ≥ n^1+ϵ|= M_1,…,M_i+1 ) ≤^σ_n_p ([c] ≥ n^1+ϵ_1'|= M_1,…,M_i )+ ∑_r∈ M_i+1 P_r ^σ_n^r_r n^1+ϵ_1'([c] ≥ n^1+ϵ|= M_i+1) ≤^σ_n_p ([c] ≥ n^1+ϵ_1'|= M_1,…,M_i )+ ^σ_n^q_q n^1+ϵ_1'([c] ≥ n^1+ϵ|= M_i+1) where, for each state r of M_i+1, σ_n^r is the strategy which computes as σ_n does after the path from p to r, P_r=^σ_n_p({α|the first state of M_i+1 in α is r}), and q is the state of M_i+1 such that for each state r of M_i+1 it holds ^σ_n^r_r n^1+ϵ_1'([c] ≥ n^1+ϵ|= M_i+1)≤^σ_n^q_q n^1+ϵ_1'([c] ≥ n^1+ϵ|= M_i+1). But from Lemma <ref> we have that ^σ_n^q_q n^1+ϵ_1'([c] ≥ n^1+ϵ|= M_i+1) = ^σ_n^q_q n^1+ϵ_1'([c] ≥ (n^1+ϵ_1')^log_n^1+ϵ_1'n^1+ϵ|= M_i+1) ≤ b(n^1+ϵ_1')^1-log_n^1+ϵ_1'n^1+ϵ. Let y=1-log_n^1+ϵ_1'n^1+ϵ, and note that y<0 since ϵ_1'<ϵ. Then we can write ^σ_n_p ([c] ≥ n^1+ϵ|= M_1,…,M_i+1 ) ≤ b'n^-ϵ_2'+ bn^y, which for each ϵ_1>ϵ gives ^σ_n_p ([c] ≥ n^1+ϵ_1|= M_1,…,M_i+1 ) ≤^σ_n_p ([c] ≥ n^1+ϵ|= M_1,…,M_i+1 ) ≤ b'n^-ϵ_2'+ bn^y. And for ϵ_2=min (ϵ_2',-y) and b̂=max(2b',2b) this gives us ^σ_n_p ([c] ≥ n^1+ϵ_1|= M_1,…,M_i+1 ) ≤ b'n^-ϵ_2'+ bn^y≤b̂n^-ϵ_2, thus the induction step holds. §.§ Proof of Lemma <ref> [<ref>] An energy MDP has a safe configuration iff there exists a non-decreasing BSCC of _σ for some σ∈Σ_. The ⇐ direction is trivial. For the ⇒ direction, assume the opposite. Then there exists a safe configuration and a strategy such that the counter never decreases below some bound. But then, from Lemma <ref>, we can view the strategy as choosing which of the finitely many Markov chains is to advance. And since there is no non-decreasing BSCC of _σ for any σ∈Σ_, each of these Markov chains contains a negative cycle. Therefore, within every at most finite number of steps, the counter has a non-zero probability of decreasing, and thus it cannot remain bounded from below for the entire computation. §.§ Proof of Lemma <ref> [<ref>] The problem whether there exists a non-decreasing BSCC of _σ for some σ∈Σ_ such that it contains a given state p ∈ Q is NP-complete. Membership in NP is easy, as we simply have to guess a BSCC of some MD strategy and then verify that it contains no negative cycle while containing p. For the NP-hardness, let us show a reduction from the NP-complete problem of deciding whether a given graph G contains a Hamiltonian cycle. Let G=(V,E) be the graph for which we want to decide the existence of a Hamiltonian cycle, and let p∈ V be one of its vertices. Consider the 1-dimensional VASS MDP whose set of states is V, all of whose states are nondeterministic, and whose set of transitions T is such that whenever there is an edge {q,r}∈ E, q≠ p≠ r, then T contains the transitions (q,+1,r),(r,+1,q), and for each edge {p,q}∈ E, T contains the transitions (q,+1,p),(p,-|V|+1,q). We now claim that G contains a Hamiltonian cycle iff there exists a non-decreasing BSCC 𝔹 of _σ for some σ∈Σ_ such that 𝔹 contains p. First, let a Hamiltonian cycle α=p_1,t_1,p_2,…,p_l,t_l,p_1 exist. Then for the MD strategy σ(p_j)=t_j, _σ surely contains exactly one BSCC that contains p, and it contains exactly one cycle, whose effect is 0. Thus it is non-decreasing. Now let there exist a non-decreasing BSCC 𝔹 of _σ for some σ∈Σ_ such that 𝔹 contains p. Then since the effect of every outgoing transition of p is -|V|+1, the effect of every other transition is +1, and 𝔹 contains no negative cycles, there must be at least |V| transitions in 𝔹. 
But as σ is an MD strategy, there can be at most one outgoing transition per state, and so 𝔹 must contain every single state of G. But this means that the computation under σ follows a Hamiltonian cycle. The problem whether there exists a bounded-zero BSCC of _σ for some σ∈Σ_ is NP-complete for general one-dimensional VASS MDPs. This follows from the proof of the previous lemma above, as any bounded-zero BSCC of _σ for some σ∈Σ_ in the VASS MDP constructed for the graph G must contain p, while the BSCC associated with the strategy obtained from a Hamiltonian cycle is bounded-zero.
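To make the reduction above concrete, the following small sketch (the function names and the example graph are ours, for illustration only, and are not taken from the paper) builds the transition set of the 1-dimensional VASS MDP associated with a graph G and a distinguished vertex p, and checks that following a Hamiltonian cycle yields a cycle of total effect 0, i.e., a non-decreasing BSCC under the corresponding MD strategy.

```python
# Illustrative sketch of the reduction above (function names and the example graph are ours).
def build_vass_transitions(vertices, edges, p):
    """Transitions of the 1-dimensional VASS MDP built from G = (V, E) and vertex p."""
    T = []
    for (q, r) in edges:
        if p not in (q, r):
            T.append((q, +1, r))
            T.append((r, +1, q))
        else:
            q_ = r if q == p else q                  # the endpoint different from p
            T.append((q_, +1, p))
            T.append((p, -len(vertices) + 1, q_))
    return T

def cycle_effect(transitions, cycle):
    """Total counter effect of following `cycle` (a list of states, first == last)."""
    effect = {(s, t): e for (s, e, t) in transitions}
    return sum(effect[(cycle[i], cycle[i + 1])] for i in range(len(cycle) - 1))

# a 4-cycle graph: the Hamiltonian cycle p -> b -> c -> d -> p has effect (-|V|+1) + 1 + 1 + 1 = 0,
# i.e. the BSCC induced by the corresponding MD strategy is non-decreasing
V = ["p", "b", "c", "d"]
E = [("p", "b"), ("b", "c"), ("c", "d"), ("d", "p")]
print(cycle_effect(build_vass_transitions(V, E, "p"), ["p", "b", "c", "d", "p"]))   # prints 0
```

Exhaustively searching over MD strategies in this construction is of course exponential in general, which is consistent with the NP-hardness argument above.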
http://arxiv.org/abs/2307.03905v1
20230708053355
A novel high-order linearly implicit and energy-stable additive Runge-Kutta methods for gradient flow models
[ "Xuelong Gu", "Wenjun Cai", "Yushun Wang" ]
math.NA
[ "math.NA", "cs.NA" ]
A novel high-order linearly implicit and energy-stable additive Runge-Kutta methods for gradient flow models
Xuelong Gu, Wenjun Cai, Yushun Wang
========================================================================
This paper introduces a novel paradigm for constructing linearly implicit and high-order unconditionally energy-stable schemes for general gradient flows, utilizing the scalar auxiliary variable (SAV) approach and the additive Runge-Kutta (ARK) methods. We provide a rigorous proof of energy stability, unique solvability, and convergence. The proposed schemes generalize some recently developed high-order, energy-stable schemes and address their shortcomings. On the one hand, the proposed schemes can incorporate existing SAV-RK type methods after judiciously selecting the Butcher tableaux of the ARK methods <cit.>. The order of a SAV-RKPC method can thus be confirmed theoretically by the order conditions of the corresponding ARK method. Several new schemes are constructed based on our framework, which prove to be more stable than existing SAV-RK type methods. On the other hand, the proposed schemes are not limited to a specific form of the nonlinear part of the free energy and can achieve high order with fewer intermediate stages compared to the convex splitting ARK methods <cit.>. Numerical experiments demonstrate the stability and efficiency of the proposed schemes. Energy-stable schemes, Scalar auxiliary variable approach, Additive Runge-Kutta methods, Linearly implicit schemes. § INTRODUCTION Phase field models are versatile mathematical equations widely used in physics, materials science, and mathematics to simulate various physical phenomena, including the diffusion of two-phase interfaces, phase transitions in materials, and mechanical properties <cit.>. These models are useful for describing different phases of a material as well as the phase transitions and microstructural changes that occur in non-equilibrium states. The phase field model is usually represented as a gradient flow of a free energy functional ℱ(u) as follows: ∂ u/∂ t= 𝒢δℱ/δ u, (𝐱, t) ∈Ω× (0, T], with the initial condition u(𝐱, 0) = u_0(𝐱), where u is a state variable, Ω⊂ℝ^n represents the computational domain, δℱ/δ u denotes the variational derivative of ℱ with respect to u, and 𝒢 is a non-positive mobility operator. Classical phase field models include the Allen-Cahn (AC) equation <cit.>, the Cahn-Hilliard (CH) equation <cit.>, the molecular beam epitaxy (MBE) equation <cit.>, etc <cit.>. A significant aspect of (<ref>) is that the system preserves the following energy dissipation law when appropriate boundary conditions are imposed on u: d ℱ/dt = (δℱ/δ u, ∂ u/∂ t) = (δℱ/δ u, 𝒢δℱ/δ u) ≤ 0. Due to the nonlinearity of (<ref>), its analytical solution is typically intractable. Therefore, developing efficient and stable numerical schemes is imperative. One approach is constructing schemes that inherit a discrete counterpart of (<ref>), known as energy-stable methods <cit.>. As demonstrated in <cit.>, energy-stable methods can prevent numerical oscillations and unphysical solutions, and they have thus been the focus of extensive research over the past few decades. Classical energy-stable methods include convex splitting (CS) methods <cit.> and discrete variational derivative (DVD) methods <cit.>, among others. CS and DVD methods are fully implicit and thus require solving a nonlinear system at each time step. 
To improve computational efficiency, researchers have suggested linearly implicit or explicit energy-stable schemes, such as stabilized semi-implicit methods <cit.>, exponential time difference methods <cit.>, and the leapfrog methods <cit.>. The numerical methods discussed above are exclusive to particular gradient flow models and can not be effortlessly adapted to others. This status quo did not change until the energy quadratization (EQ) methods <cit.> were proposed. EQ methods provide an elegant platform for constructing linearly implicit schemes, but they involve solving linear systems with variable coefficients at each time step. In <cit.>, Shen et al. proposed scalar auxiliary variable (SAV) methods. Besides their unconditional stability, SAV methods require only the solution of a linear system with constant coefficients in each step. Furthermore, SAV approaches provide a universal framework for developing linearly implicit energy-stable schemes that can be extended to a variety of complex models <cit.>. Due to these advantages, SAV methods have received attention and are promoted in <cit.>. However, the above methods are limited to second-order accuracy, which may not accommodate high precision requirements. The nonlinearity of phase field models makes it difficult to develop high-order energy-stable schemes. In <cit.>, the authors present high-order energy-stable schemes by combining additive Runge-Kutta (ARK) methods with CS techniques (CS-ARK). To guarantee energy stability, these approaches impose stringent criteria on the coefficients of the ARK methods, necessitating a large number of intermediate stages even for a second-order scheme. Thus, the currently identified energy-stable CS-ARK methods are limited to third-order. In <cit.>, energy-stable schemes based on the Hamiltonian boundary value or discrete gradient methods are presented. These schemes are fully implicit and thus computationally expensive. Akrivis et al. introduced in <cit.> novel linearly implicit schemes based on a combination of SAV and RK (SAV-RK) approaches. For explicit discretization of nonlinear terms, they incorporated extrapolation techniques to predict solutions at specified time levels. The resulting methods are referred to as SAV-RKEX. However, excessive interpolation points lead to highly oscillatory interpolation polynomials, resulting in inaccurate predictions. Li et al. developed SAV-RKPC methods in <cit.> to obtain a more accurate prediction of numerical solutions at intermediate stages, significantly improving the stability and accuracy of SAV-RKEX methods. Nevertheless, such a technique increases the computational costs, and there is no theoretical guarantee of the necessary number of iterations to achieve adequate accuracy. In this paper, we propose a novel paradigm for constructing linearly implicit and high-order unconditionally energy-stable schemes, combining the SAV approach with the ARK methods. The proposed methods overcome the limitations of both CS-ARK and SAV-RK methods and can be applied to gradient flow systems with general nonlinear functionals. On the one hand, to guarantee energy stability, the proposed methods require only the algebraic stability of the implicit part of ARK methods. This enables the methods to achieve high accuracy and energy stability with fewer intermediate stages. 
On the other hand, our approach can be regarded as a novel prediction-correction technique that avoids the imprecision of the extrapolation techniques used in the SAV-RKEX methods and does not require the iterative prediction procedure of SAV-RKPC. Thus, the proposed approach guarantees both efficiency and stability. Additionally, our framework can accommodate all SAV-RK type integrators with some appropriate modifications, enabling us to theoretically analyze the consistency of the SAV-RKPC (or EQ) methods proposed in <cit.> by exploiting the order conditions of ARK methods. The remainder of this paper is organized as follows. In Section <ref>, we briefly overview the ARK and SAV methods. In Section <ref>, we reformulate the gradient flow model into an equivalent one and propose our new algorithms. Then, we prove the unconditional energy stability and solvability of the proposed methods. Moreover, we demonstrate the order conditions of SAV-RKPC methods by regarding them as ARK methods. Numerical examples and comparisons are presented in Section <ref>. Finally, we conclude the work in Section <ref>. § OVERVIEW OF ARK METHODS AND SAV REFORMULATION OF GRADIENT FLOWS In this section, we briefly overview the additive Runge-Kutta (ARK) methods. Some basic notations and concepts are also presented. By incorporating a scalar auxiliary variable, the original gradient flow model is transformed into an equivalent one (known as the SAV reformulation). The reformulated system preserves a quadratic energy and provides an elegant platform for developing high-order, linearly implicit, unconditionally energy-stable numerical methods. §.§ ARK methods We provide an overview of ARK methods, which are commonly used to solve the initial value problem for the following additive partitioned system: u_t(𝐱, t) = f(u) + g(u), u(𝐱, 0) = u_0(𝐱). Here, the right-hand side of (<ref>) is subdivided with respect to stiffness, nonlinearity, dynamical behavior, etc. Before we proceed, it is helpful to introduce the Butcher notations for two s-stage RK methods. [ c A; b^T ] = [ c_0 a_00 ⋯ a_0s-1; c_1 a_10 ⋯ a_1s-1; ⋮ ⋮ ⋯ ⋮; c_s-1 a_s-1 0 ⋯ a_s-1 s-1; b_0 ⋯ b_s-1 ] , [ c A; b^T ] = [ c_0 a_00 ⋯ a_0s-1; c_1 a_10 ⋯ a_1s-1; ⋮ ⋮ ⋯ ⋮; c_s-1 a_s-1 0 ⋯ a_s-1 s-1; b_0 ⋯ b_s-1 ] , where A ∈ℝ^s × s, b ∈ℝ^s, and c = A 1 with 1 = (1, 1, ⋯, 1)^T∈ℝ^s. A, c are defined in a similar manner. [Explicit RK (ERK) methods] An RK method is explicit if a_ij = 0 for j ≥ i. [Diagonally implicit RK (DIRK) methods] An RK method is diagonally implicit if a_ij = 0 for j > i and there exists 0 ≤ i ≤ s-1 such that a_ii≠ 0. [algebraically stable RK method <cit.>] Let us consider a symmetric matrix with entries M_ij = b_i a_ij + b_j a_ji - b_i b_j. An RK method is algebraically stable if its coefficients satisfy the following stability criteria. * b_i ≥ 0, ∀ i = 0, 1, ⋯, s-1, * M is positive semi-definite. We partition the time interval uniformly with a step size of τ and denote the time grid points as t_n = n τ. Let N_t = [T/τ]. Assuming that u^n has been computed in advance, the ARK methods update u^n+1 in two steps. First, the intermediate stages u_ni (i = 0, 1, ⋯, s-1) are computed from u_ni = u^n + τ∑_j=0^s-1 a_ij f(u_nj) + τ∑_j=0^s-1a_ij g(u_nj). Then, we update the solution by u^n+1 = u^n + τ∑_i=0^s-1 b_i f(u_ni) + τ∑_i=0^s-1b_i g(u_ni). It is worth mentioning that the above ARK methods have been employed to develop energy-stable schemes for phase field models in <cit.> and maximum-bound-principle-preserving methods for the AC equations in <cit.>. 
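To illustrate the two-step update above (stages, then combination), the following sketch advances a split system u_t = f(u) + g(u) by one ARK step, treating a linear stiff part f(u) = Lu with a DIRK tableau and g explicitly with a strictly lower-triangular tableau. The one-stage, first-order tableau pair and the toy matrix L at the end are only assumed illustrative inputs, not schemes advocated in this paper.

```python
import numpy as np

def ark_step(u, tau, A_im, b_im, A_ex, b_ex, L, g):
    """One ARK step for u' = L u + g(u): the linear part L is treated with the
    DIRK tableau (A_im, b_im), g with the explicit tableau (A_ex, b_ex), which
    must be strictly lower triangular."""
    s = len(b_im)
    n = u.size
    K_im = np.zeros((s, n))               # stage values of f(u_ni) = L u_ni
    K_ex = np.zeros((s, n))               # stage values of g(u_ni)
    I = np.eye(n)
    for i in range(s):
        rhs = u.copy()
        for j in range(i):
            rhs += tau * (A_im[i, j] * K_im[j] + A_ex[i, j] * K_ex[j])
        # stage equation u_ni = rhs + tau*a_ii*L u_ni  =>  (I - tau*a_ii*L) u_ni = rhs
        u_stage = np.linalg.solve(I - tau * A_im[i, i] * L, rhs)
        K_im[i] = L @ u_stage
        K_ex[i] = g(u_stage)
    return u + tau * (b_im @ K_im + b_ex @ K_ex)

# assumed one-stage, first-order illustrative pair (not a scheme from this paper)
A_im, b_im = np.array([[1.0]]), np.array([1.0])
A_ex, b_ex = np.array([[0.0]]), np.array([1.0])
L = np.array([[-10.0]])                   # toy stiff linear part
u_new = ark_step(np.array([1.0]), 0.01, A_im, b_im, A_ex, b_ex, L, lambda u: u - u**3)
```

For the SAV schemes constructed later, f plays the role of a constant-coefficient linear part and g collects the SAV-weighted nonlinear terms, so each implicit stage requires only a linear solve with constant coefficients.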
We emphasize that each ARK method can be considered as a partitioned Runge-Kutta (PRK) method <cit.>. Specifically, let us introduce an equivalent reformulation of (<ref>) as follows: { u̇_f(𝐱, t) = f(u), u̇_g(𝐱, t) = g(u), u(𝐱, t) = u_f(𝐱, t) + u_g(𝐱, t), . It is straightforward to see that (<ref>) is equivalent to (<ref>) if the consistent initial condition u_f(𝐱, 0) + u_g(𝐱, 0) = u^0(𝐱) is imposed. By employing a PRK method to (<ref>) and eliminating the intermediate variables u_f, u_g, we readily obtain the ARK method as mentioned above. By Remark <ref>, we can readily infer that an ARK method has an order of p if the corresponding PRK method has an order of p, as a ARK method is essentially a PRK method applied to the extended systems (<ref>). Adrian et al. conducted an extensive study on generalized ARK methods in <cit.> and provided a comprehensive list of their order conditions. Table <ref> summarizes the order conditions of ARK methods up to the third-order for convenience. §.§ Gradient flow systems and their SAV reformulation A gradient flow model can be expressed generally as u_t(𝐱, t) = 𝒢δℱ/δ u, 𝐱∈Ω, where u is a state variable, 𝒢∈ℝ^d × d is a negative semi-definite mobility operator, and δℱ/δ u is the variational derivative of the free energy functional ℱ to u. The triple (u, 𝒢, ℱ) uniformly specifies a gradient flow system. When appropriate boundary conditions are imposed on u, system (<ref>) dissipates the free energy as follows: d ℱ/dt = ( δℱ/δ u, ∂ u/∂ t) = ( δℱ/δ u, 𝒢δℱ/δ u) ≤ 0, where (u, v) = ∫_Ω u v d𝐱, ∀ u, v ∈ L^2(Ω) is the inner product. Moreover, we denote by u = √((u, u)) the corresponding norm. For illustration, let us assume a free energy functional of the form: ℱ(u, ∇ u) = 1/2(u, ℒu) + (F(u, ∇ u), 1), where ℒ is a linear, self-adjoint, and positive definite operator, F represents a bulk energy bounded below. The SAV approach introduces a new scalar variable such that q(t) = √( (F(u, ∇ u), 1) + C ), where C is a sufficiently large positive constant to guarantee that the square root in (<ref>) makes sense. The energy functional (<ref>) can be rewritten into a quadratic form as ℱ(u, q) = 1/2(u, ℒu) + q^2 - C. Let W(u) = √( (F(u, ∇ u), 1) + C) for simplicity. The model (<ref>) is reformulated into an equivalent system using the SAV approach <cit.>, as shown below: { u_t = 𝒢 (ℒu + 2q δ W/δ u - 2q ∇·δ W/δ∇ u ), q_t = (δ W/δ u, u_t ) + ( δ W/δ∇ u, ∇ u_t ), . equipped with the consistent initial conditions u(𝐱, 0) = u_0(𝐱), q(0) = √((F(u_0, ∇ u_0), 1) + C). Taking the inner products on both sides of the first and second equations of (<ref>) by ℒu + 2q δ W/δ u - 2q ∇·δ W/δ∇ u and 2q, respectively, and then combining the resulting equations, it is readily to confirm that system (<ref>) admits the following energy dissipation law. d/dtℱ(u, q) = ( ℒu + 2q δ W/δ u - 2q ∇·δ W/δ∇ u, 𝒢 (ℒu + 2q δ W/δ u - 2q ∇·δ W/δ∇ u )) ≤ 0. § HIGH-ORDER LINEARLY IMPLICIT AND ENERGY-STABLE SCHEMES §.§ Construction of time integrators Let us further reformulate (<ref>) as follows: { v_t = 𝒢( ℒv + 2q δ W/δ u[v] - 2q ∇·δ W/δ∇ u[v] ), u_t = 𝒢( ℒu + 2 q δ W/δ u[v] - 2q ∇·δ W/δ∇ u[v]), q_t = ( δ W/δ u[v], u_t ) + ( δ W/δ∇ u[v], ∇ u_t ), . equipped with the initial conditions u(𝐱, 0) = v(𝐱, 0) = u_0(𝐱), q(0) = √((f(u_0(𝐱), ∇ u_0(𝐱)), 1) + C). We first demonstrate the equivalence between the reformulated system (<ref>), (<ref>) and the original system (<ref>). Suppose that ℒ is a linear, self-adjoint, and positive definite operator. 
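As a small numerical illustration of the quantities just introduced, the snippet below evaluates the auxiliary variable q = √((F(u,∇u),1)+C) and the quadratic energy ℱ(u,q) = 1/2(u,ℒu) + q^2 - C on a one-dimensional periodic grid, for the double-well bulk energy F(u) = 1/4(u^2-1)^2 with ℒ = -ε^2Δ. The grid size, ε, and C are assumed placeholder values, not parameters used later in the paper.

```python
import numpy as np

# assumed placeholder setup: 1-D periodic grid on (0, 2*pi), eps = 0.05, C = 1
N, eps, C = 256, 0.05, 1.0
h = 2 * np.pi / N
x = h * np.arange(N)
k = 2 * np.pi * np.fft.fftfreq(N, d=h)            # integer wavenumbers

def sav_and_energy(u):
    """Return q = sqrt((F(u),1) + C) and the quadratic energy 1/2 (u, L u) + q^2 - C
    for F(u) = (u^2 - 1)^2 / 4 and L = -eps^2 * Laplacian (periodic BC)."""
    F_bulk = 0.25 * (u**2 - 1.0)**2
    q = np.sqrt(h * F_bulk.sum() + C)
    Lu = np.real(np.fft.ifft(eps**2 * k**2 * np.fft.fft(u)))   # L u
    return q, 0.5 * h * np.dot(u, Lu) + q**2 - C

q, energy = sav_and_energy(0.1 * np.sin(x))
print(q, energy)
```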
The reformulation (<ref>) and the initial condition (<ref>) are equivalent to (<ref>). According to the definition of q (<ref>) and introducing v(t) = u(t), it is evident that the original system (<ref>) implies (<ref>). We will now demonstrate that the combination of (<ref>) and (<ref>) leads to (<ref>). Subtracting the second equation from the first equation of (<ref>) yields u_t - v_t = 𝒢( ℒ u - ℒ v ). Taking the discrete inner product with ℒ u - ℒ v on both sides of the above equation produces 1/2d/dt(ℒ(u - v), u - v ) = (𝒢ℒ(u - v), ℒ(u - v)) ≤ 0. Due to the positive-definite of ℒ and (<ref>), we conclude that u(t) = v(t), ∀ 0 ≤ t ≤ T. Inserting (<ref>) into the third equation of (<ref>), we obtain q_t = ( δ W/δ u[v] , v ) + ( δ W/δ∇ u[v], ∇ v_t ) = d W[v]/dt. Combining (<ref>), (<ref>), and (<ref>) results in q = W[v] = W[u], Finally, it holds from the definition of W that 2q δ W/δ u = δ F/δ u, 2q ∇·δ W/δ∇ u = ∇·δ F/δ∇ u. Substituting the above results into (<ref>) yields (<ref>), which completes the proof. The positive-definite of ℒ is reasonable for most phase field models. For the CH equation with Neumann or periodic boundary conditions, we have ℒ = -Δ and 𝒢 = Δ. The mass conservation law guarantees the invertibility of ℒ. A similar argument applies to the MBE equation. For the AC equation, we have ℒ = -Δ and 𝒢 = -I. Although ℒ is only positive semi-definite in this case, we can introduce a stabilized parameter κ and equivalently recast the AC equation as u_t = - ( (κ I - Δ) u - (κ u + f(u)) ) := - (ℒ_κ u + f_κ (u)). Then, ℒ_κ = κ I - Δ is positive definite. The extension of (<ref>) results in a more complex system (<ref>). However, this reformulation provides an elegant platform for developing high-order, linearly implicit, and energy-stable schemes, as will be demonstrated in subsequent contexts. It should be noted that the equivalent reformulation of (<ref>) is not unique, and other similar reformulations can be employed to develop numerical schemes through the frameworks described in this paper. For simplicity, we only consider (<ref>) in this section. System (<ref>) is an extension of the original SAV approach (<ref>) proposed in <cit.>. Some other SAV approaches have recently gained popularity, including the exponential SAV approach <cit.> and the generalized SAV approach <cit.>. In <cit.>, Ju et al. have also introduced a novel exponential SAV approach to preserve both MBP and EDL for the AC equations. These approaches can also be extended similarly to (<ref>) and discretized by the methods outlined in subsequent contexts to obtain high-order and energy-stable schemes. For simplicity, we will only use the original SAV approach for illustrations. Assuming that u^n, v^n, and q^n are already determined. The SAV-ARK methods are outlined below: [SAV-ARK] The intermediate variables v_ni, u_ni, and q_ni are solved from { v_ni = v^n + τ∑_j=0^s-1 (a_ijv̇^ℒ_nj +a_ijv̇^𝒩_nj) , u_ni = u^n + τ∑_j=0^s-1 a_iju̇_nj, q_ni = q^n + τ∑_j=0^s-1 a_ijq̇_nj, v̇^ℒ_ni = 𝒢ℒ v_ni, v̇^𝒩_ni = 𝒢 ( 2q_niδ W/δ u[v_ni] - 2q_ni∇·δ W/δ∇ u[v_ni] ), u̇_ni = 𝒢 ( ℒ u_ni + 2q_niδ W/δ u[v_ni] - 2q_ni∇·δ W/δ∇ u[v_ni] ), q̇_ni = (δ W/δ u[v_ni], u̇_ni ) + ( δ W/δ∇ u[v_ni], ∇u̇_ni ). . Then, the solution at t_n+1 is v^n+1 = v^n + τ∑_i=0^s-1 b_i(v̇_ni^ℒ + v̇_ni^𝒩), u^n+1 = u^n + τ∑_i=0^s-1 b_i u̇_ni, q^n+1 = q^n + τ∑_i=0^s-1 b_i q̇_ni. We note here that linearly implicit schemes can be obtained by carefully choosing the RK coefficients in Algorithm <ref>. 
One effective method is discretizing u and q with DIRK methods and v with ERK methods. These methods will be referred to as SAV-DIARK methods in the subsequent contexts. It is important to emphasize that by introducing z = (v, u, q)^T, Algorithm <ref> can be regraded as ARK methods as follows: z_ni = z^n + τ∑_j=0^s-1 (a_ijΦ(z_nj) + a_ijΨ(z_nj) ), z^n+1 = z^n + τ∑_i=0^s-1 b_i ( Φ(z_ni) + Ψ(z_ni) ), where Φ(z) = ( [ 𝒢ℒ u; 𝒢( ℒu + 2q δ W/δ u[v] - 2q ∇·δ W/δ∇ u[v]); ( δ W/δ u[v], u̇) + ( δ W/δ∇ u, ∇u̇) ]), Ψ(z) = ( [ 𝒢( 2q δ W/δ u[v] - 2q ∇·δ W/δ∇ u[v] ); 0; 0 ]). This allows us to easily derive the order conditions of the proposed schemes by the order conditions of ARK methods. To further simplify and improve the stability of Algorithm <ref>, we introduce the following modified SAV-ARK (SAV-MARK) scheme. [SAV-MARK] The intermediate variables v_ni, u_ni, q_ni are solved from { v_ni = u^n + τ∑_j=0^s-1 ( a_ijv̇^ℒ_nj + a_ijv̇^𝒩_nj) , u_ni = u^n + τ∑_j=0^s-1 a_iju̇_nj, q_ni = q^n + ∑_j=0^s-1 a_ijq̇_nj, v̇^ℒ_ni = 𝒢ℒ v_ni, v̇^𝒩_ni = 𝒢( 2q_niδ W/δ u[v_ni] - 2q_ni∇·δ W/δ∇ u[v_ni]), u̇_ni = 𝒢 ( ℒ u_ni + 2q_niδ W/δ u[v_ni] - 2q_ni∇·δ W/δ∇ u[v_ni] ), q̇_ni = (δ W/δ u[v_ni], u̇_ni) + ( δ W/δ∇ u[v_ni], ∇u̇_ni). . Then, the solution at t_n+1 is u^n+1 = u^n + τ∑_i=0^s-1 b_i u̇_ni, q^n+1 = q^n + τ∑_i=0^s-1 b_i q̇_ni. In contrast to Algorithm <ref>, Algorithm <ref> does not require updating the variable v at integer time steps. This modification not only reduces computational costs but also improves the stability of the scheme in practice. Additionally, thanks to (<ref>), this modification does not affect the accuracy of Algorithm <ref>. §.§ Energy stability and solvability Suppose the RK methods employed on u in Algorithms <ref> and <ref> are algebraically stable. Then, SAV-ARK and SAV-MARK methods are unconditionally energy-stable in the sense ℱ(u^n+1, q^n+1) ≤ℱ(u^n, q^n), 0 ≤ n ≤ N_t - 1. By the definition of (<ref>) and the self-adjointness of ℒ, we can derive 1/2 (u^n+1, ℒ u^n+1) - 1/2(u^n, ℒ u^n) = τ∑_i=0^s-1 b_i (u̇_ni, ℒ u^n) + τ^2/2∑_i=0^s-1∑_j=0^s-1 b_ib_j (u̇_ni, ℒu̇_nj). Substituting u^n = u_ni - τ∑_j=0^s-1a_iju̇_nj into the above equation and observing that ∑_i = 0^s-1∑_j = 0^s-1 b_i a_ij(u̇_ni, ℒu̇_nj) = ∑_i=0^s-1∑_j=0^s-1 b_ja_ji(u̇_ni, ℒu̇_nj), we obtain 1/2 (u^n+1, ℒu^n+1) - 1/2(u^n, ℒ u^n) = τ∑_i=0^s-1 b_i (u̇_ni, ℒu_ni) - τ^2/2∑_i = 0^s-1∑_j=0^s-1 M_ij (u̇_ni, ℒu̇_nj) ≤τ∑_i=0^s-1 b_i (u̇_ni, ℒu_ni). The last inequality is a result of the positive definiteness of M and ℒ. Using a similar procedure, we have (q^n+1)^2 - (q^n)^2 ≤ 2τ∑_i=0^s-1 b_i q_niq̇_ni. Taking the discrete inner products of the sixth and last equations of (<ref>) with ℒu_ni and 2q_ni, respectively, and adding the obtained results together yield (u̇_ni, ℒu_ni) + 2 q_niq̇_ni = (𝒢μ_ni, μ_ni) ≤ 0, where μ_ni = ℒ u_ni + 2q_niδ W/δ u[v_ni] - 2q_ni∇·δ W/δ∇ u[v_ni]. The desired result is thus obtained by combining (<ref>)–(<ref>) with the condition b_i ≥ 0. The proposed approach uses quadratic energy (as depicted in equation (<ref>)) instead of the original one. When higher-order time discretization is applied to (<ref>), the resulting quadratic energy becomes a high-order approximation of the original energy. Although the SAV method may be criticized for this weakness, recent studies have attempted to overcome it. For example, Jiang et al. 
introduced the relaxed SAV approach in <cit.> to connect the modified and original energies at the discrete level, and <cit.> proposed an alternating approach that combines the SAV and Lagrange multiplier methods to preserve the original energy. Our technique can also be utilized to develop higher-order schemes based on these approaches. It should be noted that Theorem <ref> guarantees the boundedness of the numerical solutions {u^n}_n=0^N_t under the energy norm ·_ℒ, where u_ℒ := √((ℒu, u)). However, the solutions {v^n}_n=0^N_t obtained from Algorithm <ref> may not be bounded. Hence, Algorithm <ref> is expected to be more stable in practical applications, since it does not involve the update of v^n. Let us now concentrate on the solvability of SAV-MDIARK methods. Notice that the proof for SAV-DIARK methods is similar, and we omit it here. Assume that the mobility operator satisfies 𝒢 = - ℬ^* ℬ and the RK coefficients satisfy a_ii≥ 0 in Algorithm <ref>. The semi-discrete SAV-MDIARK scheme is then uniquely solvable when the time step is sufficiently small. Here, ℬ is a linear operator and ℬ^⋆ denotes its adjoint. Since we are considering the DIRK method, the scheme to solve the intermediate variable v_ni can be reformulated as follows: v_ni = u^n + τ a_ii𝒢ℒ v_ni + τ∑_j=0^i-1 (a_ijv̇_nj^ℒ + a_ijv̇_nj^𝒩). Notably, we can solve the above system one by one for i from 0 to s-1, where the only unknown in each step is v_ni. Combining the self-adjointness of ℒ with the assumption on 𝒢, we readily obtain a decomposition 𝒢ℒ = -𝒜^⋆𝒜. Therefore, the solution v_ni can be regarded as the minimizer of the convex functional defined by: 𝒮[v] = 1/2 (v^2 + τ a_ii𝒜 v^2) - (u^n + τ∑_j=0^i-1 (a_ijv̇_nj^ℒ + a_ijv̇_nj^𝒩), v). Therefore, the unique solvability of v_ni is straightforward. Then, we prove the solvability of the system coupling u_ni and q_ni. Let f_ni = δ W/δ u[v_ni] - ∇·δ W/δ∇ u[v_ni]. Thanks to the fact that q_ni is independent of space, it can be updated by q_ni = q^n + τ∑_j=0^i-1a_ijq̇_nj + τ a_ii (𝒜f_ni, 𝒜 u^1_ni ) /1 + 2τ a_iiℬ f_ni^2 - τ a_ii (𝒜f_ni, 𝒜 u_ni^2) , where u^1_ni and u_ni^2 are defined by u_ni^1 = argmin _u1/2 (u^2 + τ a_ii𝒜 u^2) - (u^n + τ∑_j=0^i-1 a_iju̇_nj, u ), u_ni^2 = argmin _u1/2 (u^2 + τ a_ii𝒜 u^2) - 2τ a_ii (𝒢 f_ni, u ). Since the time step is assumed to be sufficiently small, the solvability of the system is straightforward. 
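The decoupling used in the solvability argument above translates directly into an implementation: each stage requires one constant-coefficient solve for v_ni, two constant-coefficient solves for the analogues of u^1_ni and u^2_ni, and a scalar update for q_ni. The following one-dimensional sketch carries this out for an Allen-Cahn-type L^2 gradient flow (𝒢 = -1, ℒ = -ε^2Δ) with Fourier-diagonal solves; the tableau pair (one implicit and one explicit stage), the grid, ε^2, and C_0 are assumed illustrative choices and are not the SAV-MDIARK schemes tested later in the paper.

```python
import numpy as np

# assumed placeholder setup (not from the paper): 1-D periodic grid, eps^2 = 1e-2, C_0 = 1
N, eps2, C0 = 128, 1e-2, 1.0
h = 2 * np.pi / N
k = 2 * np.pi * np.fft.fftfreq(N, d=h)
GL = -eps2 * k**2                        # Fourier symbol of G*L with G = -1, L = -eps^2*Laplacian

def ip(a, b):                            # discrete L^2 inner product
    return h * np.dot(a, b)

def Wp(u):                               # W'(u) = F'(u) / (2*sqrt((F(u),1) + C_0))
    return (u**3 - u) / (2.0 * np.sqrt(h * np.sum(0.25 * (u**2 - 1.0)**2) + C0))

def apply_GL(z):                         # G*L z, evaluated spectrally
    return np.real(np.fft.ifft(GL * np.fft.fft(z)))

def helmholtz(rhs, coef):                # solve (I - coef*G*L) z = rhs
    return np.real(np.fft.ifft(np.fft.fft(rhs) / (1.0 - coef * GL)))

def sav_mdiark_step(u0, q0, tau, A_im, A_ex, b):
    """One SAV-MDIARK-type step: A_im diagonally implicit, A_ex strictly lower triangular."""
    s = len(b)
    vL, vN, ud, qd = [], [], [], []      # stage slopes for v (linear/nonlinear), u and q
    for i in range(s):
        # prediction stage: (I - tau*a_ii*G*L) v_ni = u^n + tau*sum_{j<i}(a_ij*vL_j + ahat_ij*vN_j)
        rhs_v = u0 + tau * sum(A_im[i][j] * vL[j] + A_ex[i][j] * vN[j] for j in range(i))
        v = helmholtz(rhs_v, tau * A_im[i][i])
        f = Wp(v)
        # coupled (u_ni, q_ni) stage, decoupled via two constant-coefficient solves
        rhs_u = u0 + tau * sum(A_im[i][j] * ud[j] for j in range(i))
        rhs_q = q0 + tau * sum(A_im[i][j] * qd[j] for j in range(i))
        a = tau * A_im[i][i]
        u1 = helmholtz(rhs_u, a)
        u2 = helmholtz(-2.0 * a * f, a)
        q = (rhs_q + a * ip(f, apply_GL(u1))) / (1.0 - a * (ip(f, apply_GL(u2)) - 2.0 * ip(f, f)))
        u = u1 + q * u2
        udot = apply_GL(u) - 2.0 * q * f
        vL.append(apply_GL(v))
        vN.append(-2.0 * q * f)          # G*(2*q_ni*W'(v_ni)) with G = -1
        ud.append(udot)
        qd.append(ip(f, udot))
    return (u0 + tau * sum(b[i] * ud[i] for i in range(s)),
            q0 + tau * sum(b[i] * qd[i] for i in range(s)))

# assumed one-stage, first-order pair for illustration only
A_im, A_ex, b = [[1.0]], [[0.0]], [1.0]
x = h * np.arange(N)
u = 0.1 * np.sin(x)
q = np.sqrt(h * np.sum(0.25 * (u**2 - 1.0)**2) + C0)
u, q = sav_mdiark_step(u, q, 1e-3, A_im, A_ex, b)
```

With the one-stage pair above, the step reduces to a first-order linearly implicit SAV scheme; higher-order behavior requires substituting a DIRK/ERK tableau pair that satisfies the relevant order conditions.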
To address this issue, previous researches truncated the nonlinearity to a global Lipschitz function with compact support. This technique is reliable when the continuous solution is bounded, and the numerical solution is sufficiently close to it. Here, we will adopt a similar approach. Let U(𝐱, t) be the exact solution to the L^2 gradient flow and Q(t) = √(∫_Ω F(U(𝐱, t)) d𝐱 + C ). We define M_u = U(𝐱, t)_C([0, T]; L^∞ (Ω)), Ṁ_u = U̇(𝐱, t)_C([0, T]; L^∞(Ω)), M_q = max_0 ≤ t ≤ T|Q(t)|. The constants provided above are well-defined by the assumption 𝒜2 and the definition of Q(t). We denote by ℬ = M_u + 1 and let W^'_ℬ(s) = W^'(s) ρ(s/ℬ), where ρ(s) is a smooth function with compact support, such that ρ(s) = { 1, 0 ≤ |s| ≤ 1, ∈ [0, 1], 1 ≤ |s| ≤ 2, 0, |s| ≥ 2. . It is readily to confirm that W^'_ℬ(·) is global Lipschitz continuous, and W^'_ℬ (s) = W^'(s), ∀ 0 ≤ |s| ≤ℬ, |W^'_ℬ(s)| ≤ L_1, | W^'_ℬ (r) - W^'_ℬ (s) | ≤ L_2 |r - s|. Following <cit.>, we introduce reference solutions 𝒱_ni, 𝒰_ni, 𝒬_ni, 𝒰^n and 𝒬^n, such that 𝒱_ni = U(t_n) + τ∑_j = 0^s-1 (a_ijΔ𝒱_nj - 2 a_ij𝒬_nj W^'_ℬ (𝒱_nj)), 𝒰_ni = U(t_n) + τ∑_j=0^s-1 a_ij𝒰̇_nj, 𝒰̇_ni = Δ𝒰_nj - 2 𝒬_nj W^'_ℬ(𝒱_nj), 𝒬_ni = Q(t_n) + τ∑_j=0^s-1 a_ij𝒬̇_nj, 𝒬̇_ni = (W^'_ℬ(𝒱_ni), 𝒰̇_ni). These reference solutions play important roles in obtaining global estimates for the SAV-MARK methods. Suppose that the time step satisfies τ≤min{ (2c_2)^-1, (4c(c_3 + c_4))^-1}, where the constants above will be specified in the subsequent derivations. We have the following estimates for the intermediate solutions 𝒱_ni 𝒱_ni_L^∞≤ M_u + 1/2, 0 ≤ n ≤ N_t, 0 ≤ i ≤ s-1. Moreover, ∑_i= 0^s-1 (|Q(t_ni) - 𝒬_ni| + U(t_ni) - 𝒱_ni + U(t_ni) - 𝒰_ni) ≤ c_3 τ^2, ∑_i=0^s-1Δ (U(t_ni) - 𝒱_ni)≤ c_4 τ. Since W^'_ℬ(U(t_ni)) = W^'(U(t_ni)), the exact solutions satisfy U(t_ni) = U(t_n) + τ∑_j = 0^s-1 (a_ijΔ U(t_nj) - 2 a_ij Q(t_nj) W^'_ℬ (U(t_nj))) + η^v_ni, U(t_ni) = U(t_n) + τ∑_j=0^s-1 a_ijU̇(t_nj) + η_ni^u, U̇_ni = Δ U(t_ni) - 2 Q(t_ni) W^'_ℬ(U(t_ni)), Q(t_ni) = Q(t_n) + τ∑_j=0^s-1 a_ijQ̇(t_nj) + η^q_ni, Q̇(t_ni) = (W^'_ℬ(U(t_ni)), U̇(t_ni)), where ∑_i=0^s-1 (η_ni^v + η_ni^u + |η_ni^q|) ≤ c_1 τ^2. Subtracting the second and sixth equations of (<ref>) from that of (<ref>) yields U(t_ni) - 𝒱_ni = τ∑_j=0^s-1 ( a_ijΔ (U(t_nj) - 𝒱_nj) - 2 a_ijξ_nj ) + η_ni^v, U(t_ni) - 𝒰_ni = τ∑_j=0^s-1 a_ij (U̇(t_nj) - 𝒰̇_nj) + η_ni^u, Q(t_ni) - 𝒬_ni = τ∑_j=0^s-1 a_ij (Q̇(t_nj) - 𝒬̇_nj) + η_ni^q, where U̇(t_ni) - 𝒰̇_ni = Δ (U(t_ni) - 𝒰_ni) - 2 ξ_ni, ξ_ni = Q(t_ni) W^'_ℬ(U(t_ni)) - 𝒬_ni W^'_ℬ (𝒱_ni), Q̇(t_ni) - 𝒬̇_ni = (W^'_ℬ (U(t_ni)) - W^'_ℬ(𝒱_ni), U̇(t_ni)) + ( W^'_ℬ(𝒱_ni), U̇(t_ni) - 𝒰̇_ni ). There is no difficulty in confirming that ξ_ni ≤ M_u U(t_ni) - 𝒱_ni + L_1 |Q(t_ni) - 𝒬_ni|, |Q̇(t_ni) - 𝒬̇_ni| ≤Ṁ_u L_2 U(t_ni) - 𝒱_ni + L_1 U̇(t_ni) - 𝒰̇_ni. According to Assumption 𝒜1, there exists a positive definite diagonal matrix H = diag{h_0, h_1, ⋯, h_s-1}, such that M = H A+ A^TH is positive definite. Therefore, we can find a sufficiently small constant l, such that M_l = (m_ij^l) = A^-TM A^-1 - 2 l H = A^-TH + H A^-1 - 2 l H is positive definite. Moreover, let M_d = H A^-1, M_s = H A^-1A. 
Then, 0 ≤ 2 l ∑_i=0^s-1h_i U(t_ni) - 𝒱_ni^2 - 2τ∑_i=0^s-1h_i (Δ ( U(t_ni) - 𝒱_ni ), U(t_ni) - 𝒱_ni) = 2 ∑_i,j = 0^s-1m_ij^d (U(t_ni) - 𝒱_ni, U(t_nj) - 𝒱_nj) - ∑_i,j = 0^s-1m_ij^l (U(t_ni) - 𝒱_ni, U(t_nj) - 𝒱_nj) - 2τ∑_i=0^s-1h_i (Δ ( U(t_ni) - 𝒱_ni ), U(t_ni) - 𝒱_ni) = 2 ∑_i,j = 0^s-1m_ij^d (U(t_ni) - 𝒱_ni, η_nj^v) - ∑_i,j = 0^s-1m_ij^l (U(t_ni) - 𝒱_ni, U(t_nj) - 𝒱_nj) - 4 τ∑_i,j = 0^s-1m^s_ij (U(t_ni) - 𝒱_ni, ξ_nj) ≤ 2 λ_d ∑_i=0^s-1U(t_ni) - 𝒱_ni∑_i=0^s-1η_ni^v - λ_l ∑_i=0^s-1U(t_ni) - 𝒱_ni^2 + 4 λ_s (Ṁ_u L_2 + L_1) τ∑_i=0^s-1 U(t_ni) - 𝒱_ni ( ∑_i=0^s-1 |Q(t_ni) - 𝒬_ni| + ∑_i=0^s-1 U(t_ni) - 𝒱_ni ), where λ_α and λ_α, α = d,l,s,h are the maximum and minimum eigenvalues of M_d, M_l, M_s, and H, respectively. Consequently, ∑_i=0^s-1U(t_ni) - 𝒱_ni≤4 s λ_s (Ṁ_u L_2 + L_1)/λ_lτ∑_i=0^s-1 ( U(t_ni) - 𝒱_ni + |Q(t_ni) - 𝒬_ni| ) + 2 s λ_d/λ_l∑_i=0^s-1η_ni^v. Following the same procedure, we can derive ∑_i=0^s-1U(t_ni) - 𝒰_ni≤4 s λ_h (Ṁ_u L_2 + L_1)/λ_lτ∑_i=0^s-1 ( U(t_ni) - 𝒱_ni + |Q(t_ni) - 𝒬_ni| ) + 2 s λ_h/λ_l∑_i=0^s-1η_ni^u. Combining (<ref>) with the second equation of (<ref>), we have ∑_i=0^s-1U̇ (t_ni) - 𝒰̇_ni≤4s λ_d λ_h (Ṁ_u L_2 + L_1)/λ_l λ_h∑_i=0^s-1 ( U(t_ni) - 𝒱_ni + |Q(t_ni) - 𝒬_ni| ) + s λ_d (2 λ_h + λ_l)/λ_lλ_hτ^-1∑_i=0^s-1η_ni^u. Subtracting the fourth equation of (<ref>) with that of (<ref>) gives Q(t_ni) - 𝒬_ni = τ∑_j=0^s-1 a_ij (Q̇(t_ni) - 𝒬̇_ni) + η_ni^q. Repeating to use the above technique and combining (<ref>) and (<ref>) then result in ∑_i=0^s-1|Q(t_ni) - 𝒬_ni| ≤2s λ_h (λ_h + 2s λ_d λ_h L_1)(Ṁ_u L_2 + L_1) /λ_l λ_hτ∑_i=0^s-1 ( U(t_ni) - 𝒱_ni + |Q(t_ni) - 𝒬_ni|) + 2s^2 λ_h λ_d (2 λ_h + λ_l) L_1 /λ_l^2 λ_h ∑_i=0^s-1 (η_ni^u + |η_ni^q|). Adding (<ref>), (<ref>) and (<ref>) together yields ∑_i= 0^s-1 ( U(t_ni) - 𝒱_ni + U(t_ni) - 𝒰_ni + |Q(t_ni) - 𝒬_ni| ) ≤ c_2 τ∑_i= 0^s-1 ( U(t_ni) - 𝒱_ni + U(t_ni) - 𝒰_ni + |Q(t_ni) - 𝒬_ni| ) + c_3/2τ^2. It follows by setting τ≤ (2c_2)^-1 that ∑_i= 0^s-1 ( U(t_ni) - 𝒱_ni + U(t_ni) - 𝒰_ni + |Q(t_ni) - 𝒬_ni| ) ≤ c_3 τ^2. Using equation (<ref>), we then demonstrate the boundedness of 𝒱_ni for sufficiently small τ. Inserting (<ref>) into the first equation of (<ref>) infers ∑_i=0^s-1Δ (U(t_ni) - 𝒱_ni)≤λ_d + 2 λ_s (M_u + L_1)/λ_h τ ( U(t_ni) - 𝒱_ni + |Q(t_ni) - 𝒬_ni| + η_ni^v ) ≤ c_4 τ. The Sobolev inequality f_L^∞≤ cf_H^2 and the triangular inequality give us 𝒱_ni_L^∞ ≤U(t_ni)_L^∞ + U(t_ni) - 𝒱_ni_L^∞ ≤ M_u + c U(t_ni) - 𝒱_ni_H^2≤ M_u + 2c(c_3 + c_4) τ. The estimate for 𝒱_ni in Lemma <ref> is straightforward after setting τ≤ (4c(c_3 + c_4))^-1. Therefore, we have completed the proof. Using the Taylor's formula and Lemma <ref>, it is readily to confirm that when the time step satisfies the condition of Lemma <ref>, the reference solutions further satisfy U(t_n+1) = U(t_n) + τ∑_i=0^s-1 b_i 𝒰̇_ni + η_n+1^u, Q(t_n+1) = Q(t_n) + τ∑_i=0^s-1 b_i 𝒬̇_ni + η_n+1^q, with η_n+1^u_H^1 + η_n+1^q≤ c_5 τ^p+1. We proceed to prove the convergence of the modified scheme obtained by replacing the nonlinear term W^'(·) in (<ref>) with W^'_ℬ(·). For clarity, we remain to use the original notation to denote the solution of this modified scheme. Our proof demonstrates that v_ni_L^∞≤ M_u + 1 for sufficiently small time steps. Consequently, W^'_ℬ(v_ni) = W^'(v_ni), which indirectly confirming the convergence of the SAV-MARK method (<ref>). Let 𝒥_ni = 𝒱_ni - v_ni, ℰ_ni = 𝒰_ni - u_ni, 𝒟_ni = 𝒬_ni - q_ni. Define solution errors E^n+1 = U(t_n+1) - u^n+1, D^n+1 = Q(t_n+1) - q^n+1. 
Let c_⋆ =((3c_5^2 + c_11)Texp(2c_12T))^1/2, and the time step τ≤min{ (2c_2)^-1, (4c(c_3 + c_4))^-1, (2c_6)^-1, (4c(c_⋆ c_7 + c_8))^-1/p-1, (2c_12)^-1}. Then, the SAV-MARK method is convergent in the sense E^n + |D^n | ≤ c_⋆τ^p, 0 ≤ n ≤ N_t. We will complete the proof by the mathematical induction. As SAV-MARK is a one-step method, it is enough to prove the result for n = l+1 while assuming it holds for n = l. Let n = l. Subtracting (<ref>) and (<ref>) from (<ref>), we get 𝒥_li = E^l + τ∑_j=0^s-1 (a_ijΔ𝒥_lj - 2 a_ijζ_lj), ℰ_li = E^l + τ∑_j=0^s-1 a_ijℰ̇_lj, 𝒟_li = D^l + τ∑_j=0^s-1 a_ij𝒟̇_lj, E^l+1 = E^l + τ∑_i=0^s-1 b_i ℰ̇_li + η_l+1^u, D^l+1 = D^l + τ∑_i=0^s-1 b_i 𝒟̇_li + η_l+1^q, where ℰ̇_li = Δℰ_li - 2ζ_li, ζ_li = 𝒬_li (W^'_ℬ(𝒱_li) - W^'_ℬ(v_li) ) + 𝒟_li W^'_ℬ(v_li), 𝒟̇_li = (W^'_ℬ (𝒱_li) - W^'_ℬ(v_li), 𝒰̇_li) + (W^'_ℬ (v_li), ℰ̇_li). Based on the proof of Lemma <ref>, we can conclude that |𝒬_li| ≤ℳ_q and |𝒰̇_li| ≤ℳ̇_u. Applying the propositions of W^'_ℬ(·) then yields ζ_li≤ (ℳ_q L_2 + L_1) (𝒥_li + |𝒟_li| ), |𝒟̇_li| ≤ (ℳ̇_u L_2 + L_1)( 𝒥_li + ℰ̇_li ). Furthermore, using (<ref>) and the same technique employed in Lemma <ref>, we can still arrive at ∑_i=0^s-1𝒥_li ≤2 s λ_s (ℳ_q L_2 + L_1)/λ_lτ∑_i=0^s-1 ( 𝒥_li + |𝒟_li| ) + 2s λ_d/λ_lE^l, ∑_i=0^s-1ℰ_li ≤2 s λ_h (ℳ_q L_2 + L_1)/λ_lτ∑_i=0^s-1 ( 𝒥_li + |𝒟_li| ) +2s λ_h/λ_lE^l, ∑_i=0^s-1 |𝒟_li| ≤s λ_h (λ_d λ_h + 2 s λ_d λ_h L_1)(ℳ̇_u L_2 + L_1) /λ_l^2 λ_hτ∑_i=0^s-1 ( 𝒥_li + |𝒟_li| ) + s^2 λ_h λ_d (2λ_h + λ_l) (ℳ̇_u L_2 + L_1)/λ_l^2 λ_h |D^l|, ∑_i=0^s-1ℰ̇_li ≤2s λ_d λ_h(ℳ_q L_2 + L_1)/λ_l λ_h∑_i=0^s-1 ( 𝒥_li + |𝒟_li| ) + s λ_d (λ_l + 2 λ_h)/λ_l λ_hτ^-1E^l. Consequently, ∑_i=0^s-1 ( 𝒥_li + ℰ_li + |𝒟_li| ) ≤ c_6 τ∑_i=0^s-1 ( 𝒥_li + ℰ_li + |𝒟_li| ) + c_7/2(E^l + |D^l|). The restriction τ≤ (2c_6)^-1 and the induction produce ∑_i=0^s-1 ( 𝒥_li + ℰ_li + |𝒟_li| ) ≤ c_⋆ c_7 τ^p. Combining the above estimate with first equation of (<ref>) then yields Δ𝒥_li≤ c_8 τ^p-1, where c_8 = λ_d c_⋆ c_7 + s λ_d c_⋆ + 2 λ_s (ℳ_q L_2 + L_1)c_⋆ c_7/λ_h. Employing the inequalities ∇ f^2 ≤fΔ f and f_L^∞≤ c f_H^2, it can be shown that if τ≤ (4c(c_⋆ c_7 + c_8))^-1/p-1, v_li_H^2 ≤𝒱_li_H^2 + 𝒥_li_H^2≤𝒱_li_H^2 + 2(c_⋆ c_7 + c_8) τ^p-1≤ c_9 , v_li_L^∞ ≤𝒱_li_L^∞ + 2c(c_⋆ c_7 + c_8) τ^p-1≤ M_u + 1. Let us now provide estimates for E^l+1 and D^l+1. Taking the difference between E^l+1^2 and E^l^2, and use the fourth equation of (<ref>) yield E^l+1^2 - E^l^2 = 2τ∑_i=0^s-1 (E^l, b_i ℰ̇_li) + τ^2 ∑_i=0^s-1∑_j=0^s-1 b_i b_j (ℰ̇_li, ℰ̇_lj) + 2 (E^l + τ∑_i=0^s-1 b_i ℰ̇_li, η^u_l+1) + η_l+1^u^2. Next, we individually estimate each of the terms on the right-hand side of (<ref>). Based on the second equation of (<ref>) and the algebraically stable condition, we deduce 2τ∑_i=0^s-1 (E^l, b_i ℰ̇_li) + τ^2 ∑_i=0^s-1∑_j=0^s-1 b_i b_j (ℰ̇_li, ℰ̇_lj) + 2 τ∑_i=0^s-1 b_i ∇ℰ_li^2 = - τ^2 ∑_i=0^s-1∑_j=0^s-1 m_ij (ℰ̇_li, ℰ̇_lj) + 2τ∑_i=0^s-1 b_i (ℰ_li, ℰ̇_li) + 2 τ∑_i=0^s-1 b_i ∇ℰ_li^2 ≤ 4(ℳ_q L_2 + L_1 + 1) τ∑_i=0^s-1 b_i (𝒥_li^2 + ℰ_li^2 + |𝒟_li|^2). Using the Cauchy-Schwarz inequality and ab ≤τ/2 a^2 + 1/2 τ b^2 yield (E^l + τ∑_i=0^s-1 b_i ℰ̇_li, η^u_l+1) = E^lη_l+1^u + τ∑_i=0^s-1 b_i (Δℰ_li - 2ζ_li, η^u_l+1) ≤τ/2E^l^2 + 1/2τη_l+1^u^2 + τ∑_i=0^s-1 b_i (∇ℰ_li∇η_l+1^u + 2 ζ_liη_l+1^u) ≤τ/2E^l^2 + τ/2∑_i=0^s-1 b_i ∇ℰ_li^2 + 2(ℳ_q L_2 + L_1) τ∑_i=0^s-1 b_i (𝒥_li^2 + |𝒟_li|^2) + 2 c^2_5 τ^2p+1. Inserting (<ref>) and (<ref>) into (<ref>) infers E^l+1^2 + τ∑_i=0^s-1 b_i ∇ℰ_li^2 ≤ (1 + τ )E^l^2 + 8(ℳ_q L_2 + L_1 + 1)τ∑_i=0^s-1 b_i (𝒥_li^2 + ℰ_li^2 + |𝒟_li|^2) + 3c_5^2 τ^2p+1. 
Analogously, |D^l+1|^2 ≤ (1 + c_9 τ) |D^l|^2 + c_10τ∑_i=0^s-1 (𝒥_li^2 + ℰ_li^2 + |𝒟_li|^2) + c_11τ^2p+1 . Moreover, ∑_i=0^s-1 (𝒥_li^2 + ℰ_li^2 + |𝒟_li|^2) ≤ ( ∑_i=0^s-1𝒥_li + ℰ_li + |𝒟_li| )^2 ≤ 2 c_7^2 ( E^l^2 + |D^l|^2 ). Collecting (<ref>), (<ref>), (<ref>) produces E^l+1^2 - |D^l+1|^2 ≤ (1 + c_12τ) (E^l^2 + |D^l|^2) + (3c_5^2 + c_11) τ^2p+1. We observe that c_5, c_11, c_12 are independent of c_⋆ and discrete parameters according to the derivations. By selecting τ≤ (2c_12)^-1 and applying the discrete Gronwall inequality, we can derive the desired result with c_⋆ = ((3c_5^2 + c_11)Texp(2c_12T))^1/2. Therefore, the proof is completed. §.§ Relationships with the SAV-RK methods In <cit.>, Li et al. developed high-order unconditionally energy-stable schemes based on SAV techniques and RK methods. To obtain arbitrarily high-order and linearly implicit schemes, they proposed an iterative procedure to get a sufficiently accurate prediction of u, which was then used to discretize the nonlinear terms. In this section, we demonstrate that every SAV-RK methods can be viewed as an ARK method applied to some appropriate reformulations of (<ref>). This new perspective enables us to systematically investigate the order conditions of existing works, utilizing the order conditions of ARK approaches. Employing their SAV-RKPC(M) methods to gradient flows leads to. [SAV-RKPC(M)] Given a fundamental RK method with coefficients (A, b, c), the intermediate variables are calculated by the prediction-correction procedure as 1. Prediction: We initialize u_ni^(0) = u^0, q_ni^(0) = q^0. Let M be a positive integer. Then, we iteratively compute u_ni^(m) and q_ni^(m) for m = 0 to M-1 by { u_ni^(m+1) = u^n + τ∑_j=0^s-1 a_iju̇_nj^(m+1), q_ni^(m+1) = q^n + τ∑_j=0^s-1a_ijq̇_nj^(m+1) u̇_ni^(m+1) = 𝒢( ℒu_ni^(m+1) + 2q_ni^(m)δ W/δ u [u_ni^(m)] - 2 q_ni^(m)∇·δ W/δ∇ u[u_ni^(m)] ), q̇_ni^(m+1) = (δ W/δ u [u_ni^(m+1)], u̇_ni^(m+1)) + (δ W/δ∇ u[u_ni^(m+1)], ∇u̇_ni^(m+1)). . If max_i u_ni^(m+1) - u_ni^(m)_∞≤ TOL, we stop the iterations and set u_ni^⋆ = u_ni^(m+1). Otherwise, we set u_ni^⋆ = u_ni^(M). 2. Correction: For the predicted u_ni^⋆, we compute the intermediate stages u̇_ni and q̇_ni as follows: { u_ni = u^n + τ∑_j=0^s-1 a_iju̇_nj, q_i^n = q^n + τ∑_j=0^s-1 a_ijq̇_nj, u̇_ni = 𝒢 ( ℒu_ni + 2q_niδ W/δ u[u_ni^⋆] - 2q_ni∇·δ W/δ∇ u[u_ni^⋆] ), q̇_ni = ( δ W/δ u [u_ni^⋆], u̇_ni) + ( δ W/δ∇ u [u^⋆_ni], ∇u̇_ni ), . and then update u^n+1, q^n+1 by u^n+1 =u^n + τ∑_i=0^s-1 b_i u̇_ni, q^n+1 = q^n + τ∑_i=0^s-1 b_i q̇_ni. We display that Algorithm <ref> can be regarded as an ARK method for the following alternative reformulation of (<ref>). { w_t = 𝒢( ℒv + 2 r δ W/δ u[v] - 2 r ∇·δ W/δ∇ u[v] ), v_t = 𝒢( ℒv + 2r δ W/δ u[v] - 2r ∇·δ W/δ∇ u[v] ), u_t = 𝒢( ℒu + 2 q δ W/δ u[v] - 2q ∇·δ W/δ∇ u[v]), r_t = (δ W/δ u[v], v^ℒ_t) + (δ W/δ u[w], v^𝒩_t) + ( δ W/δ∇ u[v], ∇ v_t^ℒ) + ( δ W/δ∇ u[w], ∇ v^𝒩_t ), q_t = ( δ W/δ u[v], u_t ) + ( δ W/δ∇ u[v], ∇ u_t ), . where v_t^ℒ = 𝒢ℒ v, v^𝒩_t = 𝒢 (2 r δ W/δ u[v] - 2 r δ W/δ∇ u[v]). Let us explain the equivalence between (<ref>) and (<ref>). Subtracting the second from the first equation of (<ref>) and investigating the initial condition, we obtain: v(t) = w(t), ∀ 0 < t ≤ T. Substituting this formula into the fourth equation of (<ref>), and subtracting the third equation of (<ref>) from the second, the fifth equation of (<ref>) from the fourth resulting in u_t - v_t = 𝒢( ℒ(u - v) + 2(q - r) (δ W/δ u[v] - ∇·δ W/δ∇ u[v]) ), q_t - r_t = ( δ W/δ u[v], u_t - v_t) + ( δ W/δ∇ u[v], ∇ u_t - ∇ v_t ). 
Taking the inner products on both sides of the first and the second equations in (<ref>) with ℒ(u - v) + 2(q - r) (δ W/δ u[v] - ∇·δ W/δ∇ u[v]) and 2(q - r), respectively, and adding the resulting equations together yield: 1/2 (u - v, ℒ(u - v)) + (q - r)^2 ≤ 0. This implies u(t) = v(t), q(t) = r(t). The remaining steps follow the proof of Lemma <ref>, which we omit here for brevity. Let z = (w, v, u, r, q)^T. We split the reformulated system (<ref>) as follows z_t = Φ_1(z) + Φ_2(z) + Φ_3(z) + Φ_4(z), where Φ_1(z) = ( [ 0; 𝒢ℒ v; 𝒢 ( ℒu + 2 q δ W/δ u[v] - 2q ∇·δ W/δ∇ u[v]); ( δ W/δ u[v], v^ℒ_t ) + ( δ W/δ∇ u[v], ∇ v_t^ℒ ); ( δ W/δ u[v], u_t ) + ( δ W/δ∇ u[v], ∇ u_t ) ]), Φ_2(z) = ( [ 0; 𝒢 ( 2 r δ W/δ u[v] - 2 r ∇·δ W/δ∇ u[v] ); 0; ( δ W/δ u[v], w^𝒩_t ) + ( δ W/δ∇ u[v], ∇ w_t^𝒩 ); 0 ]), Φ_3(z) = ( [ 𝒢ℒ v; 0; 0; 0; 0 ]), Φ_4(z) = ( [ 𝒢 ( 2 r δ W/δ𝐮[𝐯] - 2 r ∇·δ W/δ∇𝐮[𝐯] ); 0; 0; 0; 0 ]). Employing four different RK methods to (<ref>) yields the following SAV-ARKII method {z_ni = z^n + τ∑_j=0^s-1( a_ijΦ_1(z_nj) + a_ijΦ_2(z_nj) + a_ijΦ_3 (z_nj) + a_ijΦ_4(z_nj) ), z^n+1 = z^n + τ∑_i=0^s-1 b_i (Φ_1 (z_ni) + Φ_2(z_ni) + Φ_3 (z_ni) + Φ_4(z_ni) ). . Furthermore, we rewrite the above scheme componentwisely and employ the techniques outlined in Section <ref> to modify the obtained scheme, ultimately resulting in the SAV-MARKII method as shown below. [SAV-MARKII] We solve the intermediate stages from { w_ni = u^n + τ∑_j=0^s-1 ( a_ijv̇_nj^ℒ + a_ijv̇^𝒩_nj ), v_ni = u^n + τ∑_j=0^s-1 (a_ijv̇_nj^ℒ + a_ijv̇^𝒩_nj), r_ni = r^n + τ∑_j=0^s-1( a_ijṙ_nj^ℒ + a_ijṙ_nj^𝒩), u_ni = u^n + τ∑_j=0^s-1 a_iju̇_nj, q_ni = q^n + τ∑_j=0^s-1 a_ijq̇_nj, v̇_ni^ℒ = 𝒢ℒ v_ni, ṙ_ni^ℒ = ( δ W/δ u[v_ni], v̇_ni^ℒ) + (δ W/δ∇ u[v_ni], ∇v̇_ni^ℒ ), v̇_ni^𝒩 = 𝒢 ( 2r_niδ W/δ u[v_ni] - 2r_niδ W/δ∇ u[v_ni] ) , ṙ_ni^𝒩 = ( δ W/δ u[v_ni], v̇_ni^𝒩) + (δ W/δ∇ u[v_ni], ∇v̇_ni^𝒩 ), u̇_ni = 𝒢(ℒ u_ni + 2q_niδ W/δ u[v_ni] - 2q_niδ W/δ∇ u[v_ni]) , q̇_ni = (δ W/δ u [v_ni], u̇_ni ) + (δ W/δ∇ u [v_ni], ∇u̇_ni). . Then, we update u^n+1 = u^n + τ∑_i=0^s-1 b_i u̇_ni, q^n+1 = q^n + τ∑_i=0^s-1 b_i q̇_ni. Consider a SAV-RKPC(M) method associated with the fundamental RK method (A, b, c) of stage s. Then, it can be regarded as a SAV-MARKII method with the tableaux [ 𝐜 𝐀; 𝐛^T ] = [ 0 O O; 1_M ⊗ c O I_M ⊗ A; 0^T (𝐞_M ⊗ b)^T ], [ 𝐜 𝐀; 𝐛^T ] = [ 0 O O; 1_M ⊗ c I_M ⊗ A O; 0^T (𝐞_M ⊗ b)^T ], [ 𝐜 𝐀; 𝐛^T ] = [ 1_M ⊗ c O I_M ⊗ A; c O A; 0^T (𝐞_M ⊗ b)^T ], [ 𝐜 𝐀; 𝐛^T ] = [ 1_M ⊗ c I_M ⊗ A O; c A O; 0^T (𝐞_M ⊗ b)^T ], where I_s represents the identity matrix, 𝐞_M = (0, 0, ⋯, 1)^T, and ⊗ denotes the Kronecker product. Notice that, we have w_n,i+ms = v_n,i+(m+1)s and r_n,i+ms = q_n,i+(m+1)s in Algorithm <ref>. In addition, the intermediate stages of Algorithm <ref> and <ref> are related as follows: (u̇_ni^(m), q̇_ni^(m), q_ni^(m), u_ni^(m)) = (v̇^ℒ_n,i+ms+v̇^𝒩_n,i+ms, q̇_n,i+ms, q_n,i+ms, v_n,i+ms), u_ni^⋆ = v_n,i+Ms. By Theorem <ref>, the consistency error of the SAV-RKPC(M) can be investigated by the order conditions of the generalized ARK methods straightforwardly. Readers are referred to <cit.> for convenience. Taking the fourth-order Gauss SAV-RKPC (SAV-GRK4PC) used in <cit.> as an example, the SAV-GRK4PC(1), SAV-GRK4PC(2), SAV-GRK4PC(3) methods arrive at second-, third- and fourth-order, respectively, which agrees with the numerical experiments proposed in <cit.>. 
Although we have demonstrated that the SAV-GRK4PC(3) achieves fourth-order accuracy, it is advisable to carry out additional iterative steps in practical computations to guarantee the stability of the proposed method. § NUMERICAL EXPERIMENTS In this section, we demonstrate the effectiveness of our methods in solving the 2D AC, CH, and MBE equations. The spatial domain is Ω = (x_L, x_R) × (y_L, y_R), and periodic boundary conditions are employed in all examples. To guarantee both accuracy and efficiency, we use the Fourier pseudo-spectral method for spatial discretization. Let N_x and N_y be positive integers. The spatial domain is uniformly partitioned with step sizes h_x = x_R - x_L/N_x and h_y = y_R - y_L/N_y. We define Ω_N = { (x_i, y_j) |x_i = x_L + i h_x, y_j = y_L + j h_y }, and 𝕄_N denotes the space of periodic grid functions on Ω_N. We use the notations ∇_N, ∇_N ·, and Δ_N to represent discrete gradient, divergence, and Laplace operators to the Fourier pseudo-spectral method, respectively. Readers are referred to <cit.> for details. Given u, v ∈𝕄_N, the discrete L^2 inner product, discrete L^2 and L^∞ norms are (u, v)_N = h_x h_y ∑_j=0^N_x-1∑_k=0^N_y-1 u_jk v_jk, u_N = √((u, u)_N), u_∞ = max_0 ≤ j ≤ N_x-1 0 ≤ k ≤ N_y-1 |u_jk|. §.§ AC equation To validate the convergence results presented in Theorem <ref>, we consider the following AC equation u_t = ε^2 Δ u + u - u^3, which can be obtained by setting 𝒢 = -1 and ℱ[u] = ∫_Ωε^2/2 |∇ u|^2 + 1/4 (u^2 - 1)^2 d𝐱 in (<ref>). Employing the Fourier spectral method to (<ref>), the fully discrete system of the AC equation is to find (u_ni, v_ni, q_ni) ∈𝕄_N ×𝕄_N ×ℝ and (u^n+1, q^n+1) ∈𝕄_N ×ℝ, such that v_ni = u^n +τ∑_j=0^s-1 (a_ijΔ_N v_nj - 2 a_ij q_nj W^'(v_nj)), u_ni = u^n + τ∑_j=0^s-1 a_iju̇_nj, q_ni = q^n + τ∑_j=0^s-1 a_ijq̇_nj, u^n+1 = u^n + ∑_i=0^s-1 b_i u̇_ni, q^n+1 = q^n + ∑_i=0^s-1 b_i q̇_ni, where u̇_ni = Δ_N u_ni - 2q_ni W^' (v_ni), q̇_ni = (W^'(v_ni), u̇_ni)_N, W^'(u) = F^'(u)/2√( (F(u), 1 )_N +C_0 ). It is worth mentioning that the discrete operator Δ_N satisfies the summation-by-parts formula. By following the procedure outlined in the proof of Theorem <ref> and <ref>, we can confirm the energy-stability and solvability of the above fully-discrete scheme. We set the computational domain as Ω = (0, 1)^2, the parameter as ε = 0.01, and the initial condition as u_0 = 0.1 sin (2 π x) sin (2 π y). Since the exact solution is unavailable, we use the solution obtained by the SAV-MDIARK(5,6,4) method with N = 512, and τ = 10^-4 at the final time T = 1 as a reference. Then, refinement test in time is conducted with N = 128 and different time steps τ = 0.1 × 2^-k (k = 1,2,3,4,5). Figure <ref> displays the discrete L^2-norm error of the solution at T = 1 computed by various methods as a function of the time step size in the logarithmic scale. All the methods achieve their respective accuracy. §.§ CH equation We consider the following Cahn-Hilliard model for immiscible binary fluids u_t = λΔ (-ε^2 Δ u + u^3 - u), where λ is a mobility parameter, and ε represents the width of the diffuse interface. The corresponding free energy functional is ℱ[u] = ∫_Ωε^2/2|∇ u|^2 + 1/4(u^2 - 1)^2 d𝐱. We introduce an auxiliary variable q = √(1/4∫_Ω (u^2 - 1 -κ)^2 d𝐱 + C|Ω|), where κ is a stabilized parameter. The energy functional (<ref>) is transformed into ℱ[u, q] = ε^2/2∇ u^2 + κ/2u^2 + q^2 - κ^2 + 2κ + 4C/4 |Ω|. (<ref>) is then reformulated into an equivalent model, as shown below { u_t = λΔ( -ε^2 Δ u + κ u + f_κ(u) q), q_t = 1/2(f_κ(u), u_t). . 
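All experiments rely on the Fourier pseudo-spectral operators ∇_N, Δ_N and the discrete inner product and norms defined at the beginning of this section. One possible realization on a periodic rectangle with NumPy's FFT is sketched here; the grid sizes are placeholders.

```python
import numpy as np

# assumed placeholder grid on (0, 2*pi) x (0, 2*pi); Nx, Ny are illustrative
Nx = Ny = 128
hx, hy = 2 * np.pi / Nx, 2 * np.pi / Ny
kx = 2 * np.pi * np.fft.fftfreq(Nx, d=hx)
ky = 2 * np.pi * np.fft.fftfreq(Ny, d=hy)
KX, KY = np.meshgrid(kx, ky, indexing="ij")

def grad_N(u):
    """Pseudo-spectral gradient: returns (u_x, u_y)."""
    u_hat = np.fft.fft2(u)
    return (np.real(np.fft.ifft2(1j * KX * u_hat)),
            np.real(np.fft.ifft2(1j * KY * u_hat)))

def lap_N(u):
    """Pseudo-spectral Laplacian Delta_N u."""
    return np.real(np.fft.ifft2(-(KX**2 + KY**2) * np.fft.fft2(u)))

def ip_N(u, v):
    """Discrete inner product (u, v)_N = hx*hy*sum_jk u_jk v_jk."""
    return hx * hy * np.sum(u * v)

def norm_N(u):
    return np.sqrt(ip_N(u, u))
```

With these building blocks, the fully discrete AC scheme above (and the CH and MBE analogues) can be assembled exactly as written.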
We perform convergence tests in time by considering (<ref>) in the spatial domain Ω = (0, 2π)^2 with specified parameters γ = 0.01 and ε = 1. As the exact solution of (<ref>) is not available, we construct a manufactured solution ϕ(x, y, t) = sin(x)sin(y)cos(t) to (<ref>) by introducing a nonhomogeneous source term to the right-hand side of (<ref>). We use 128 × 128 wave numbers for the spatial discretization. Subsequently, (<ref>) will be integrated using various methods until T = 1 with different time steps τ = 0.2 × (2k)^-1 (k = 1,2,3,4,5,6,7,8). The numerical solution at the final time is recorded to evaluate errors in the refinement tests. Figure <ref> plots the L^2 and L^∞ errors of different methods against the time step in a logarithmic scale. All the methods achieve the expected convergence rate. Among the second-order schemes, the SAV-MDIARK(2,2,2) method exhibits higher accuracy than the SAV-MCNRK2 method. Additionally, when γ = 3 + √(3)/6, the SAV-MDIARK(2,2,2) scheme performs to be better than when γ = 1/4. Despite the latter preserving the dissipative rate, the former is more stable in practice. Among the third-order schemes, the SAV-MDIARK(4,5,3) exhibits the highest accuracy and unexpectedly results in superconvergence in this test. This phenomenon can be attributed to the smoothness of the provided solution. Further accuracy tests of this method will be conducted in subsequent examples. When investigating the fourth-order schemes, we present the results of both the SAV-MARK methods and their corresponding SAV-ARK methods. Notably, the convergence rate of the SAV-MARK methods is consistent with that of the SAV-ARK methods, confirming that the modified Algorithm <ref> possesses the same accuracy as Algorithm <ref>. To thoroughly investigate the performance of the proposed schemes, we consider the CH equation (<ref>) with the initial condition ϕ_0(x, y) = 0.05 ( cos(6 π x)cos(8 π y) + (cos(8 π x)cos(6 π y))^2 + cos(2 π x - 10 π y)cos(4 π x - 2π y)). We specify the spatial domain Ω = (0, 2π)^2, and set the parameters in (<ref>) as λ = 1, ε = 0.01. The spatial discretization is carried out using 128 × 128 Fourier modes. Several methods are employed to solve the governing system until the final time T = 0.1. It should be noted that, due to the chosen initial condition (<ref>), the solution of (<ref>) undergoes rapid changes at the beginning. Therefore, if the method is not stable, it will fail to depict the solution using a large time step size accurately. As a benchmark, Figure <ref> illustrates the snapshot obtained by the SAV-MDIARK(5,6,4) method with a step size of τ = 1 × 10^-5. During the test, the time step is progressively reduced until the correct solution snapshot is obtained, and the maximum step size that yields the correct solution profile for each method is recorded. To facilitate comparisons, we display numerical results for several existing methods in Figure <ref>, including the SAV-CN method, the fully implicit second-order convex splitting scheme (CS2), and the SAV-GRK4PC(5). It can be seen that the SAV-CN method fails to produce a correct result at a large time step, while the convex splitting scheme is capable of producing an accurate result with a relatively large time step. Due to the high precision and stability achieved through multiple iterations, the SAV-GRK4PC(5) can also compute a correct solution with a larger time step. The numerical results obtained by the proposed schemes are presented in Figure <ref>. 
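The temporal convergence rates quoted in these refinement tests are extracted from the recorded errors in the standard way; a short sketch is given below, with synthetic placeholder data rather than values from our experiments.

```python
import numpy as np

def observed_orders(taus, errors):
    """Observed rates p_k = log(e_k/e_{k+1}) / log(tau_k/tau_{k+1})
    from a temporal refinement test."""
    taus, errors = np.asarray(taus, float), np.asarray(errors, float)
    return np.log(errors[:-1] / errors[1:]) / np.log(taus[:-1] / taus[1:])

# synthetic second-order data on the step sizes tau = 0.2/(2k), k = 1,...,8
taus = 0.2 / (2 * np.arange(1, 9))
errs = 3.0e-2 * taus**2
print(observed_orders(taus, errs))   # ~[2, 2, ...]
```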
From Figure <ref>, it is evident that our second- and third-order schemes achieve accurate results at larger step sizes than the SAV-CN and CS2 methods and even outperform the SAV-GRK4PC(5) method. Among these methods, the SAV-MDIARK(5,4,3) method performs the best, yielding the correct solution at a step size of τ = 5.2 × 10^-4. Although the proposed fourth-order methods require smaller step sizes to obtain accurate results, their step sizes remain competitive with those used in other publications, even though only the order was considered during their construction. In addition to verifying the effectiveness of the proposed schemes through profiles and step sizes, we also present the evolution of the following discrete free energy ℱ^n_N = ε^2/2∇_N u^n_N^2 + 1/4(((u^n)^2 - 1)^2, 1)_N. It is worth noting that although the above methods have only been proven to dissipate the quadratic energy, we still monitor the original discrete energy in our experiments. Figure <ref> summarizes the evolution of the free energy for different numerical schemes under different time steps. It can be observed that the SAV-CN method fails to dissipate the original energy at larger step sizes due to its lower precision and weaker stability, whereas all our methods monotonically decrease the discrete free energy. This indicates that the proposed methods are robust and unconditionally energy-stable, as predicted by the theoretical results. §.§ MBE equation To further display the accuracy and robustness of the proposed schemes, let us consider the following MBE model u_t = - λ (δΔ^2 u - ∇· f(∇ u)), which is the L^2 gradient flow with respect to the following free energy functional ℱ[u] = ∫_Ωδ/2 |Δ u|^2 + F(∇ u) d𝐱. In (<ref>) and (<ref>), u represents the height function of a thin film in a co-moving frame, δ is a positive constant, and f = F^'. If we set F(∇ u) = -1/2ln(1 + |∇ u|^2), (<ref>) is usually called the MBE equation without slope selection. Correspondingly, (<ref>) is called the MBE equation with slope selection when taking F(∇ u) = 1/4(|∇ u|^2 - 1)^2. We introduce an SAV q = √(1/4∫_Ω (|∇ u|^2 - 1 - κ)^2 d𝐱 + C|Ω| ). The free energy is then modified into ℱ[u, q] = δ/2Δ u^2 + κ/2∇ u^2 + q^2 - κ^2 + 2κ + 4C/4|Ω|. Correspondingly, (<ref>) is reformulated into { u_t = -λ( δΔ^2 u - κΔ u - ∇· f_κ(∇ u) q ), q_t = 1/2 (f_κ(∇ u), ∇ u_t), . where f_κ(∇ u) = (|∇ u|^2 - 1 - κ)∇ u/√(1/4∫_Ω ( |∇ u|^2 - 1 - κ )^2 d𝐱 + C|Ω| ). We remark here that although the nonlinearity of the MBE equation without slope selection seems to be unbounded, the SAV can still be introduced as q = √(∫_Ω( κ/2|∇ u|^2 -1/2ln(1 + |∇ u|^2) ) d𝐱 + C|Ω| ). Due to the Lipschitz continuity of F, there is no difficulty in confirming that κ/2|∇ u|^2 -1/2ln(1 + |∇ u|^2) > 0 provided that κ≥1/8. We again begin by performing the refinement test in time. We specify the computational domain Ω = (0, 2π)^2 and consider a classical example with the initial condition ϕ_0(x, y) = 0.1(sin 3x sin 5y + sin 5x sin 5y), which was studied in <cit.> to observe morphological instability due to the nonlinear interaction. The parameters are λ = 1 and δ = 0.1. Since the exact solution of (<ref>) is not available, the SAV-MDIARK(5,6,4) method is employed to compute a reference solution of (<ref>) using 256 × 256 Fourier modes and a step size of τ = 5 × 10^-6. Then the refinement test in time is carried out by varying the temporal step size τ = 2^3-k× 10^-4 (k = 0,1,⋯,6). The spatial domain is discretized using 128 × 128 Fourier modes. 
The discrete L^2 and L^∞ errors between the reference and numerical solutions at T = 0.1 are recorded. Figure <ref> displays the solution error at T=0.1 as a function of the step size on a logarithmic scale. All methods attain their corresponding convergence rates. Moreover, the super-convergence of SAV-MDIARK(4,5,3) disappears under this circumstance, suggesting that the results appearing in Figure <ref> are coincidental. Then, we simulate (<ref>) under the same initial condition until T=30. Figure <ref> displays the height profiles solved by the SAV-MDIARK(5,6,4) with τ = 5 × 10^-3 at different times t=0,0.05,2.5,5.5,8,30. The results agree with those reported in <cit.>. We remark here that the simulation results of the other schemes are indistinguishable and thus are omitted due to the limitation of space. Figure <ref> summarizes the evolution of the free energy from t = 0 to t = 15 solved by different methods with different time steps. Notice that the energy curves for the fully implicit backward difference (BDF) methods, which are recognized to have good stability, are also plotted for comparison. For the third- and fourth-order schemes, the energy curves predicted by the proposed methods are comparable with those predicted by the BDF methods. Moreover, among the second-order schemes, the proposed methods provide a more accurate energy prediction than the BDF2 method when τ = 3.125× 10^-2. These results suggest that our methods are comparable to the fully discrete BDF methods in terms of stability. However, it should be noted that our methods are linearly implicit and only require the solution of a linear system at each step. Table <ref> lists the CPU times for these methods when conducting the above experiments with a time step of τ = 1× 10^-2. Despite the ARK methods needing to solve more intermediate stages, particularly for the higher-order schemes, our proposed methods are more efficient than the BDF methods. § CONCLUSION Combining the SAV approach and ARK methods, we develop a novel paradigm for constructing linearly implicit, high-order, unconditionally energy-stable methods for general gradient flows. The proposed schemes are rigorously proved to be unconditionally energy-stable, uniquely solvable, and convergent. We also reveal that each SAV-RKPC method can be regarded as a SAV-ARK method, and the order of the SAV-RKPC methods is then confirmed theoretically using the order conditions of ARK methods. Numerical examples demonstrate the efficiency and robustness of the proposed methods. § ACKNOWLEDGMENTS This work is supported by the National Key Research and Development Project of China (2018YFC1504205), the National Natural Science Foundation of China (12171245, 11971242). § EXAMPLES OF SOME SAV-ARK METHODS In this section, we list the SAV-ARK methods utilized in the above contexts. We will refer to a SAV-DIARK (or SAV-MDIARK) method with an s-stage implicit part, an r-stage explicit part, and p-th order as SAV-DIARK(s,r,p) (SAV-MDIARK(s,r,p)). §.§ SAV-DIARK(2,2,2) A = [ γ 0; 1-2γ γ ], b = [ 1/2 1/2 ]^T, A = [ 0 0; 1 0 ], b = b. The discriminant of the implicit part of the above method reads (γ - 1/4) [ 1 -1; -1 1 ]. Therefore, the implicit part of the method is algebraically stable iff γ≥1/4. §.§ SAV-DIARK(2,3,3) A = [ 0 0 0; 0 3 + √(3)/6 0; 0 -√(3)/3 3 + √(3)/6 ], b = [ 0 1/2 1/2 ]^T, A = [ 0 0 0; 3 + √(3)/6 0 0; -3 + √(3)/6 3 - √(3)/3 0 ], b = b. The eigenvalues of the diagonally implicit part are [1.0774, 0, 0, 0]. 
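Stability statements of this type are easy to verify numerically by forming the algebraic-stability matrix M_ij = b_i a_ij + b_j a_ji − b_i b_j of the implicit tableau. The following small numpy sketch does so for the implicit part of SAV-DIARK(2,2,2); the value γ = (3+√3)/6 included below is also the diagonal entry appearing in SAV-DIARK(2,3,3).

```python
import numpy as np

def algebraic_stability_matrix(A, b):
    """M_ij = b_i a_ij + b_j a_ji - b_i b_j; the RK method (A, b) is
    algebraically stable iff b >= 0 and M is positive semidefinite."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    return b[:, None] * A + (b[:, None] * A).T - np.outer(b, b)

# implicit part of SAV-DIARK(2,2,2): here M = (gamma - 1/4) * [[1, -1], [-1, 1]],
# so M is positive semidefinite exactly when the tableau parameter gamma >= 1/4
for gamma in (0.20, 0.25, (3 + np.sqrt(3)) / 6):
    A = [[gamma, 0.0], [1 - 2 * gamma, gamma]]
    b = [0.5, 0.5]
    eig = np.linalg.eigvalsh(algebraic_stability_matrix(A, b))
    print(f"gamma = {gamma:.4f}: eigenvalues of M = {np.round(eig, 4)}")
```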
§.§ SAV-DIARK(3,4,3) A = [ 0 0 0 0; 0 σ 0 0; 0 1/2 - σ σ 0; 0 2σ 1 - 4σ σ; ], b = [ 0 μ 1-2μ μ ]^T, A = [ 0 0 0 0; σ 0 0 0; 0 1/2 0 0; 0 9 μσ-3 μ-3 σ+1/3 μ (2 σ-1) 1-σ- 9 μσ-3 μ-3 σ+1/3 μ (2 σ-1) 0; ], b = b, where σ = √(3)/3cos(π/18) + 1/2, μ = 1/6 (2σ - 1)^2. Then, the eigenvalues of the diagonally implicit part are [1.5530, 0, 0, 0]. §.§ SAV-DIARK(5,6,4) A = [ 0 0 0 0 0 0; 0 3/8 0 0 0 0; 3/8 0 3/16 0 0 0; 0 0 0 σ 0 0; 0 0 0 1/2 - σ σ 0; 0 0 0 2σ 1 - 4σ σ; ], b = [ 0 0 0 μ 1-2μ μ ]^T, A = [ 0 0 0 0 0 0; 3/8 0 0 0 0 0; 0 9/16 0 0 0 0; 25/162 μ -104 σμ^2 +6 μ^2+20 μ/108 μ^2-90 μ+9 112 σμ^2 +36 μ^2-37 μ/324 μ^2-270 μ+27 0 0 0; 0 0 1/2 0 0 0; 0 56 σμ^2 -2 μ^2-12 μ/36 μ^2-30 μ+3 16 σμ^2 -4 μ^2+3 μ/36 μ^2-30 μ+3 0 0 0; ], b = b. The eigenvalues of the diagonally implicit part are [1.5530, 0, 0, 0, 0, 0]. §.§ SAV-GARK(4,5,4) A = [ 0 0 0 0 0; 0 1/4 0 0 0; 1/4 0 1/4 0 0; 0 0 0 1/4 1/4-√(3)/6; 0 0 0 1/4+√(3)/6 1/4 ], b = [ 0 0 0 1/2 1/2 ]^T, A = [ 0 0 0 0 0; 1/4 0 0 0 0; 0 1/2 0 0 0; 1/6 0 1/3-√(3)/6 0 0; 1/6 0 1/3+√(3)/6 0 0; ], b = b. The implicit part of the above method is based on the Gauss RK method (see <cit.>). The eigenvalues of A are [0, 0, 0, 0, 0]. abbrv 10 sav_rk_extra G. Akrivis, B. Li, and D. Li. Energy-decaying extrapolated RK-SAV methods for the Allen–Cahn and Cahn–Hilliard equations. SIAM J. Sci. Comput., 41(6):A3703–A3727, 2019. 001 M. Ambati, T. Gerasimov, and L. De Lorenzis. A review on phase-field models of brittle fracture and a new fast hybrid formulation. Comput. Mech., 55:383–405, 2015. burrage_efficiently_1982 K. Burrage. Efficiently implementable algebraically stable Runge–Kutta methods. SIAM J. Numer. Anal., 19(2):245–258, 1982. burrage_stability_1979 K. Burrage and J. C. Butcher. Stability criteria for implicit Runge–Kutta methods. SIAM J. Numer. Anal., 16(1):46–57, 1979. intr_ac J. W. Cahn and S. M. Allen. A microscopic theory for domain wall motion and its experimental verification in Fe-Al alloy domain growth kinetics. J. Phys. Colloq, 38:C7–51–C7–54, 1977. intr_ch J. W. Cahn and J. E. Hilliard. Free energy of a nonuniform system. I. Interficial free energy. J. Chem. Phys., 28:258–267, 1958. mbe_leapfrog L. Chen, J. Zhao, and Y. Gong. A novel second-order scheme for the molecular beam epitaxy model with slope selection. Commun. Comput. Phys., 25(4):1024–1044, 2019. mbe_etd3 K. Cheng, Z. Qiao, and C. Wang. A third order exponential time differencing numerical scheme for No-Slope-Selection epitaxial thin film model with energy stability. J. Sci. Comput., 81:154–185, 2019. lag Q. Cheng, C. Liu, and J. Shen. A new Lagrange multiplier approach for gradient flows. Comput. Methods Appl. Mech. Engrg., 367:113030, 2020. GSAV1 Q. Cheng, C. Liu, and J. Shen. Generalized SAV approaches for gradient systems. J. Comput. Appl. Math., 394:113532, 2021. intr_other1 Q. Cheng, X. Yang, and J. Shen. Efficient and accurate numerical schemes for a hydro-dynamically coupled phase field diblock copolymer model. J. Comput. Phys., 341:44–60, 2017. tang_splitting Y. Cheng, A. Kurganov, Z. Qu, and T. Tang. Fast and stable explicit operator splitting methods for phase-field models. J. Comput. Phys., 303:45–65, 2015. intr_mbe S. Clarke and D. Vvedensky. Origin of reflection high-energy electron-diffraction intensity oscillations during molecular-beam epitaxy: A computational modeling approach . Phys. Rev. Lett, 58:2235–2238, 1987. du_2019 Q. Du, L. Ju, X. Li, and Z. Qiao. Maximum principle preserving exponential time differencing schemes for the nonlocal Allen-Cahn equation. SIAM J. Numer. 
Anal., 57:875–898, 2019. du_2021 Q. Du, L. Ju, X. Li, and Z. Qiao. Maximum bound principles for a class of semilinear parabolic equations and exponential time-differencing schemes. SIAM Rev., 63:317–359, 2021. du_jsc_analysis Q. Du, L. Ju, and J. Lu. Analysis of fully discrete approximations for dissipative systems and application to time-dependent nonlocal diffusion problems. J. Sci. Comput., 78(3):1438–1466, 2019. convex_splitting1 C. Elliot and A. Stuart. The global dynamics of discrete semilinear parabolic equations. SIAM J. Numer. Anal., 30:1622–1663, 1993. intr_crystal M. Elsey and B. Wirth. A simple and efficient scheme for phase field crystal simulation. ESIAM: M2AN, 47:1413–1432, 2013. grad_stable_ch D. J. Eyre. Unconditionally Gradient Stable Time Marching the Cahn-Hilliard Equation. Mater. Res. Soc. Sympos. Proc., 529:39–46, 1998. cn_ab X. Feng, T. Tnag, and J. Yang. Stabilized crank-nicolson/adams-bashforth schemes for phase field models. East Asian J. Appl. Math., 3(1):59–80, 2013. dvd2 D. Furihata. A stable and conservative finite difference scheme for the Cahn-Hilliard equation, 2001. dvd D. Furihata and T. Matsuo. a. Chapman and Hall/CRC, 1st edition, 2010. svm Y. Gong, Q. Hong, and Q. Wang. Supplementary variable method for thermodynamically consistent partial differential equations. Comput. Methods Appl. Mech. Engrg., 381:113746, 2021. gong_nls Y. Gong, Q. Wang, Y. Wang, and J. Cai. A conservative Fourier pseudo-spectral method for the nonlinear Schrödinger equation. J. Comput. Phys., 328:354–370, 2017. ieq_gong Y. Gong, J. Zhao, and Q. Wang. Arbitrarily high-order linear energy stable schemes for gradient flow models. J. Comput. Phys., 419:109610, 2020. ieq1 F. Guillén-González and G. Tierra. On linear schemes for a Cahn–Hilliard diffuse interface model. J. Comput. Phys., 234:140–171, 2013. ac_hbvm F. Guo and W. Dai. Arbitrarily high-order accurate and energy-stable schemes for solving the conservative Allen–Cahn equation. Numer. Methods Partial Differential Eq., 39:187–212, 2022. 002 Z. Guo and P. Lin. A thermodynamically consistent phase-field model for two-phase flows with thermocapillary effects. J. Fluid Mech., 766:226–271, 2015. hairer_book E. Hairer, C. Lubich, and G. Wanner. Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations. Springer-Verlag, Berlin, 2nd edition, 2006. hou_leapfrog T. Hou, D. Xiu, and W. Jiang. A new second-order maximum-principle preserving finite difference scheme for Allen-Cahn equations with periodic boundary conditions. Appl. Math. Lett., 104:106256, 2020. sav_ns_err F. Huang and J. Shen. Stability and error analysis of a class of high-order IMEX scheme for Navier–Stokes equations with periodic boundary conditions. SIAM J. Numer. Anal., 59:2926–2954, 2021. dvd_high J. Huang. Energy stable schemes for gradient flows based on the DVD method. arXiv:2210.11960v1, 2022. dvd1 T. Ide. Some energy preserving finite element schemes based on the discrete variational derivative method. Appl. Math. Comput., 175:277–296, 2006. relaxation_sav M. Jiang, Z. Zhang, and J. Zhao. Improving the accuracy and consistency of the scalar auxiliary variable (SAV) method with relaxation. J. Comput. Phys., 456:110954, 2022. ESAV_AC L. Ju, X. Li, and Z. Qiao. Stabilized Exponential-SAV cchemes preserving energy dissipation law and maximum bound principle for the Allen–Cahn type equations. J Sci Comput, 92(2):66, 2022. ju_jcp_if L. Ju, X. Li, Z. Qiao, and J. Yang. 
Maximum bound principle preserving integrating factor Runge-Kutta methods for semilinear parabolic equations. J. Comput. Phys., 439(110405):18, 2021. ju_mbe L. Ju, X. Li, Z. Qiao, and H. Zhang. Energy stability and error estimates of exponential time differencing schemes for the epitaxial growth model without slope selection. Math. Comp., 87(312):1859–1885, 2017. mbe_model1 B. Li and J.-G. Liu. Thin film epitaxy with or without slope selection. Eur. J. Appl. Math., 14:713–743, 2003. mbe_model2 B. Li and J.-G. Liu. Stability analysis of large time‐stepping methods for epitaxial growth models. SIAM J. Numer. Anal., 44:1759–1779, 2006. sav_li D. Li and W. Sun. Linearly implicit and high-order energy-conserving schemes for nonlinear wave equations. J. Sci. Comput., 83:17, 2020. sav_nlsw X. Li, Y. Gong, and L. Zhang. Linear high-order energy-preserving schemes for the nonlinear Schrödinger equation with wave operator using the scalar auxiliary variable approach. J. Sci. Comput., 88:25, 2021. sav_ns_err2 X. Li, J. Shen, and Z. Liu. New SAV-pressure correction methods for the Navier-Stokes equations: stability and error analysis. Math. Comp., 92(340):141–167, 2022. liao_bdf H. Liao, T. Tang, and T. Zhou. On energy stable, maximum-principle preserving, second-order BDF scheme with variable steps for the Allen-Cahn equation. SIAM J. Numer. Anal., 58:2294–2314, 2020. ESAV Z. Liu and X. Li. The Exponential Scalar Auxiliary Variable (E-SAV) Approach for Phase Field Models and Its Explicit Computing. SIAM J. Sci. Comput., 42:B630–B655, 2020. relaxation_lag Z. Liu and X. Li. A novel lagrange multiplier approach with relaxation for gradient flows. arXiv:2210.02723v1 [math.NA], 2022. lu_lsp L. Lu, Q. Wang, Y. Song, and Y. Wang. Local structure-preserving algorithms for the molecular beam epitaxy model with slope selection. Am. J. Math, 26:4745–4765, 2021. 003 W. Marth, S. Aland, and A. Voigt. Margination of white blood cells: a computational approach by a hydrodynamic phase field model. J. Fluid Mech., 790:389–406, 2016. qiao_mixed_fe Z. Qiao, T. Tang, and H. Xie. Error analysis of a mixed finite element method for the molecular beam epitaxy model. SIAM J. Numer. Anal., 53:184–205, 2015. ark_general A. Sandu and M. Günther. A generalized–structure approach to additive Runge–Kutta methods. SIAM J. Numer. Anal., 53(1):17–42, 2015. sav_shen J. Shen, J. Xu, and J. Yang. The scalar auxiliary variable (SAV) approach for gradient flows. J. Comput. Phys., 353:407–416, 2018. sav_shen_siam J. Shen, J. Xu, and J. Yang. A new class of efficient and robust energy stable schemes for gradient flows. SIAM Rev., 61:474–506, 2019. csrk J. Shin, H. G. Lee, and J.-Y. Lee. Unconditionally stable methods for gradient flow using convex splitting Runge-Kutta scheme. J. Comput. Phys., 347:367–381, 2017. tan_jcp_msrksav Z. Tan and H. Tang. A general class of linear unconditionally energy stable schemes for the gradient flows. J. Comput. Phys., 464(111372):32, 2022. tang_imex T. Tang and J. Yang. Implicit-explicit scheme for the Allen-Cahn equation preserves the maximum principle. J. Comput. Math., 34:471–481, 2016. intr_other2 C.-H. Teng, I.-L. Chern, and L. Ming-Chih. Simulating binary fluid-surfactant dynamics by a phase field model . Discrete Contin. Dyn. Syst. Ser. B, 4(17):1289–1307, 2012. 004 A. Wheeler, W. Boettinger, and G. McFadden. Phase-field model for isothermal phase transitions in binary alloys. Phys. Rev. A, 45:7424–7438, 1992. sav_ch3 J. Yang, J. Wang, Z. Tan, and J. Kim. 
Efficient IMEX and consistently energy-stable methods of diffuse-interface models for incompressible three-component flows. Comput. Phys. Commun., 282, 2023. ieq3 X. Yang. X. Yang, Linear, first and second-order, unconditionally energy stable numerical schemes for the phase field model of homopolymer blends. J. Comput. Phys., 327:294–316, 2016. ieq4 X. Yang and L. Ju. Efficient linear schemes with unconditional energy stability for the phase field elastic bending energy model. Comput. Methods Appl. Mech. Eng., 135:691–712, 2017. ieq2 X. Yang, J. Zhao, Q. Wang, and J. Shen. Numerical approximations for a three components Cahn–Hilliard phase-field model based on the invariant energy quadratization method. Math. Models Methods Appl. Sci., 27(11):1993–2030, 2017. imex_fac H. Zhang, J. Yan, X. Qian, X. Gu, and S. Song. On the preserving of the maximum principle and energy stability of high-order implicit-explicit Runge-Kutta schemes for the space-fractional Allen-Cahn equation. Numer. Algorithms, 88(3):1309–1336, 2021. GSAV2 Y. Zhang and J. Shen. A generalized SAV approach with relaxation for dissipative systems. J. Comput. Phys., 464:111311, 2022. sav_chhs_err N. Zheng and X. Li. Error analysis of the SAV Fourier-spectral method for the Cahn-Hilliard-Hele-Shaw system. Adv. Comput. Math., 47(71), 2021.
http://arxiv.org/abs/2307.04448v1
20230710095736
Casimir effect of Lorentz-violating charged Dirac in background magnetic field
[ "Ar Rohim", "Apriadi Salim Adam", "Arista Romadani" ]
hep-th
[ "hep-th", "quant-ph" ]
[email protected] Research Center for Quantum Physics, National Research and Innovation Agency (BRIN), South Tangerang 15314, Indonesia Departemen Fisika, FMIPA, Universitas Indonesia, Depok 16424, Indonesia [email protected] Research Center for Quantum Physics, National Research and Innovation Agency (BRIN), South Tangerang 15314, Indonesia [email protected] Department of Physics, Faculty of Science and Technology, Universitas Islam Negeri Maulana Malik Ibrahim Malang, Malang 65144, Indonesia We study the effect of the Lorentz symmetry breaking on the Casimir energy of charged Dirac in the presence of a uniform magnetic field. We use the boundary condition from the MIT bag model to represent the property of the plates. We investigate two cases of the direction of violation, namely, time-like and space-like vector cases. We discuss how the Lorentz violation and the magnetic field affect the structure of the Casimir energy and its pressure. We also investigate the weak and strong magnetic field cases with two different limits, heavy and light masses. Casimir effect of Lorentz-violating charged Dirac in background magnetic field Arista Romadani August 12, 2023 ============================================================================== § INTRODUCTION The Casimir effect representing quantum field effects under macroscopic boundaries was first predicted by H. B. G. Casimir in 1948 <cit.>. He showed that the quantum vacuum fluctuations of the electromagnetic field confined between two parallel plates generate an attractive force. One decade later, in 1958, Sparnaay performed the experimental measurement of the effect, however, with a rough precision <cit.>. He found that the attractive force of the plates does not contradict the theoretical prediction. After his work, the studies showed that the Casimir effect has experimentally confirmed with high precision <cit.>. The Casimir effect itself has many applications in nanotechnology <cit.>, and the theoretical discussion was elaborated in connection to several research areas, for example, cosmology <cit.> and condensed matter physics <cit.>(see e.g. Refs. <cit.> for review). The studies showed that the Casimir effect also arises not only for the electromagnetic field but also for other fields. The geometry of the plate's surface represented by the form of the boundary conditions also determines how the Casimir effect behaves. To discuss the Casimir effect of the scalar field, one can use the Dirichlet boundary conditions of the vanishing field at the surface of the plates. In such a case, one can also employ Neumann and/or mixed boundary conditions <cit.>. However, in the case of the fermion field, one cannot apply such boundaries because the solution for the fermion field is derived from the first-order differential equation. Alternatively, one may use a bag boundary condition that guarantees the vanishing flux at the plate's surface. The well-known form covering this property is the boundary condition from the MIT bag model <cit.> (see Ref. <cit.> for review). The extension of this boundary that includes the role of the chiral angle has been employed in the literature (see e.g. Refs. <cit.>, c.f. Ref. <cit.> for the self-adjoint variant). The Casimir effect phenomenon could be investigated in the system with charged quantum fields under the magnetic field background. With such a system, one can investigate how the charged quantum field couples to the quantum fluctuation <cit.>. 
On the other hand, the Casimir effect in the system involving a Lorentz violation has also attracted some attention <cit.>. Within the framework of string theories, the spontaneous Lorentz breaking may occur through a dynamic of the Lorentz covariant <cit.>. Such a dynamic will generate interactions to gain nonzero expectation values for Lorentz tensors. This is the same analog as in the Higgs mechanism in the context of the standard model. There are several studies where they investigated a system under Lorentz symmetry breaking and the CPT anomaly <cit.>. Those two phenomena could be possibly measured in the experiment, for instance, the measurements of neutral-meson oscillations <cit.>, the QED test on Penning traps <cit.>, and the baryogenesis mechanism <cit.>. Hence, in this work, we study a system of charged fields involving both Lorentz violation and magnetic field background. In particular, we investigate the Casimir effect of the system under such effects. In our setup, the magnetic field is raised in parallel to the normal plate's surface. We investigate two cases of the Lorentz-violating direction, i.e., timelike and space-like directions. For the spacelike case, we restrict ourselves to discussing the violation in the z-direction only because the Lorentz violation in the x- and y-directions do not affect the behavior of the Casimir energy of a Dirac field <cit.>. In the present study, we employ the boundary condition from the MIT bag model <cit.>, which is originally used to describe quark confinement. It is natural to show that the presence of the boundary condition in the confinement system leads the allowed perpendicular momentum to the boundary surface to be discrete. To discuss the Casimir effect, we investigate the mode expansion of the field consisting of the linear superposition of the positive- and negative-energy solutions associated with the creation and annihilation operators. We can evaluate the vacuum energy by applying the boundary condition to the mode expansion. In the present study, we use the Abel-Plana-like summation <cit.> to extract the divergence of the vacuum energy in the presence of boundary conditions. Then, the Casimir energy can be mathematically obtained by taking the difference between the vacuum energy in the presence of the boundary conditions to that in the absence of ones, where both vacuum energies are infinite, but their difference is finite. The rest structure of this paper is organized as follows. In Sec. <ref>, we describe the model of our system, namely, a Dirac field confined between two parallel plates with a background magnetic field under the Lorentz violation in the quantum field theory framework. In Sec. <ref>, we investigate the Casimir energy. In this section, we derive the solution for the field inside the confinement area following the procedure used in the literature (see e.g., Refs. <cit.>). In Sec. <ref>, we discuss the Casimir pressure. Section <ref> is devoted to our summary. In this paper, we use the natural units so that c=ħ=1. § MODEL We consider the charged Dirac field confined between two parallel plates placed at z=0 and z=ℓ in the presence of a uniform magnetic field. The normal surface of the plates is parallel to the z-axis (see Fig. <ref>). In our model, the Lorentz symmetry is not preserved. 
The Lagrangian density for such a Dirac field with mass m is given by L=Ψ̅[iγ^μ∂_μ-eγ^μ A_μ- m+iλ u^μ u^νγ_μ∂_ν]Ψ, where Ψ̅(≡Ψγ^0) is the Dirac adjoint, λ is the dimensionless parameter with |λ |≪ 1, A_μ is the four vector potential, and u^μ is an arbitrary constants vector with u^μ u_μ can be 1,-1,0 for time-like, space-like, and light-like, respectively. The Lorentz symmetry breaking is characterized by the last term of Eq. (<ref>); the parameter λ contributes to the violation intensity while the vector u^μ describes the direction one <cit.>. In the present study, we use the 4× 4 gamma matrices γ^μ written in the Dirac representation as follows γ^0= [ I 0; 0 -I ]   and  γ^j= [ 0 σ^j; -σ^j 0 ], where I represents the 2× 2 identity matrix and σ^j is the 2× 2 Pauli matrices. The gamma matrices satisfy the anti-commutation relation as {γ^μ, γ^ν}=η^μν, where η^μν(≡ diag.(1,-1,-1,-1)) is the metric tensor of the Minkowski spacetime. The Dirac field Ψ satisfies the modified Dirac equation as follows [iγ^μ∂_μ-eγ^μ A_μ- m+iλ u^μ u^νγ_μ∂_ν]Ψ=0. The positive-energy solution for the above Dirac equation is given as Ψ^(+)(r)=e^-iω tψ( r)=e^-iω t[ χ_1; χ_2 ], where χ_1 and χ_2 are the upper and lower two-component spinors, respectively. We use ω to represent the eigenenergy of the Dirac field. In our model, the magnetic field is raised in the z-direction B=(0,0,B), where one can choose the corresponding four-vector potential components as follows A_0=A_2=A_3=0     and    A_1=-yB, with B as the magnetic field strength. The geometry of the plates is described by the boundary condition from the MIT bag model as follows <cit.> i n_μγ^μΨ=Ψ, where n_μ is the unit normal inward four-vector perpendicular to the boundary surface. The consequence of this boundary is the vanishing flux or normal probability density at the plate surface n_μ J^μ (≡ n_μΨ̅γ^μΨ)=0. The idea of this boundary is that the mass of the field is written as a function of its position; inside the confinement area, the mass has a finite value and becomes infinite at the boundary surface. Then, one can suppose that the field outside the confinement area vanishes (see Ref. <cit.> for the confinement model of a relativistic particle). While inside the confinement area, the solution for the field is written as the superposition between the left- and right-field components. § CASIMIR ENERGY In this section, we derive the Casimir energy of a Lorentz-violating charge Dirac in a background magnetic field. We study two directions of the Lorentz violation, namely, time-like and space-like vector cases. We derive the solution for the Dirac field inside the confinement area under the boundary condition from the MIT bag model <cit.>. We follow the general procedure given in Refs. <cit.>. Then, we compute the Casimir energy using the Abel-Plana-like summation <cit.> following Refs. <cit.>. In addition, we also investigate the Casimir energy approximately for the case of weak and strong magnetic fields. §.§ Time-like vector case We consider the positive-energy solution for the timelike vector case with u^(t)=(1,0,0,0). In this case, the Dirac equation (<ref>) gives two equations as follows [(1+λ)ω-m]χ^(t)_1=(-iσ^j∂_j+eyBσ^1)χ^(t)_2, [(1+λ)ω+m]χ^(t)_2=(-iσ^j∂_j+eyBσ^1)χ^(t)_1, from which we have the equation for the upper two-component spinor χ^(t)_1 as [(1+λ)^2ω^2-m^2]χ^(t)_1 = (-iσ^j∂_j+eyBσ^1)^2χ^(t)_1 = [-∇^2+e^2y^2B^2-eB(i2y∂_1+σ^3)]χ^(t)_1. 
In the above equation, we have used the commutation and anti-commutation relations of the Pauli matrices given as [σ^l,σ^m]=2iϵ_lmnσ^n and {σ^m,σ^n}=2δ_mnI, respectively, where δ_mn is a Kronecker delta and ϵ_lmn is a Levi Civita symbol. To find the solution for χ^(t)_1 in Eq. (<ref>), one can propose the following form χ^(t)_1=e^ik_1 xe^ik_3 z F^(t)(y). The presence of the Pauli matrix σ^3 in Eq. (<ref>) leads two independent solution for F^(t)(y) as follows F^(t)_+(y) = [ f^(t)_+(y); 0 ]    and    F^(t)_-(y) = [ 0; f^(t)_-(y) ] . Then, it is convenient to introduce s=± 1 so that the solution for f^(t)_s(y) can be read in a general way as σ^3F^(t)_s(y)=sF^(t)_s(y), and introduce a new parameter as ξ^(+, t)=√(eB)(y+k_1 eB). Then, Eq. (<ref>) can be read as Hermite's equation for arbitrary s as follows [d^2 dξ^(t)2-ξ^(t)2+a^(t)_s]f^(t)_s(y)=0, where a^(t)_s=(1+λ)^2ω^2-m^2-k^2_3+eBs eB. We now have the eigenenergies as[We have used |eB| to avoid imaginary value of ω.] ω^(t)_n',k_3=(1+λ)^-1√(m^2+k^2_3+|eB|(2n'+1)-|eB|s), where we have used a^(t)_s=2n'+1 with n'=0,1,2,3,⋯. The appropriate solution for f^(t)_s(y) with positive value eB that satisfies Hermite's equation (<ref>) is given by f^(t)_s(y)= √((eB)^1/2 2^nn'!(π)^1/2) e^-ξ^2/2H_n'(ξ^(t)), where f^(t)_s(y) has been normalized. The solution for F^(t)_s(y) is characterized by two conditions, namely, n'=n for s=+1 and n'=n-1 for s=-1. They can be written as follows F^(t)_+(y) = [ f^(t)_k_1,n(y); 0 ]    and    F^(t)_-(y) = [ 0; f^(t)_k_1,n-1(y) ] . We note that the eigenenergy for both values of s gives the same expression as ω^(t)_n, k_3=(1+λ)^-1√(m^2+k^2_3+2n|eB|), where n=0,1,2,3,⋯ is the Landau level. Then, we can finally derive the spatial solution for the right-moving field component as follows ψ^(+, t)_k_1,n,k_3 ( r) = e^ik_1 xe^ik_3 z2π√(2(1+λ)ω^(t)_n, k_3((1+λ) ω^(t)_n,k_3+m)) ×[C_1 [ ((1+λ)ω^(t)_n,k_3+m) f^(t)_k_1, n(y); 0; k_3f^(t)_k_1, n(y); √(2neB) f^(t)_k_1, n-1(y) ] + C_2 [ 0; ((1+λ)ω^(t)_n,k_3+m) f^(t)_k_1, n-1(y); √(2neB) f^(t)_k_1, n(y); -k_3f^(t)_k_1, n-1(y) ]],   for n≥ 1 ψ^(+, t)_k_1,0,k_3 ( r)= e^ik_1 xe^ik_3 z2π√(2(1+λ)ω^(t)_0, k_3((1+λ) ω^(t)_0, k_3+m)) C_0 f^(t)_k_1, 0(y) [ (1+λ)ω^(t)_0, k_3+m; 0; k_3; 0 ],   for n=0, where C_0, C_1 and C_2 are the complex coefficients and f^(t)_k_1, n(y) is given by f^(t)_k_1, n(y)=√((eB)^1/2 2^n n!π^1/2)exp[-eB 2(y+k_1 eB)^2]H_n[√(eB)(y+k_1 eB)], with H_n(ξ) is the Hermite polynomial. In a similar way, we can obtain the solution for the left-moving field component as follows ψ^(+, t)_k_1,n,-k_3( r) = e^ik_1 xe^-ik_3 z2π√(2(1+λ)ω^(t)_n, k_3((1+λ) ω^(t)_n,k_3+m)) ×[C̃_1 [ ((1+λ)ω_nk_3+m) f^(t)_k_1, n(y); 0; -k_3f^(t)_k_1, n(y); √(2neB) f^(t)_k_1, n-1(y) ] + C̃_2 [ 0; ((1+λ)ω_nk_3+m) f^(t)_k_1, n-1(y); √(2neB) f^(t)_k_1, n(y); k_3f^(t)_k_1, n-1(y) ]],   for n≥ 1 ψ^(+, t)_k_1,0,-k_3 ( r)= e^ik_1 xe^-ik_3 z2π√(2(1+λ)ω^(t)_0, k_3((1+λ) ω^(t)_0, k_3+m)) C̃_0 f^(t)_k_1, 0(y) [ (1+λ)ω^(t)_0, k_3+m; 0; -k_3; 0 ] ,   for n=0, where C̃_0, C̃_1 and C̃_2 are the complex coefficients. The total field solution is given by the linear combination between the left- and right-moving field components as follows[In the case of preserved Lorentz symmetry (λ=0), the solution is completely the same as that of Ref. <cit.>.] ψ^(+, t)_k_1,n, k_3( r)=ψ^(+, t)_k_1,n,k_3( r)+ψ^(+, t)_k_1,n,-k_3( r), where we use k_3 l to represent the allowed momentum in the system, as we will see below. 
For arbitrary non-zero complex coefficients, we have the constraint for momentum component in the z-direction (k_3) in the case of n≥ 0 as follows mℓsin(k_3ℓ)+k_3 ℓcos (k_3ℓ)=0. The detailed derivation is given in Appendix <ref>. The solution for Eq. (<ref>) is given by k_3l with l=1,2,3,⋯, which indicates that the allowed momentum k_3 must be discrete. As a consequence, the energy of the field under the MIT boundary condition must also be discrete as follows ω^(t)_n,l=(1+λ)^-1√(m^2+k^2_3l+2n|eB|). These properties not only hold for positive-energy solutions but also for the negative-energy counterpart. One can see that the magnetic field and parameter λ do not affect the structure of the momentum constraint. In this context, the former is similar to that in the absence of the magnetic field <cit.> while the latter is similar to that of the preserved Lorentz symmetry. We now write down a mode expansion of the Dirac field in the time-like vector case under the boundary condition from the MIT bag model as Ψ^(t)_k_1,n,l(r)= ∑^∞_n=0∑^∞_l=1∫^∞_-∞d k_1 [â_k_1,n,lΨ^(+,t)_k_1,n,l(r)+ b̂^†_k_1,n,lΨ^(-,t)_k_1,n,l(r) ], where Ψ^(±,t)_k_1,n,l(r) are the positive (+) and negative (-) energy solutions. See Appendix <ref> for the detailed expression of the negative-energy solution. The annihilation and creation operators in Eq. (<ref>) satisfy the following anti-commutation relations {â_k_1,n,l,â^†_k'_1,n',l'}={b̂_k_1,n,l,b̂^†_k'_1,n',l'}=δ_nn'δ_ll'δ(k_1-k'_1), and the other anticommutation relations vanish. The Dirac field satisfies orthonormality conditions as follows ∫ d x_⊥∫^ℓ_0 dz ψ^(j,t)†_k_1,n, l( r)ψ^(j',t)_k'_1,n', l'( r)=δ_jj'δ_nn'δ_l l'δ(k_1-k'_1),    j,j'=0,1,2 , by which we can obtain the relations of the complex coefficients of the field. We use x_⊥≡ (x,y) to represent the sub-spatial coordinate parallel to the normal plates' surface. From the above Lagrangian density (<ref>), one can obtain the Hamiltonian density in the time-like vector case as follows H^(t)=-Ψ̅^(t)[iγ^j∂_j-eγ^μ A_μ- m]Ψ^(t)=i(1+λ)Ψ^(t)†∂_0Ψ^(t). Then we are now ready to evaluate the vacuum energy as follows E^(t)_ Vac.=∫_Ω d^3 x E^(t)_ Vac.=∫_Ω d^3 x⟨ 0| H^(t)|0⟩ = -|eB|L^2π∑_n=0^∞∑_l=1^∞ i_n√(m^2+(k'_3lℓ)^2+2n|eB|), where E_ Vac. is the vacuum energy density, i_n=1-1 2δ_n0, k'_3l≡ k_3lℓ, and Ω is the volume of the confinement area. One can derive the Casimir energy by subtracting the vacuum energy in the presence of the boundary condition from the absence of one. We note that the roles of λ do not appear in the vacuum energy for the time-like vector case. In other words, the Casimir energy also does not depend on λ. In the next subsection, we will show that the above result can be recovered in the case of the preserved Lorentz symmetry. Therefore, it is not necessary to evaluate further the Casimir energy in this subsection. §.§ Space-like vector case In this subsection, we investigate the Casimir energy for the space-like vector case in the z-direction. We start the discussion by deriving the solution for the space-like vector case with u^(z)=(0,0,0,1). In this case, the Dirac equation (<ref>) gives two equations as follows (ω-m)χ^(z)_1=(-iσ^j∂_j+eyBσ^1+iλσ^3∂_3)χ^(z)_2, (ω+m)χ^(z)_2=(-iσ^j∂_j+eyBσ^1+iλσ^3∂_3)χ^(z)_1. Multiplying both sides of Eq. (<ref>) by (ω+m) and using Eq. (<ref>), we have the equation for the upper two-component spinor χ^(z)_1 as follows (ω^2-m^2)χ^(z)_1 = (-iσ^j∂_j+eyBσ^1+iλσ^3∂_3)^2χ^(z)_1 = [-∇^2+e^2y^2B^2-eB(2iy∂_1+σ^3)+2λ∂^2_3-λ^2∂^2_3]χ^(z)_1. 
One can propose the solution χ^(z)_1 as follows χ^(z)_1=e^ik_1 xe^ik_3 zf^(z)(y). Along the same procedure used in the previous subsection, substituting back Eq. (<ref>) into Eq. (<ref>) brings us to Hermite's equation in which we have the eigen energies given as ω^(z)_n, k_3=√(m^2+(1-λ)^2k^2_3+2 n |eB|). We find that the solution of the Dirac field confined between two parallel plates in the space-like vector case of z-direction for the right-moving field with positive value eB is given as follows ψ^(z)_k_1,n,k_3 ( r)=e^ik_1 xe^ik_3 z2π√(2ω^(z)_n, k_3(ω^(z)_n, k_3+m)) [C_1 [ (ω_n, k_3+m) F^(z)_k_1, n(y); 0; (1-λ) k_3F^(z)_k_1, n(y); √(2neB) F^(z)_k_1, n-1(y) ] + C_2 [ 0; (ω_nk_3+m) F^(z)_k_1, n-1(y); √(2neB) F^(z)_k_1, n(y); -(1-λ)k_3F^(z)_k_1, n-1(y) ]],   for n≥ 1 ψ^(z)_k_1,0, k_3 ( r)= e^ik_1 xe^ik_3 z2π√(2ω^(z)_0, k_3(ω^(z)_0, k_3+m)) C_0 F^(z)_k_1 0(y) [ ω^(z)_0, k_3+m; 0; (1-λ) k_3; 0 ] ,  for n=0, where F^(z)_k_1, n(y)=√((eB)^1/2 2^n n!π^1/2)exp[-eB 2(y+k_1 eB)^2]H_n[√(eB)(y+k_1 eB)], with the Hermite polynomial H_n(y). In a similar way, we can obtain the solution for the left-moving field as follows ψ^(+,z)_k_1,n,-k_3( r)= e^ik_1 xe^-ik_3 z2π√(2ω^(z)_n, k_3(ω^(z)_n, k_3+m)) [C̃_1 [ (ω^(z)_n, k_3+m) F^(z)_k_1, n(y); 0; -(1-λ)k_3F^(z)_k_1, n(y); √(2neB) F^(z)_k_1, n-1(y) ] + C̃_2 [ 0; (ω^(z)_n, k_3+m) F^(z)_k_1, n-1(y); √(2neB) F^(z)_k_1, n(y); (1-λ)k_3F^(z)_k_1, n-1(y) ]],  for n≥ 1 ψ^(+,z)_k_1,0,-k_3 ( r)= e^ik_1 xe^-ik_3 z2π√(2ω^(z)_0, k_3(ω^(z)_0, k_3+m)) C̃_0 F^(z)_k_1, 0(y) [ ω^(z)_0,k_3+m; 0; -(1-λ)k_3; 0 ] ,  for n=0, where the eigen energies ω^(z)_n,k_3 are given by Eq. (<ref>) (see Appendix <ref> for the detailed derivation). The complex coefficients in the above Dirac field can be determined by similar orthonormality conditions given in Eq. (<ref>). We next write the total spatial solution for the Dirac field inside the confinement area as follows ψ^(+,z)_k_1,n,k_3( r)=ψ^(+,z)_k_1,n,k_3( r)+ψ^(+,z)_k_1,n,-k_3( r). For non-zero complex coefficients C_1, C_2,C̃_1,C̃_2, we have the constraint of the momentum k_3 as follows mℓsin(k_3ℓ)+(1-λ)k_3 ℓcos (k_3ℓ)=0, for arbitrary Landau level n. One can see that the parameter λ affects the constraint while the magnetic field does not. The allowed momentum that satisfies the constraint (<ref>) is k_3l with l=0,1,2,3,⋯. The discretized eigenenergies of the field under the MIT boundary can be written as follows ω^(z)_n,l=√(m^2+(1-λ)^2 k^2_3l+2n|eB|). Below we will compute the Casimir energy of charged Dirac field under the presence of the MIT boundary. For this purpose, we write down the Hamiltonian density for the space-like vector case as follows, H^(z)=-Ψ̅^(z)[iγ^j∂_j-eγ^μ A_μ- m]Ψ^(z)=iΨ^(z)†∂_0Ψ^(z). The vacuum energy reads E_ Vac.=-|eB| L^2π∑_n=0^∞∑_l=1^∞ i_n √(m^2+(1-λ)^2(k'_3lℓ)^2+2n|eB|), where we have used the eigenenergies given in Eq. (<ref>) and k'_3ℓ(≡ k_3lℓ). From the above vacuum energy, one can see that its value is divergent. To solve the issue, we employ the Abel-Plana-like summation as follows <cit.> ∑_l=1^∞π f_n(k'_3l)(1-sin(2k'_3l) 2k'_3l)=-π b mf_n(0) 2 (b m+1)+∫_0^∞ dz f_n(z) - i∫_0^∞ dt f_n (it)-f_n(-it)t+b m t-b me^2t+1. From the momentum constraint in the space-like vector case (<ref>), the denominator of the left-hand side Eq. (<ref>) can be rewritten in the following form 1-sin(2k'_3l) 2k'_3l = 1 +b m k'^2_3l+(bm)^2, where b=ℓ (1-λ)^-1. Then, after applying the Abel-Plana-like summation to the vacuum energy, Eq. 
(<ref>) becomes E_ Vac.=-|eB|L^2π^2 b∑_n=0^∞ i_n [-π b m f_n(0) 2 (b m +1)+∫_0^∞ dq f_n(q) - i∫_0^∞ dt f_n (it)-f_n(-it)t+b m t-b me^2t+1], where the function f_n(q) is defined as f_n(q)= √(m^2b^2+q^2+2n|eB| b^2)(1 +b m q^2+(bm)^2). Next, one can decompose the first and second terms in the vacuum energy (<ref>) into two parts: (i) in the absence of the boundary conditions of two plates and (ii) in the presence of one plate. The latter part is irrelevant to our discussion because it does not contribute to the force. Then, the last term of Eq. (<ref>) can be understood as the Casimir energy E_ Cas.=i |eB|L^2π^2 b∑_n=0^∞ i_n ∫_0^∞ dt f_n (it)-f_n(-it)t+b m t-b me^2t+1. Using Eq. (<ref>) and introducing variable of t=bu, the Casimir energy reads E_ Cas.= -2 |e B| L^2 /π^2 ∑_n = 0^∞ i_n ∫_0^∞ d u √(u^2 - M_n^2 )( b ( u - m ) - m / (m + u)/(u + m) e^2 b u + u - m), where [ M_n = √(m^2 + 2 n |e B|). ] The range of integration of Eq. (<ref>) can be split into two intervals, i.e., [0,M_n] and [M_n,∞]. The integration result of the first interval vanishes while the second one remains. To further proceed with the Casimir energy, we next rewrite the following quantity as b (u - m) - m / (m + u)/(u + m) e^2 b u + u - m = - 1/2d/d uln( 1 + u - m/u + m e^- 2 b u), which leads the Casimir energy to E_ Cas. = |e B| L^2 /π^2 b∑_n = 0^∞ i_n ∫_0^∞ d y √(y^2 + 2 y b M_n)d/d yln( 1 + y + b (M_n - m)/y + b (M_n + m) e^- 2 (y + b M_n) ), where we have introduced a new variable as y = b u - b M_n. Performing integration by part for Eq. (<ref>), we finally find the simpler form of the Casimir energy as follows E_ Cas.=-|eB| L^2π^2 b∑^∞_n=0 i_n ∫^∞_0 dy (y+bM_n)(y^2+2byM_n)^-1/2ln( 1 + y + b (M_n - m)/y + b (M_n + m) e^- 2 (y + b M_n) ). We next numerically evaluate the expression of the Casimir energy given in Eq. (<ref>). The left panel of Fig. <ref> depicts the scaled Casimir energy as a function of the dimensionless parameter m'(≡ mℓ) for various values of the parameter λ=0,0.01,0.1 with a fixed parameter ℓ^2|eB|=2. From this figure, we find that the scaled Casimir energy converges to zero as the parameter m' becomes larger. The right panel of figure <ref> depicts the scaled Casimir energy as a function of the dimensionless parameter ℓ^2|eB| for a fixed parameter m'=1. From this figure, one can see that the scaled Casimir energy also converges to zero as the parameter ℓ^2 |eB| increases. Both panels of Fig. <ref> show that the parameter λ increases, the Casimir energy will increase and vice versa, as previously shown by Ref. <cit.> for the absence of the magnetic field. Figure <ref> plots the scaled Casimir energy as a function of the dimensionless parameter ℓ^2 |eB| for various values of parameter λ=0,0.01,0.1 with a fixed parameter m'=1. One can see that the increasing ℓ^2 |e B| leads to the converging of the Casimir energy to zero. In the rest of this part, we investigate the approximate cases of the Casimir energy. In the case of the weak magnetic field B→ 0, the above Casimir energy (<ref>) for an arbitrary m'(≡ mℓ) reduces to E_ Cas.≃ -L^2π^2 b^3∫^∞_bm dx x^2 ∫^∞_0 dv (v+1)1√(v(v+2))ln( 1+x(v+1)-bm x(v+1)+bm e^-2x(v+1)). To obtain the above expression, we have used the replacement of summation with integration, v=y/(b M_n), and x=bM_n. Taking the case of light mass m'≪1 for Eq. (<ref>), we recover the earlier result by Ref. <cit.> as follows E_ Cas.≃-7π^2 (1-λ)^3 L^2 2880 ℓ^3[1-120 m' 7π^2(1-λ)], where we have expanded the integrand up to the order of 𝒪(m') and omitted the higher ones. 
The first term corresponds to the Casimir energy in the massless case with the effect of the Lorentz violation while the second term corresponds to the correction part. In the case of the preserved Lorentz symmetry, λ=0, we recover the well-known Casimir energy of the massless fermion derived by Johnson <cit.>. To obtain the approximated result of Eq. (<ref>), one can also start from the general Casimir energy (<ref>) and take its light mass case m'≪ 1 for the arbitrary magnetic field as E_ Cas.≃ -|eB|L^2π^2b∑_n=0^∞ i_n∫^∞ _0dy [(y+b√(2 n e B))ln(1+e^-2(y+b√(2 n e B)))√(y^2+2y b√(2n e B))-2b me^-2(y+b√(2 n e B))√(y^2+2y b√(2n e B))(1+e^-2(y+b√(2 n e B)))]. Then, taking the limit of the weak magnetic field, the above expression reduces to Eq. (<ref>). In the case of heavy mass m'≫ 1, we find that the Casimir energy approximately reduces to E_ Cas.≃ - |e B|L^2(1-λ)^3/2 16 π^3/2ℓ√(m')∑_n=0^ ∞ i_ne^-2√( m'^2+2 n B') (1-λ), where we have expanded the integrand of Eq. (<ref>) up to the order of 𝒪(1/m') and omitted the higher ones. In the case of weak magnetic field B→ 0, the above Casimir energy (<ref>) reads E_ Cas.≃ - L^2 (1 - λ)^5 / 2√(m')/32 π^3 / 2ℓ^3 e^- 2 m'/(1 - λ). We can see that, in the case of heavy mass, the Casimir energy goes to zero as the increase of mass. We next investigate the Casimir energy in the case of the strong magnetic field ℓ^2 eB≫ 1. In this case, together with light mass m'≪ 1, the Casimir energy in Eq. (<ref>) approximately reduces to E_ Cas.≃ -|eB|L^2 (1-λ) 48 ℓ. Meanwhile for the case of strong magnetic field ℓ^2 |eB|≫ 1 and taking the limit of heavy mass m'≫ 1, the Casimir energy reads E_ Cas.≃-|eB|L^2 (1-λ)^3/2 32 π^3/2ℓ√(m')e^-2m' (1-λ). From the above expression, we note that the Casimir energy converges to zero as the increase of parameter m'. § CASIMIR PRESSURE In this section, we investigate the Casimir pressure for the spacelike vector case. It can be obtained from the Casimir energy (<ref>) by taking the derivative with respect to the plate's distance as P_Cas . = -1 L^2∂ E_ Cas.∂ℓ = - ∑_n = 0^∞ i_n ∫_0^∞ d y 1/(1 - λ) b^2 π^2 (y (2 b M_n + y))^3 / 2                     × e B y {2 b (b M_n + y) (2 b M_n + y) (b^2 M_n (M^2_n - m^2) + 2 b M^2_n y + y (m + M_n y))/b^2 (M^2_n - m^2) + 2 b M_n y + y^2 + e^2 (b M_n + y) (b (m + M_n) + y)^2.                                  . + (b^2 M^2_n + 3 b M_n y + y^2) ln( 1 + e^- 2 (b M_n + y) (b (- m + M_n) + y)/b (m + M_n) + y) }. We plot the behavior of the scaled Casimir pressure in Figs. <ref> and <ref>. In general, we can see that its behavior is similar to that of the Casimir energy. From the left panel of Fig. <ref>, one can see the scaled Casimir pressure converges to zero as the increases of parameter m' while from the right panel, it increases as the increases of ℓ^2 |eB|. These behaviors are supported by Fig. <ref>. Both panels of Fig. <ref> show that the Casimir pressure increases as the increases of parameter λ. We next investigate the Casimir pressure in the case of weak and strong magnetic fields. In the case of weak magnetic field B→ 0, the Casimir pressure (<ref>) approximately reduces to P_ Cas. ≃ - 1/(1 - λ) b^4 π^2∫_b m^∞ d x ∫_0^∞ d v x^2/v^1 / 2 (2 + v)^3 / 2 ×( 2 x (1 + v) (2 + v) (x^2 (1 + v)^2 + t b m - (b m)^2)/x^2 (1 +v)^2 - (b m)^2 + e^2 x (1 + v) (b m + x (1 + v))^2 + (1 + 3 v + v^2) ln( 1 + e^- 2 x (1 + v) (- b m + x (1 + v))/(b m + x (1 + v))) ). 
We further take light mass limit m'≪ 1 for the above expression, then we have P_ Cas.≃ -(1-λ)^2(7π^2 (1-λ)-80m') 960 ℓ^4, which covers the earlier result of Ref. <cit.>. As discussed in the previous section, to obtain the above expression, we can use the reverse way, namely, taking its light mass limit and then considering the weak magnetic field. The Casimir pressure for the case of light mass with the arbitrary magnetic field is approximately given as follows P_ Cas.≃ P^(0)_ Cas.+P^(1)_ Cas., where P^(0)_ Cas. is the Casimir pressure for the massless case explicitly given as P^(0)_ Cas. = - ∑_n = 0^∞ i_n ∫_0^∞ d y |e B| y/b^2 π^2 (1 - λ) ( y ( 2 b √(2 n e B) + y ) )^3 / 2 ×{2 b √(2 n e B)( 2 b √(2 n e B) + y ) ( b √(2 n e B) + y ) /( 1 + e^2 ( b √(2 n e B) + y )) + ( b^2 2 n e B + 3 b √(2 n e B) y + y^2 ) ln( 1 + e^- 2 ( b √(2 n e B) + y )) }, and P^(1)_ Cas. is the first order correction to the Casimir pressure 𝒪(m^') explicitly given as P^(1)_ Cas. = ∑_n = 0^∞ i_n ∫_0^∞ d y 2 |e B| y b √(2 n e B)( 1 + e^2 ( b √(2 n e B) + y ) (1 + 2 y) + 4 e^2 ( b √(2 n e B) + y ) b √(2 n e B)) b m/b^2 π^2 ( 1 + e^2 ( b √(2 n e B) + y ))^2 ( y ( y + 2 b √(2 n e B)) )^3 / 2 (1 - λ) . We next investigate the Casimir pressure (<ref>) in the case of heavy mass m'≫ 1. In this case, we have P_ Cas.≃ - |e B| √(m')/(1 - λ)^1 / 2 8 π^3 / 2 b^2∑_n = 0^∞ i_n e^- 2 √(m'^2 + 2 n e B), and with the limit of the weak magnetic field B→ 0, the above Casimir pressure approximately reduces to P_ Cas.≃ - (1 - λ)^5 / 2m'^3 / 2/16 π^3 / 2ℓ^4 e^- 2 m'/(1 - λ). Similar behavior to the Casimir energy (<ref>), one can see that the Casimir pressure in the limit of heavy mass (<ref>) converges to zero as increasing of the particle's mass. Based on the result of the Casimir pressure in the cases of light (<ref>) and heavy masses (<ref>), we will analyze the behavior in the strong magnetic field. Taking the limit of strong magnetic field ℓ^2 |eB|≫ 1 for Eq. (<ref>), the Casimir pressure approximately reduces to P_ Cas.≃ -|eB|L^2 (1-λ) 48 ℓ^2, while for Eq. (<ref>), we obtain P_ Cas.≃ -|eB|L^2 (1-λ)^3/2√(m') 16 π^3/2ℓ^2 e^-2m' (1-λ). One can also derive both above equations by taking the derivative of the Casimir energy Eqs. (<ref>) and (<ref>) with respect to the plate's distance. § SUMMARY We have studied the Casimir effect of a Lorentz-violating Dirac with a background uniform magnetic field. The Lorentz violation is described by two parameters: (i) λ , which determines the intensity of the violation and (ii) vector u^μ, which determines the direction of the violation. In the present study, we investigated two vector cases, namely, timelike and spacelike vector cases. For the spacelike vector case, we only discussed the z-direction. The purpose of the study is to find the effect of the Lorentz violation parameter λ together with the presence of the magnetic field in the behavior of the Casimir energy as well as its pressure. We used the boundary condition from the MIT bag model <cit.> to represent the property of the plates. From our derivation, we find that for the timelike vector case, the magnetic field and the Lorentz violating parameter do not affect the structure of the momentum constraint while for the spacelike vector case, only Lorentz violating parameter appears. We noted that the vacuum energy under the MIT boundary condition is divergent. 
Using Abel-Plana like summation <cit.>, we can extract this vacuum energy into three main parts, namely, vacuum energy in the absence of the boundary condition, the vacuum energy in the present of single boundary condition that does not relevant to the Casimir effect, and the rest term that refers to the Casimir energy. We can derive the Casimir energy by subtracting the vacuum energy in the presence of the boundary condition from that in the absence of one. The Lorentz violation for the timelike vector case does not affect the structure of the Casimir energy as well as its pressure while for the spacelike vector case, the violation affects it. We also found that the magnetic field has an effect on the Casimir energy and the pressure for both timelike and spacelike vector cases. We have demonstrated the behavior of the scaled Casimir energy and the pressure as a function of mass, parameter λ, and magnetic field. For the fixed parameter λ and magnetic field, the scaled Casimir energy and the pressure converge to zero as the increase of mass (see left panel of Figs. <ref> and <ref>). For fixed parameter λ and mass, the scaled Casimir energy and the pressure converge to zero as the increasing of the magnetic field (see right panel of Figs. <ref> and <ref>). We also found that the increase of the parameter λ leads to the increase of the Casimir energy and the pressure, as has been pointed out by Ref. <cit.>. For future work, it is interesting to discuss the thermal effect in a similar setup to our present work (c.f., Ref. <cit.> for the scalar field). It is also interesting to study a similar setup under the general boundary, for example, chiral MIT boundary conditions <cit.>. § ACKNOWLEDGMENTS A. R. was supported by the National Research and Innovation Agency (BRIN) Indonesia, through the Post-Doctoral Program. § DETAIL DERIVATION OF CONSTRAINT FOR MOMENTUM In this section, we provide the complementary derivation for the momentum constraint. Applying the boundary condition from the MIT bag model (<ref>) to the solution of the Dirac equation, we have two equations as follows iσ^3χ_2|_z=0-χ_1|_z=0=0, iσ^3χ_2|_z=ℓ+χ_1|_z=ℓ=0, where we have used n^(0)_μ=(0,0,0,1) and n^(ℓ)_μ=(0,0,0,-1) at the first z=0 and second plates z=ℓ, respectively. Then, in a more explicit expression, we have four equations boundary conditions as follows iχ_21|_z=0-χ_11|_z=0=0, iχ_22|_z=0+χ_12|_z=0=0, iχ_21|_z=ℓ+χ_11|_z=ℓ=0, iχ_22|_z=ℓ-χ_12|_z=ℓ=0, where we have decomposed the two-component spinors χ_1 and χ_2 as χ_1= [ χ_11; χ_12 ], χ_2= [ χ_21; χ_22 ]. The boundary conditions of Eqs. (<ref>)-(<ref>) can be simultaneously written in the form of multiplication between two matrices as follows [ P_11 P_12; P_21 P_22 ][ C_0; C̃_0 ] =0,   for n=0, and [ Q_11 Q_12 Q_13 Q_14; Q_21 Q_22 Q_23 Q_24; Q_31 Q_32 Q_33 Q_34; Q_41 Q_42 Q_43 Q_44 ][ C_1; C_2; C̃_1; C̃_2 ] =0,   for n≥ 1, where the matrix elements are given by P^(t)_11=ik_3-((1+λ)ω^(t)_0k_3+m), P^(t)_12=-ik_3-((1+λ)ω^(t)_0k_3+m), P^(t)_21=[ik_3+((1+λ)ω^(t)_0k_3+m)]e^ik_3ℓ, P^(t)_22=[-ik_3+((1+λ)ω^(t)_0k_3+m)]e^-ik_3ℓ, Q^(t)_11=- Q^(t)_22=ik_3-((1+λ)ω^(t)_nk_3+m), Q^(t)_12= Q^(t)_14= Q^(t)_21= Q^(t)_23=i√(2neB), Q^(t)_13=- Q^(t)_24=-ik_3-((1+λ)ω^(t)_nk_3+m), Q^(t)_31=- Q^(t)_42=[ik_3+((1+λ)ω^(t)_nk_3+m)]e^ik_3ℓ, Q^(t)_32= Q^(t)_41=i√(2neB)e^ik_3ℓ, Q^(t)_34= Q^(z)_43=i√(2neB)e^-ik_3ℓ, Q^(t)_33=- Q^(t)_44=[-ik_3+((1+λ)ω^(t)_nk_3+m)]e^-ik_3ℓ. 
and P^(z)_11=i(1-λ)k_3-(ω^(z)_0k_3+m), P^(z)_12=-i(1-λ)k_3-(ω^(z)_0k_3+m), P^(z)_21=[i(1-λ)k_3+(ω^(z)_0k_3+m)]e^ik_3ℓ, P^(z)_22=[-i(1-λ)k_3+(ω^(z)_0k_3+m)]e^-ik_3ℓ, Q^(z)_11=- Q^(z)_22=i(1-λ)k_3-(ω^(z)_nk_3+m), Q^(z)_12= Q^(z)_14= Q^(z)_21= Q^(z)_23=i√(2neB), Q^(z)_13=- Q^(z)_24=-i(1-λ)k_3-(ω^(z)_nk_3+m), Q^(z)_31=- Q^(z)_42=[i(1-λ)k_3+(ω^(z)_nk_3+m)]e^ik_3ℓ, Q^(z)_32= Q^(z)_41=i√(2neB)e^ik_3ℓ, Q^(z)_34= Q^(z)_43=i√(2neB)e^-ik_3ℓ, Q^(z)_33=- Q^(z)_44=[-i(1-λ)k_3+(ω^(z)_nk_3+m)]e^-ik_3ℓ, for timelike and spacelike in the z-direction vector cases, respectively. For arbitrary non-zero complex coefficients C_0,C̃_0, C_1, C_2,C̃_1,C̃_2 requires the vanishing of the determinant of 2× 2 matrix of Eq. (<ref>) and 4× 4 matrices of Eq. (<ref>) that lead the constraint for momentum k_3. § NEGATIVE-ENERGY SOLUTIONS §.§ Timelike vector case The negative energy solution for the right-moving field component is as follows ψ^(-,t)_k_1,n,k_3 ( r) = e^-ik_1 xe^-ik_3 z2π√(2(1+λ)ω^(t)_n, k_3((1+λ) ω^(t)_n, k_3+m)) ×[C̃_1 [ k_3f^(t)_-k_1 n(y); -√(2neB) f^(t)_-k_1 n-1(y); ((1+λ)ω^(t)_nk_3+m) f^(t)_-k_1 n(y); 0 ] + C̃_2 [ -√(2neB) f^(t)_-k_1 n(y); -k_3f^(t)_-k_1 n-1(y); 0; ((1+λ)ω^(t)_nk_3+m) f^(t)_-k_1 n-1(y) ]],   for n≥ 1 and ψ^(-,t)_k_1,0,k_3 ( r)= e^-ik_1 xe^-ik_3 z2π√(2(1+λ)ω^(t)_0, k_3((1+λ) ω^(t)_0, k_3+m)) C̃_0 f^(t)_-k_1, 0(y) [ k_3; 0; (1+λ)ω^(t)_0,k_3+m; 0 ],   for n=0. The negative energy solution for the left-moving field component is as follows ψ^(-,t)_k_1,n,-k_3 ( r) = e^-ik_1 xe^ik_3 z2π√(2(1+λ)ω^(t)_n, k_3((1+λ) ω^(t)_n, k_3+m)) ×[ C_1 [ -k_3f^(t)_-k_1 n(y); -√(2neB) f^(t)_-k_1 n-1(y); ((1+λ)ω^(t)_nk_3+m) f^(t)_-k_1 n(y); 0 ] + C_2 [ -√(2neB) f^(t)_-k_1 n(y); k_3f^(t)_-k_1 n-1(y); 0; ((1+λ)ω^(t)_nk_3+m) f^(t)_-k_1 n-1(y) ]],   for n≥ 1 and ψ^(-,t)_k_1,0,-k_3 ( r)= e^-ik_1 xe^ik_3 z2π√(2(1+λ)ω^(t)_0, k_3((1+λ) ω^(t)_0, k_3+m)) C_0 f^(t)_-k_1, 0(y) [ -k_3; 0; (1+λ)ω^(t)_0,k_3+m; 0 ],   for n=0. The total spatial solution inside the confinement area is given by the linear combination between the left- and right-moving field components as follows ψ^(-,t)_k_1,n, l( r)=ψ^(-,t)_k_1,n,k_3 l( r)+ψ^(-,t)_k_1,n,-k_3 l( r), where we use k_3 l to represent the allowed momentum in the system. §.§ Spacelike vector case (z-direction) The negative energy solutions for the right-moving field component are given as follows ψ^(-,z)_k_1,n,k_3 ( r)= e^-ik_1 xe^-ik_3 z2π√(2ω^(z)_n, k_3(ω^(z)_n, k_3+m)) [C̃_1 [ (1-λ) k_3F^(z)_-k_1, n(y); -√(2neB) F^(z)_-k_1, n-1(y); (ω^(z)_n,k_3+m) F^(z)_-k_1, n(y); 0 ] + C̃_2 [ -√(2neB) F^(z)_-k_1, n(y); -(1-λ)k_3F^(z)_-k_1, n-1(y); 0; (ω^(z)_nk_3+m) F^(z)_-k_1, n-1(y) ]],   for n≥ 1 and ψ^(-,z)_k_1,0,k_3 ( r)= e^-ik_1 xe^-ik_3 z2π√(2ω^(z)_0, k_3(ω^(z)_0, k_3+m)) C̃_0 F^(z)_-k_1, 0(y) [ (1-λ)k_3; 0; ω^(z)_0, k_3+m; 0 ] ,   for n=0, where f^(t)_-k_1, n(y)=√((eB)^1/2 2^n n!π^1/2)exp[-eB 2(y-k_1 eB)^2]H_n[√(eB)(y-k_1 eB)]. The negative energy solutions for the left-moving field component are given as follows ψ^(-,z)_k_1,n,-k_3 ( r)= e^-ik_1 xe^ik_3 z2π√(2ω^(z)_n, k_3(ω^(z)_n, k_3+m)) [ C_1 [ -(1-λ) k_3F^(z)_-k_1, n(y); -√(2neB) F^(z)_-k_1, n-1(y); (ω^(z)_n,k_3+m) F^(z)_-k_1, n(y); 0 ] + C_2 [ -√(2neB) F^(z)_-k_1, n(y); (1-λ)k_3F^(z)_-k_1, n-1(y); 0; (ω^(z)_nk_3+m) F^(z)_-k_1, n-1(y) ]],   for n≥ 1 and ψ^(-,z)_k_1,0,-k_3 ( r)= e^-ik_1 xe^-ik_3 z2π√(2ω^(z)_0, k_3(ω^(z)_0, k_3+m)) C_0 F^(z)_-k_1 0(y) [ -(1-λ)k_3; 0; ω^(z)_0, k_3+m; 0 ] ,   for n=0. 
The total spatial solution inside the confinement area is given by the linear combination between the left- and right-moving field components as follows ψ^(-,z)_k_1,n, l( r)=ψ^(-,z)_k_1,n,k_3 l( r)+ψ^(-,z)_k_1,n,-k_3 l( r), where we use k_3 l to represent the allowed momentum in the system. 99 Casimir1948 H. B. G. Casimir, Kon. Ned. Akad. Wetensch. Proc. 51, 793 (1948). Sparnaay1958 M. J. Sparnaay, Physica 24, 751 (1958). Lamoreaux97 S. K. Lamoreaux, Phys. Rev. Lett. 78, 5 (1997), Phys. Rev. Lett. 81, 5475 (1998) (E). Mohideen:1998iz U. Mohideen and A. Roy, Phys. Rev. Lett. 81, 4549 (1998). Roy:1999dx A. Roy, C. Y. Lin and U. Mohideen, Phys. Rev. D 60, 111101 (1999). Bressi:2002fr G. Bressi, G. Carugno, R. Onofrio and G. Ruoso, Phys. Rev. Lett. 88, 041804 (2002). Belluci2009 S. Bellucci and A. A. Saharian Phys. Rev. D 79, 085019 (2009). Hassan:2022hcb Z. Hassan, S. Ghosh, P. K. Sahoo and K. Bamba, Eur. Phys. J. C 82, 1116 (2022). Grushin2021 A. G. Grushin and A. Cortijo, Phys. Rev. Lett. 106, 020403 (2021). Grushin2011 A. G. Grushin, P. Rodriguez-Lopez, and A. Cortijo, Phys. Rev. B 84, 045119 (2011). Onofrio:2006mq R. Onofrio, New J. Phys. 8, 237 (2006). Bordag:2001qi M. Bordag, U. Mohideen and V. M. Mostepanenko, Phys. Rept. 353, 1-205 (2001). Ambjorn1983 J. Ambjorn and S. Wolfram, Annals Phys. 147, 1 (1983). Chodos:1974je A. Chodos, R. L. Jaffe, K. Johnson, C. B. Thorn and V. F. Weisskopf, Phys. Rev. D 9, 3471 (1974). Chodos:1974pn A. Chodos, R. L. Jaffe, K. Johnson and C. B. Thorn, Phys. Rev. D 10, 2599 (1974). Johnson:1975zp K. Johnson, Acta Phys. Polon. B 6, 865 (1975). Rohim:2022mri A. Rohim, A. S. Adam and K. Yamamoto, Prog. Theor. Exp. Phys. 2023, 013B05 (2023). Lutken:1983hm C. A. Lutken and F. Ravndal, J. Phys. G 10, 123 (1984). Sitenko:2014kza Y. A. Sitenko, Phys. Rev. D 91, 085012 (2015). Cougo-Pinto:1998jwo M. V. Cougo-Pinto, C. Farina and A. C. Tort, Conf. Proc. C 9809142, 235 (1999). Ostrowski:2005rm M. Ostrowski, Acta Phys. Polon. B 37, 1753 (2006). Elizalde:2002kb E. Elizalde, F. C. Santos and A. C. Tort, J. Phys. A 35, 7403 (2002). Cougo-Pinto:1998jun M. V. Cougo-Pinto, C. Farina, M. R. Negrao and A. C. Tort, J. Phys. A 32, 4457 (1999). Frank:2006ww M. Frank and I. Turan, Phys. Rev. D 74, 033016 (2006). Erdas:2013jga A. Erdas and K. P. Seltzer, Phys. Rev. D 88, 105007 (2013). Martin-Ruiz:2016ijc A. Martín-Ruiz and C. A. Escobar, Phys. Rev. D 94, 076010 (2016). Cruz:2017kfo M. B. Cruz, E. R. Bezerra de Mello and A. Yu. Petrov, Phys. Rev. D 96, 045019 (2017). Erdas:2020ilo A. Erdas, Int. J. Mod. Phys. A 35, 2050209 (2020). Escobar-Ruiz:2021dxi A. M. Escobar-Ruiz, A. Martín-Ruiz, E. C. A. and R. Linares, Int. J. Mod. Phys. A 36, 2150168 (2021). Blasone:2018nfy M. Blasone, G. Lambiase, L. Petruzziello and A. Stabile, Eur. Phys. J. C 78, no.11, 976 (2018). Escobar:2020pes C. A. Escobar, L. Medel and A. Martín-Ruiz, Phys. Rev. D 101, 095011 (2020). Cruz:2018thz M. B. Cruz, E. R. Bezerra de Mello and A. Y. Petrov, Phys. Rev. D 99, 085012 (2019). Kostelecky:1988zi V. A. Kostelecky and S. Samuel, Phys. Rev. D 39, 683 (1989). Colladay:1996iz D. Colladay and V. A. Kostelecky, Phys. Rev. D 55, 6760 (1997). Colladay:1998fq D. Colladay and V. A. Kostelecky, Phys. Rev. D 58, 116002 (1998). Kostelecky:2003fs V. A. Kostelecky, Phys. Rev. D 69, 105009 (2004). Kostelecky:1994rn V. A. Kostelecky and R. Potting, Phys. Rev. D 51, 3923-3935 (1995). Colladay:1994cj D. Colladay and V. A. Kostelecky, Phys. Lett. B 344, 259 (1995). Colladay:1995qb D. Colladay and V. A. Kostelecky, Phys. Rev. 
D 52, 6224 (1995). Schwingenheuer1995 B. Schwingenheuer et al. Phys. Rev. Lett. 74, 4376 (1995). Gibbons1997 L. K. Gibbons et al. Phys. Rev. D 55, 6625 (1997). NA31:1990xkc R. Carosi et al. Phys. Lett. B 237, 303 (1990). Kostelecky:1997mh V. A. Kostelecky, Phys. Rev. Lett. 80, 1818 (1998). Schwinberg1981 P.B. Schwinberg, R.S. Van Dyck, H.G. Dehmelt, Physics Letters A 81, 2 (1981). VanDyck1986 R. S. Van Dyck, Jr., P. B. Schwinberg, and H. G. Dehmelt, Phys. Rev. D 34, 722 (1986). Brown1986 L. S. Brown and G. Gabrielse Rev. Mod. Phys. 58, 233 (1986). VanDyck1987 R. S. Van Dyck, Jr., P. B. Schwinberg, and H. G. Dehmelt Phys. Rev. Lett. 59, 26 (1987) Bluhm:1997ci R. Bluhm, V. A. Kostelecky and N. Russell, Phys. Rev. Lett. 79, 1432 (1997). Bluhm:1997qb R. Bluhm, V. A. Kostelecky and N. Russell, Phys. Rev. D 57, 3932 (1998). Bertolami:1996cq O. Bertolami, D. Colladay, V. A. Kostelecky and R. Potting, Phys. Lett. B 395, 178 (1997). Romeo:2000wt A. Romeo and A. A. Saharian, J. Phys. A 35, 1297 (2002). Bhattacharya:2007vz K. Bhattacharya, arXiv:0705.4275. Bhattacharya:1999bm K. Bhattacharya and P. B. Pal, arXiv:hep-ph/9911498. AFG P. Alberto, C. Fiolhais, and V. M. S. Gil, Eur. J. Phys. 17, 19 (1996). Bellucci:2009hh S. Bellucci and A. A. Saharian, Phys. Rev. D 80, 105003 (2009). Erdas:2021xvv A. Erdas, Int. J. Mod. Phys. A 36, 2150155 (2021).
http://arxiv.org/abs/2307.05167v1
20230711105345
A Non-Custodial Wallet for CBDC: Design Challenges and Opportunities
[ "Ryan Bowler", "Chris Speed", "Geoffrey Goodell", "Joe Revans" ]
cs.CY
[ "cs.CY" ]
§ ABSTRACT Central Bank Digital Currency (CBDC) is a novel form of money that could be issued and regulated by central banks, offering benefits such as programmability, security, and privacy. However, the design of a CBDC system presents numerous technical and social challenges. This paper presents the design and prototype of a non-custodial wallet, a device that enables users to store and spend CBDC in various contexts. To address the challenges of designing a CBDC system, we conducted a series of workshops with internal and external stakeholders, using methods such as storytelling, metaphors, and provotypes to communicate CBDC concepts, elicit user feedback and critique, and incorporate normative values into the technical design. We derived basic guidelines for designing CBDC systems that balance technical and social aspects, and reflect user needs and values. Our paper contributes to the CBDC discourse by demonstrating a practical example of how CBDC could be used in everyday life and by highlighting the importance of a user-centred approach. § INTRODUCTION Money is deeply embedded in many areas of society, connecting social, economic, and political discourse <cit.>. It is considered a social technology <cit.> used by millions of people at any given time. With the advent of the digital economy and the decreasing use of money in its tangible state of cash <cit.> in favour of digital representations stored on physical devices and accessible through software, our understanding of money continually evolves and adapts. The rise of cryptocurrency heralds a significant shift in the popular perception and use of money, with thousands of digital coins being produced for various applications <cit.> and acting as unregulated forms of money on trading platforms. Governments and central banks worldwide are looking to harness technologies and concepts similar to those used in cryptocurrency to create a sovereign digital ‘coin’ known as Central Bank Digital Currency (CBDC). Just as central banks have managed the distribution and redemption of money in the form of cash for centuries, CBDC offers the chance to do the same with digital assets. Several countries are currently trialling, testing, or discussing CBDC as a form of money within their economies. This means that millions of people could soon be interacting with these coins as a legitimate and institutionally backed form of currency. The Bank of England and the HM Treasury are some of the latest to engage in CBDC discussions <cit.>. CBDC provides an opportunity for designers to incorporate properties of established forms such as the anonymity of cash or the digital convenience of debit - things that people value. However, it is still up for debate whether people will use these ‘coins’. Some versions of CBDC may only be used on the wholesale market and not by everyday people. Because of the complexity intrinsic to retail forms of money <cit.>, infrastructural changes and diverse human perspectives will be crucial in the design of CBDC if it is to be used on a daily basis for retail transactions. Retail CBDC requires an understanding of the diverse needs of potential users. However, the discourse surrounding this digital ‘coin’ often overlooks the user angle <cit.> and focuses mainly on its technical details <cit.>. 
All scenarios of CBDC need to be understood and methods, technologies, and systems must be in place to store, validate, and operate its usage within varying transactional scenarios. This means the design corpus around this new form of currency will be complex and must encompass technical infrastructure along with user narratives. To contribute to the emergent CBDC discourse and provoke interest in user-focussed CBDC design, we explored ways people would come to use this digital ‘coin’ using a series of design methods. This resulted in the creation of our own non-custodial CBDC wallet - a software/hardware device that stores and allows the transaction of CBDC within a retail setting. We ran workshops to understand complex CBDC systems, align our team’s understanding, refine the journey of a CBDC asset, and explore human factors from the perspectives of different users. We used methods such as storytelling, metaphoric language, user journey mapping, and `provotypes' to aid in the creation of a series of non-custodial wallets. Each method had its benefits and downsides. By matching different wallet features, we arrived at our final prototype wallet which we termed ‘The Minimum Standalone Wallet’. In this paper, we characterise our approach, the challenges we encountered, and our overall design output. § NAVIGATING A SHIFTING PAYMENTS LANDSCAPE The concept of socio-technical systems allows designers to understand the interconnected workings of society, from individuals to institutions, and how these relate to technical systems <cit.>. Money fits into this social technology as it cannot be fully understood without considering the social relationships from which it emerges <cit.>. A socio-technical system evolves alongside the changing social, cultural, and political needs of the context in which it is designed <cit.>. As society changes, so does the way money is perceived and used. This is demonstrated by the decrease in the use and acceptance of established forms of money, especially cash. This trend is amplified by multiple factors ranging from social and technological changes <cit.>, <cit.>, to the COVID-19 pandemic <cit.>, to the emergence of systems for digital payments that offer individual ownership of money and provide greater financial inclusion for specific individuals <cit.>. Physical acceptance of cash by retailers has also changed. After the first lockdown uplift, 42 per cent of people in the UK experienced interactions with retailers who no longer accept cash payments <cit.>, a trend that continues. Further hampering cash, specifically in the UK, is the fact that the government has not proposed any laws to prohibit the non-acceptance of cash <cit.>. This becomes potentially a cautionary issue, as designers have sh`own that cash remains important to many people. This includes older generations who may prefer using cash and have created routines around it <cit.>, as well as those for whom the physical cost of using digital forms is prohibitive due to poor design that exacerbates an already present digital divide <cit.>. Because of these limitations inherent to digital forms of money, no matter how money evolves in the future, physical forms of money like cash are likely to remain pivotal as a tangible asset that upon which people rely. For instance, cash possesses many valuable properties. It can be obtained instantaneously, making it preferable over cashless payments that can take hours or even days to process <cit.>. 
Cash can also be easily converted into other assets and serves as both a sovereign currency and a store of value. Its distribution is monitored by central banks, and its creation is governed in a way that supports its legitimacy. Central banks manage cash using a variety of mechanisms, ranging from monitoring commercial banks to literally printing money so that people can purchase goods <cit.>. However, cash also has flaws that can be addressed by digital forms of money. For example, cash is not suitable for payments at a distance, and it has has size restrictions that limit the amounts that can be transacted at a single time and place. Digital forms of money are sometimes bound by fewer restrictions on its transport and use <cit.>. Cash is visual, making it challenging to use if a person cannot visually denote its value, currency, or amount <cit.>. Cash can be vulnerable to counterfeiting, physical theft, regulatory evasion, and lack of visibility into its transactions, all of which can support criminal activity. However, it is not clear whether removing banknotes, especially those of high value, might not eliminate crime but only change its form, such as a shift toward an increase in institutional corruption or the use of commodity money, such as precious metals or stones, or criminal activities <cit.>. Limitations inherent to cash have been presented by central banks across the world as a justification for pursuing CBDC. With 38 per cent of global countries either piloting CBDC or expressing public interest in it <cit.>, CBDC has the potential to change our concept of money in unprecedented ways across the globe. Nevertheless, the properties of cash, whether positive or negative, remain valuable to many people. As CBDC may become more prevalent in the future, designers are exploring ways to ensure continued accessibility to cash, particularly for those who rely on it or to support economic continuity in the event of infrastructural disruptions, such as power outages. A variety of potential designs have been proposed, including those that allow cash-like functionality without cash distribution infrastructure. One proposal is to allow users to self-print cash that has features compatible with digital payment systems, such as taking a picture to turn the cash into digital forms <cit.>. Consumer affordances of cash, such as anonymity, are becoming important considerations in the design of CBDC <cit.>. Yet, any design attempt is also thwarted by the theoretical unknowns of exactly what a CBDC system will come to be. Will it be issued by central banks, or privately, like Meta’s Libra coin, which is underpinned by financial assets <cit.>? Will CBDC be distributed as a token by banks, or will people have accounts with the central bank <cit.>? Will CBDC be used in wholesale markets, retail markets, or both? Designing a retail CBDC would be significantly more challenging due to the complexity of factors associated with retail usage <cit.>. Currently, several banks are working on CBDC as both a cross-border payment system and a consumer-focussed system <cit.>, making CBDC not only a country-specific narrative but a global one. Though a wide range of topics are covered in CBDC discourse, gaps in research focus have emerged. One example is the majority of focus on technical aspects while user-focussed approaches have been based on theory rather than practical methodologies <cit.>. 
Researchers have employed design thinking to account for user requirements, for instance, inclusive design <cit.> and a suite of methodologies to explore user requirements <cit.>. However, designers do not really have a design framework or set of methodologies for approaching design in money narratives <cit.>, especially in the context of CBDC, making user-centric design around money and CBDC discourse a relatively novel concept and space of exploration. Overall, CBDC represents a novel form of money that proponents argue will integrate with established forms of payment, such as cash, even if use of those forms of payment continues to decrease. This suggestion demonstrates that the monetary landscape remains unknown, continues to evolve, and provides an opportunity for designers to play a significant role. As proposed CBDC architectures continue to evolve, it is increasingly important for designers to bridge the research gap and seek to address the many questions that remain, including the value that people place on current forms of money and how those forms of money might fit into a world with CBDC. Designers can also work on formulating frameworks, methods, and user-centric approaches for CBDC systems and designs. § DESIGNING A NON-CUSTODIAL WALLET §.§ Challenges in creating a CBDC system Many questions persist in the debate about the right set of requirements for CBDC, and equally just as many design opportunities have presented themselves. We contribute to this development by offering our own approach to the design of a CBDC system. This approach took many steps, iterations, and methods. All of which we discuss in the following sections. Developing a new CBDC system, from conceptualisation to design, presented several unique challenges. One of the main challenges was figuring out how to incorporate normative values such as social, cultural, or political factors into the technical design requirements. When designing for something as complex as money, it’s important to consider it as a social relation. This means that designing its tangible and intangible properties involves more than just the form in which it is presented. It also involves designing the values that people associate with what money does, creates, or upholds <cit.>. Design researchers have developed methods for designing socio-technical systems in specific contexts <cit.>. However, their findings do not fully translate to designing for concepts like money and values. This results in a lack of framework and tools for generating shared understandings of values between designers and stakeholders <cit.>. This also means that there is little guidance available, opening up the opportunity for an exploratory design approach to money and CBDC systems. To address this challenge, we formulated our own guidelines for creating value within our CBDC system based on principles such as a need for diverse payment options, individual asset ownership, and private digital payments. These principles were derived from incorporating key features of tangible cash, such as anonymity, security, not requiring online access <cit.>, and its fungibility, accessibility, non-discrimination, and direct ownership. This approach seemed appropriate because the value people and society place on the usage of cash are firmly established <cit.>. A second challenge that arose was addressing the difficulty in understanding CBDC. 
CBDC is a complex topic and not always accessible for everyday people due to its novelty, its wide range of technical terms, and the complexity of the current payment infrastructures with which it would integrate. This problem of understanding complex technologies is not new and is referred to as ‘technical debt’ in software disciplines <cit.>. To develop a shared understanding and vision for the system among designers and stakeholders who were unfamiliar with CBDC discourse, we employed techniques inspired by design methods that use metaphors to bolster creativity with non-experts <cit.> and to communicate technical terms through metaphorical analogies in disciplines like computer science <cit.>. Metaphors helped us bridge pre-existing known concepts with the technical systems of CBDC. For example, we used metaphors such as “USO assets are like a sheet of infinitely extensible paper”, “blind signatures are like signing an opaque, carbon-lined envelope”, and “CBDC assets are hot when you create them, so you need to wait for them to cool down before spending”. These metaphors and allegories were then reused and reworked during further workshop activities with internal and external stakeholders. The final challenge was to effectively find a way to demonstrate the system’s technical specifications, as well as its benefits and drawbacks, to potential users and stakeholders. Compiling all these challenge points together allowed for the CBDC design work to be separated into three strands: * meta-design, which focuses on designing working practices, knowledge exchange, and stakeholder engagement <cit.>; * technical systems design, which focuses on designing how the system should function and be built; and * interaction design, which focuses on designing how the system interfaces with the wider world. These three strands provide a framework for understanding the integrity of the project by examining how each one interacts within our CBDC system concept. Each strand was not approached in a linear fashion, but rather in different non-linear steps, with each influencing and reacting to the outcomes of the others. The overall binding challenge points, as well as the considerations and choices that emerged, created a starting point for understanding the prerequisites of our CBDC system’s technical requirements. This led us to return to our inspirations of values that emerge from cash as a point of reference, bolstering our original requirements. * Ownership: The CBDC system must be token-based (not account-based). The tokens must be government-issued (i.e., an analogue to banknotes and coins) and unforgeable by design <cit.>. * possession and control: Users must have the option to store tokens directly, outside the context of accounts, in non-custodial wallets. Non-custodial wallets must not be identifiable, issued by third parties, or registered, and they must not require trusted computing or certified hardware. The system must be private by design for consumers. * Privacy: The identity of somebody who sends money must not be linked to the recipient of the transaction, the value of the transaction, and the transaction metadata (e.g., time, location, service providers, etc.). * Legal: The system must be compatible with anti-money laundering requirements for recipients of money, and authorities should have a way to identify the recipient of money, e.g. a vendor of consumer goods, in most transactions. 
This requirement introduces a limitation on whether peer-to-peer transactions are allowed within the system, and generally implies that recipients will not have the same degree of privacy as payers. The requirement for partial transparency implies that some facilitators of payments, for example a bank that allows vendors to deposit money received from customers, would be subject to regulations and auditable as a means of managing risk. * Maintaining value: The system must support the two-tiered banking system, wherein central banks issue money and private-sector banks make risky investments. We intend for the design to respect the overall structure of the existing financial system, with only the changes that are needed to support digital cash. The CBDC system should interoperate with today's set of institutions and infrastructure and should not be seen as a wholesale replacement for them. Clearing, settlement, and other operations should be performed by regulated, private sector organisations such as banks and other money services businesses, although the system should be overseen by a central bank, as is commonplace among payment systems operating within many jurisdictions today. These finalised requirements provided guidance on the directions we could take in formulating our CBDC system through our design practice. We began to realise that certain technical components were required within our design output. For example, blind signatures, which provide privacy by design with measurable anonymity, fit our design guidelines. Chaum (1983) suggests that blind signatures could be a method to balance privacy and prevent criminality in digital payment systems <cit.>. In this approach, the content of a transaction is disguised from the person signing it, allowing transactions to be validated and legality upheld while maintaining the privacy of the payer. (A short illustrative sketch of this mechanism is given below.) Another component could be a public permissioned distributed ledger technology (DLT) system for decentralised transaction processing, with DLT nodes operated by independent payment service providers. A DLT system is like a big shared notebook that keeps track of transactions. Anyone can look at it, but only certain people are allowed to write in it. This means that different people and companies can work together to process transactions without needing a single entity to be in charge. Finally, unforgeable, stateful, oblivious (USO) assets could be used to avoid requiring issuers to keep track of the integrity or ownership of individual assets. A USO asset keeps track of its own history and can demonstrate its own legitimacy, without issuers or financial institutions having to keep track of the coin’s status as it passes from one person to another. Complexity aside, these technologies provide features that support adherence to our guidelines and provide a starting point for envisioning the CBDC system that we would come to design <cit.>. §.§ Human-Centred Central Bank Digital Currency While CBDC design proposals, including ours, have received increasing attention for their technical characteristics, we believe it is crucial to also focus on the interaction aspects of our CBDC system and adopt a human-centred perspective. We created a workshop called “Human-centred CBDC” based on the Socio-Technical Walkthrough method <cit.>. This method helps groups understand complex systems by discussing a diagram of the system step-by-step. We held the workshop twice with a total of 15 participants and made changes between sessions based on feedback. 
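Before describing the workshop in more detail, it may help to make the blind-signature component from the previous subsection concrete. The following is a minimal, illustrative Python sketch of a Chaum-style RSA blind signature, not the implementation used in our prototype: the textbook-sized key, the notion of a token "serial", and all function and variable names are assumptions made purely for this example, and a real system would rely on a vetted cryptographic library with full-size parameters.

# Illustrative toy sketch of a Chaum-style RSA blind signature, of the kind
# that could back an anonymised token withdrawal.  Hypothetical names and
# deliberately tiny, insecure key sizes; for explanation only.
import hashlib
import math
import secrets

# Toy RSA key pair for the signer ("the mint").
p, q = 61, 53
n = p * q                          # 3233, public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # 2753, private exponent

def digest(message: bytes) -> int:
    """Hash a token serial number into the RSA group (toy sized)."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

# Wallet side: create a fresh token serial and blind it before sending.
serial = secrets.token_bytes(16)
while True:
    r = secrets.randbelow(n - 2) + 2      # blinding factor, coprime with n
    if math.gcd(r, n) == 1:
        break
blinded = (digest(serial) * pow(r, e, n)) % n   # what the signer actually sees

# Signer side: sign the blinded value without learning the serial.
blind_sig = pow(blinded, d, n)

# Wallet side: remove the blinding factor to obtain a signature on the serial.
sig = (blind_sig * pow(r, -1, n)) % n

# Anyone can verify the unblinded token, but the signer cannot link it
# back to the blinded request it signed.
assert pow(sig, e, n) == digest(serial)
print("token signature verified without revealing the serial to the signer")

The point of the sketch is the property described above: the signer attests to a token without ever seeing the value being attested, which is what allows a withdrawal request to remain unlinkable from the later spending of that token.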
The workshop had four main goals: to help team members get to know each other, to align everyone’s understanding of the CBDC system we were designing, to refine the journey of a CBDC asset in our system, and to explore the human factors of the system from different users’ perspectives. In the workshop, we utilised Miro to turn our current work-in-progress which focussed on the technical requirements of our system into an interactive experience. Participants engaged within our proposed technical system by taking on the role of assigned stakeholder roles and followed the process of creating, validating, and spending a CBDC asset using interactive elements (see Figure <ref>). After each step of the process, the group engaged in a discussion to reflect on their experiences. This discussion allowed participants to gain a deeper understanding of the technical system and build their knowledge and perspectives with each step. The discussion also aided in addressing the issue we had in presenting difficult-to-understand CBDC systems. Discussions focussed mainly on clarifying the mechanics of the process; critical reflection on the process; and the discussion of alternative approaches and perspectives. This first workshop, however, revealed gaps, such as the need for a visual representation of how our system would be used in everyday scenarios. Visual narrative-driven tools, like comics, are powerful tools for facilitating communication and encouraging discussion, as they can convey complex ideas in an understandable medium. Researchers have effectively used comics for data visualization <cit.>, code comprehension <cit.>, and dissemination of qualitative findings <cit.>. Through visual storytelling, comics allow complex concepts to be presented in a familiar format. This made the narrative-driven ability of comics to be a good solution to ground our technical system into a representational story that was potentially relatable for everyday people, such as buying a coffee at a shop (see Figure <ref>). Applying the story-driven irritation in the workshop directed the conversation towards human-centric issues, such as who would mint the CBDC tokens, how the role of a central bank would change in relation to our proposal, and what the benefits would be to vendors transacting within our CBDC system. These lines of enquiry were later used to refine the design and communication of our proposed CBDC system. The workshop was successful in achieving its objectives and enhancing participants’ comprehension of the CBDC asset lifecycle and the roles of stakeholders within it. The activity elicited questions and sparked discussions about the design of the system, and above all, it allowed us to iterate and deepen our understanding of both technical and user requirements within our proposed CBDC system, grounding participants in the user-centred and social-technical scenarios they might encounter themselves if they were to use such a CBDC system. The design team utilised the outcomes to refine the technical design of the non-custodial wallet as well as the system as a whole. §.§ Storytelling as a Method to Explore CBDC We continued with our goal to develop our CBDC system from a human-centred approach by extending our participant reach; we invited a total of 22 participants to another workshop titled ‘Stories of Central Bank Digital Currency.’ This workshop saw public sector, private sector, and academic organizations come together to engage in a multi-stakeholder evaluation of our CBDC system. 
The main goal of this workshop was to gather feedback on our CBDC system proposal, with a specific focus on human factors and end-user journeys. We gathered user requirements for devices or interfaces that would allow end-users to interact with the system. As with the first set of workshops, an overall goal of identifying potential directions for the future development of the system was also central to this workshop, which was broken into two stages, ‘Mapping present-day transaction journeys’ and ‘Provotypes to elicit user insights and critique'. Overall, the workshop centred on the theme of storytelling and narrative design. Storytelling more broadly has proven to be an essential tool in design, facilitating an understanding of how technology can support self-reflection around intense experiences such as grief <cit.> and how stories can be conveyed through emerging technological mediums like virtual reality <cit.>. It can also serve as a valuable instrument for comprehending and engaging individuals in existing socio-technical systems, like empowering youth to have a voice in the design of their environments <cit.>. The workshops consisted of presentations and participatory activities that utilised storytelling to discuss the topics of cash, CBDC, and the ongoing work of the project. §.§ Mapping Present-day Transaction Journeys The term `user journey' is used to describe the steps a user takes when interacting with software, hardware, or any other product. It can be challenging to identify a user’s needs right away, but user journey mapping can help. This creative method, as described by Endmann and Keßner (2016), allows for a quick understanding of user processes and helps prioritise design concepts <cit.>. Many researchers have used journey mapping to gain insight into a user’s experience in specific scenarios, such as using a library <cit.>. This method offers a 3-dimensional view of the user’s journey, providing more depth than other methods like personas <cit.>. We utilised the User Journey Mapping method by dividing participants into groups and guiding them through a step-by-step activity focussed on present-day payment scenarios. They created their User Journey Maps using Post-it notes(see Figure <ref>). Each group then presented their journey map to the other participants. This step of the task would later prepare participants to critically evaluate our CBDC proposal by breaking down any pre-existing complexities of payment scenarios encountered in daily life. While creating their journey maps, participants exchanged knowledge with one another, which they could then use to formulate questions about current payment scenarios and our CBDC proposal. A total of six user journey maps were created by participants that explored current payment scenarios in various contexts: a bureau de change (group 1), a supermarket (group 2), a restaurant (group 3), an international transfer (group 4), receiving funds from a charity (group 5), and a boiler repair job (group 6). The journey maps created by the participants illustrated the intricate ways in which payments are entangled with the social, political, economic, and, of course, personal circumstances of end-users. For example, group 2 presented a scenario in which a user at a supermarket splits their shopping basket into two forms of payment: certain goods to be purchased on their credit card and others, like cigarettes or lottery tickets, to be paid for with cash. 
The reason for this, as one participant explained, is “because they don’t want their partner to know quite what’s going on.” Group 1’s journey followed the various routes that foreign currency can take to end up in the register of a bureau de change, including the complex relationship between currency importation and international relations. Group 5’s journey considered several methods of donating to charity, including the use of automated tools such as round-ups on debit card transactions. At the end of this activity, participants were able to discuss complex user scenarios around payment options in different contexts. This provided valuable user insight and perspectives. This exploration bolstered our understanding of our CBDC proposal and the everyday decisions people must make around forms of money or ways to use their money. §.§ Provotypes to Elicit User Insights and Critique After completing the user journey mapping task, participants embarked on the next stage of the workshop. They gathered around two ‘provotypes’—a portmanteau of ‘provocation’ and ‘prototypes’ <cit.>—including a large-format print of the CBDC user journey comic and a 3D-printed model of the debit card-style CBDC hardware wallet featured in the comic. These tangible representations allowed participants to apply their insights from previous explorations of existing payment systems to our concepts. In smaller groups, they began a self-led discussion about the proposed system, capturing their conversation on Post-It notes (see Figure <ref>) and applying them to relevant areas of the journey of Alice, the person in the comic. This activity provided an opportunity for participants to offer feedback on both the technical aspects of the CBDC system design and the narrative approach used to convey it. Participants were encouraged to apply their insights from the first activity to the future user journey presented to them, considering how the social aspects of the CBDC system might impact its use. Drawing on insights from this workshop, we began to understand some additional considerations and interesting situations for which our system might need to adapt. Participants considered CBDC as a type of payment to be used within given scenarios, such as for low- to medium-value everyday purchases or to enable government uplift payments to people without bank accounts. Trust was also a key consideration, with questions raised about how a central bank could sign off on the creation of anonymised tokens and why a user would trust their hardware wallet or the app they use to make transactions. Complex notions of inclusion and exclusion were also discussed, with concerns that the highly technical nature of the system could alienate people already excluded from digital services. Conversely, it could also allow those without bank accounts to make and receive digital payments, expanding concepts of financial inclusion. Other important considerations included the system’s capacity to function offline or on local networks, the visual appeal and user-friendliness of the hardware wallets, and the measures in place to mitigate loss in the event that a hardware wallet is lost or stolen. As a whole, the workshops provided valuable feedback and user insights that helped shape the design of our CBDC system. Key outcomes were identified from running these user-centric workshops. Firstly, it became apparent that clearer definitions were needed for how our CBDC would integrate into the UK economy. 
This included specific definitions of the types of transactions for which the CBDC system is best suited and considerations around the types of public and private organisations responsible for designing, building, and maintaining the physical and digital infrastructure required to run the system. Lastly, clarity was needed in our framing of the benefits and drawbacks of the system. Communication is essential and must be understandable by both experts and non-experts. Some users just want to pay and move on with their day without considering technical specifications. However, if they do want to understand the privacy of the system and build trust that it will work as advertised, then this information must be accessible to all levels of users. Ultimately, we are designing a socio-technological output; therefore, framing will be key for people to want to use the system. Our user-focussed approach to CBDC design has emphasized the importance of considering the user’s perspective. By using provotypes to integrate the technical aspects of the CBDC system into a narrative-driven story, we have been able to elicit deep and useful design insights that can shape the technical requirements of our system. This approach has also helped us identify flaws in our framing, leading us to iterate on our workshop approaches and continue refining the technical requirements for our proposed CBDC system. §.§ Towards creating a non-custodial wallet After gathering valuable insights from our users on what should inform our CBDC system, we are now faced with the task of delivering this system. To inspire our CBDC system design, we once again turn to cash, this time by looking at the concept of a wallet. Wallets carry cash and debit in the form of plastic and have become electronic, having been associated with the notion of money for decades. Therefore, to begin our exploration of how a user will come to use their CBDC, we examine the two types of these wallets: custodial and non-custodial. Custodial wallets are a range of digital escrow wallets that store crypto assets externally, outside of a user's device. Non-custodial wallets, on the other hand, can be either software-based or hardware-based and can satisfy different preferences related to security and privacy <cit.>. However, custodial wallets are not easy to use and often require a level of knowledge. Their user design is typically oriented towards experts, with a complex registration process and a high potential for errors when paying in cryptocurrencies <cit.>. Both experts and non-experts can face financial loss due to the relative lack of user-friendliness of custodial wallets, which could be mitigated by better mimicking traditional banking methods and providing pre-education on their use <cit.>. Designing a user-friendly experience for novel concepts like CBDC or crypto requires balancing functionality with user preferences such as security and risk minimisation, without overwhelming the user with technical details. Just as cash represents trust, a CBDC must also instil trust in its users. Traditional payments are backed by institutional protections that help establish trust, while decentralised crypto assets lack these protections, making trust harder to establish <cit.> As such, the design of a CBDC wallet must provide uncompromising security, privacy, and identity protection while also supporting the regulations and protections similar to those that apply to centralised payment methods. 
This would allow for trust and give users equal options to choose their preferred form of payment. A person using CBDC should not need service relationships to use their money. For this reason, we turned to the non-custodial design, which provides hardware and software options for wallet design and allows for better implementation of user-friendliness in its core design. Our design concerns follow core principles: the wallet must store CBDC information, perform cryptographic functions, and have the ability to send and receive data. To ensure privacy and security, wallets must not be identifiable and must not require registration or trusted computing. We considered several wallet design options and approaches that can satisfy our requirements to varying degrees. These were represented by four concept wallets: * The “I can’t believe it’s not a phone” wallet: a lightweight independent phone-like device with no wireless capability that runs a light OS on top of the core cryptographic functions required of the wallet (see Figure <ref>). * The “card plus” wallet: a device with no user interface and must be connected to a companion device such as a phone (see Figure <ref>). * The “final check” wallet: a mix of the two aforementioned wallets, with a simple numerical user interfaces that allows the user to verify the transaction, as it can show transaction information and can be physically plugged into other devices like a point-of-sale machine (see Figure <ref>). * The “minimum standalone” wallet: a direct upgrade of the “final check” wallet, although with a more adept user interface and input, numerical pad, and the ability to verify transactions through buttons and visual representations of transaction data information (see Figure <ref>). The development of a collection of non-custodial wallets provided us with new insights into the technical and hardware requirements that aimed to balance ease of use, transactional security, convenience, and affordability. Using a non-custodial wallet, especially a hardware version, incurs an automatic cost associated with acquiring the physical hardware. That is why the presence or absence of component parts, such as an LCD screen, can make a big difference in the cost ratio, with a concomitant impact on user experience. Experimentation with these design requirements showed that The minimum standalone wallet (see Figure <ref>) was the closest match to our requirements that we could find. Although it is more costly than the “card plus” wallet, its security properties and user-friendliness made for an overall better non-custodial wallet, thus providing a suitable proof of concept. However, this choice should not be seen as limiting the potential design space for wallets. We firmly believe that a variety of viable wallet designs is necessary to meet the diverse needs of users in different payment scenarios. A simulation of the wallet was created using the frontend JavaScript framework, wherein we were able to simulate the functionality required to view the contents of the wallet, withdraw assets (see Figure <ref>), and make a payment (see Figure <ref>). In summary, our proposed CBDC system is designed to exist within a diverse ecosystem of payment scenarios, stakeholders, and worldviews. We have focussed on over-the-counter payments as our core use case, but we plan to explore additional scenarios such as online payments, peer-to-peer transactions, and high-value payments. 
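Returning briefly to the simulation described above: the prototype itself was built with a frontend JavaScript framework, but purely to illustrate the shape of the wallet logic it exercised, the following Python sketch models the three interactions listed there, namely viewing the wallet's contents, withdrawing assets, and making a payment. The class name, the representation of assets as bare integer tokens, and the largest-first selection rule are simplifying assumptions for the example and do not describe the actual prototype.

# Illustrative sketch of the wallet interactions exercised in the simulation:
# viewing the balance, withdrawing assets, and making a payment.  Token values
# and method names are hypothetical; the real system would hold signed CBDC
# assets rather than bare integers.
from dataclasses import dataclass, field

@dataclass
class NonCustodialWallet:
    # Tokens are stored only on the device itself: there is no account held
    # with a custodian, which is what makes the wallet non-custodial.
    tokens: list[int] = field(default_factory=list)

    def view(self) -> int:
        """Show what the wallet currently holds (the 'view contents' screen)."""
        return sum(self.tokens)

    def withdraw(self, new_tokens: list[int]) -> None:
        """Add freshly issued tokens, e.g. obtained via a blind-signed withdrawal."""
        self.tokens.extend(new_tokens)

    def pay(self, amount: int) -> list[int]:
        """Hand over tokens covering `amount`, selecting largest first for simplicity."""
        selected, total = [], 0
        for token in sorted(self.tokens, reverse=True):
            if total >= amount:
                break
            selected.append(token)
            total += token
        if total < amount:
            raise ValueError("insufficient funds in wallet")
        for token in selected:
            self.tokens.remove(token)
        return selected   # change-making is not modelled in this sketch

# Example: withdraw three tokens, check the balance, then pay for a coffee.
wallet = NonCustodialWallet()
wallet.withdraw([5, 2, 1])
assert wallet.view() == 8
spent = wallet.pay(3)        # hands over the 5-unit token
print("paid with tokens:", spent, "remaining balance:", wallet.view())

Even at this level of caricature, the sketch reflects the design choice that matters for a non-custodial wallet: the token list lives only on the device, so spending is a local operation on assets the user directly possesses rather than an instruction sent to a custodian.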
Our next step is to have our non-custodial wallet working in a simulated back-end environment, to research further use cases and implications of our proposed system. While we believe that a non-custodial hardware wallet is an important option for CBDC users, we also anticipate the development of alternative wallet solutions by market actors. These may include smartphone applications and account-based services, which may offer greater convenience at the expense of privacy and control. It is important for users to be aware of these trade-offs when choosing the right wallets for their intended use cases. § LIMITATIONS Our approach to the CBDC non-custodial wallet is just one of many possible approaches. We have yet to test our final prototype with users to confirm that it meets our wallet requirements in real-world scenarios with diverse users. Our prototype was built using off-the-shelf hardware, such as the Pinephone.[https://www.pine64.org/] However, we suspect that future non-custodial hardware will be custom-made, potentially featuring variations in factors such as affordability and security. Despite our thorough and user-centric approach, there is still room for improvement. For instance, co-design methods could be used in the decision-making process for our proof-of-concept wallet. Of course, these are limitations based on our research parameters. Future work could take a truly bottom-up approach all the way to testing and deployment. Other approaches may include smartphone applications that allow users to store CBDC tokens locally or account-based services that allow storage on the cloud. However, these approaches represent a shift from our privacy-by-design paradigm to a privacy-by-trust paradigm. Users must be aware of the trade-off between security and convenience. As CBDC comes to fruition, what we currently deem as non-secure may advance and change. Designing for CBDC is currently based on proof-of-concept implementations and working within a present space of knowledge. Some future variables and trade-offs are still unknown, making it a fertile design space, albeit one with many limitations. Other questions persist around the limitations in our design, such as the question of what happens if the device is stolen or lost. Although we have defined some proposed approaches for handing such scenarios, such as backups, selective disclosure of private information, and insurance services, these are beyond the scope of this article. Another important consideration is: Will merchants accept our proposal? These are questions that still remain and require future work to answer, and we anticipate that engaging more users and merchants will uncover answers to these questions. Overall, we encourage designers to pursue not only the challenge of engaging with non-custodial wallets, but also other aspects of delivering CBDC and the systems that will ultimately come to underpin them. § DESIGN TAKEAWAYS Cash provided a crucial starting point for designing a CBDC system, as there was no pre-existing method or framework to determine the necessary requirements for such a system. While there are many prospective architectures for CBDC systems that can be produced, each with its own unique set of methods and approaches, we believe that the insights we gained from our exploration could be valuable for others looking to enter this space. 
We provide basic guidelines for designers to follow or adapt, as part of the process of establishing a starting point for their own CBDC systems: * Use current existing concepts of money, especially cash, as a starting point for designing a CBDC system, considering its established values and practices. Since there was no pre-existing method or framework for designing a CBDC system, the insights gained from exploring cash proved to be invaluable in determining the necessary requirements for our own system. * Incorporate the opinions of diverse stakeholders, both within and outside the project team, to ensure an inclusive design process. Understand that CBDC discourse is heavily oriented towards expert knowledge. Use narrative-driven tools like comics or alternative methods to convey complex CBDC or payment information to engage diverse expert and non-expert stakeholders. Always aspire to establish an inclusive design process, to ensure that more people can engage with confidence despite limited knowledge. * Help people envision how they currently use money and how CBDC could fit into their existing practices and preferences. * Design CBDC systems to work in tandem with other forms of money and services, giving users flexible autonomy in what they purchase, how they pay, and whether such payments should offer anonymity to payers or not. * Acknowledge that introducing new options can result in some existing options being taken away. To ensure flexibility, a designer might look to balance CBDC system design by instead focusing design efforts towards keeping established forms of money from disappearing, to uphold the diversity of payment options. * Designers might take an international approach to designing a CBDC system to ensure that it is compatible with diverse monetary practices and preferences. * Design CBDC systems to be inclusive and accessible by considering diverse options for accessing CBDC assets, for instance allowing digital deposits without the need for a bank account and balancing affordability, design, user experience, accessible hardware, and infrastructure whenever possible. Explore CBDC from both inclusion and exclusion angles to inspire novel technologies, services, infrastructures, or even tangible methods for accessing CBDC. § CONCLUSION In this paper, we explored the design of a non-custodial wallet, a device that enables users to store and spend Central Bank Digital Currency (CBDC) in various payment scenarios. We drew on established values and practices of current forms of money, such as cash, to inform our design. We incorporated the opinions of diverse stakeholders, both within and outside the project team, to ensure an inclusive design process. We used narrative-driven tools, such as storytelling and metaphors, to make CBDC more accessible and comprehensible for users. We also elicited user feedback and critique on our CBDC system proposal by using provotypes. Our research revealed some basic guidelines for designing CBDC systems, such as designing for compatibility with other forms of money, ensuring accessibility and inclusion, and balancing the technical and social aspects of CBDC. We also highlighted the importance of protecting established forms of money like cash, as a way to maintain flexible payment options and prevent their decline in usage. We demonstrated the innovative potential of CBDC system design to protect the privacy and security of users while ensuring user-friendliness and giving more people more choices in their payment options. 
We encourage other designers to explore this novel opportunity to critically consider the design of money, which has the potential to shape everyday life.